Member since
04-14-2016
16
Posts
4
Kudos Received
0
Solutions
05-08-2018
09:55 PM
Adding entries for all nodes to /etc/hosts on every host fixed the problem. Thanks!
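For anyone hitting the same error: a minimal sketch of what each node's /etc/hosts ended up containing. Every cluster node gets a line on every host, so forward and reverse lookups both resolve without relying on DNS (the IPs and hostnames below are illustrative, not the actual cluster's):

```
10.169.110.20   namenodehost1.local   namenodehost1
10.169.110.21   namenodehost2.local   namenodehost2
10.169.110.22   zk1.local             zk1
```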
05-08-2018
01:29 PM
I also have:

export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true
  -Dzookeeper.sasl.client.username=zookeeper
  -Djava.security.auth.login.config=/usr/hdp/2.6.0.3-8/hadoop/conf/secure/hdfs_jaas.conf
  -Dzookeeper.sasl.clientconfig=Client $HADOOP_ZKFC_OPTS"

hdfs_jaas.conf:

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/nn.service.keytab"
principal="nn/namenodehost1.local@MYREALM.FS";
};
05-08-2018
01:23 PM
/etc/krb5.conf:
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = MYREALM.FS
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
MYREALM.FS = {
admin_server = mykdc.local
kdc = mykdc.local
}
Looking at the hadoop-hdfs-zkfc log file, I am trying to figure out where ZKFC gets its ZK connection string from:

2018-05-07 16:12:49,965 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server 10.169.110.22/10.169.110.22:2181. Will attempt to SASL-authenticate using Login Context section 'Client'.
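For reference, ZKFC builds its ZooKeeper connection string from ha.zookeeper.quorum in core-site.xml. If the quorum entries (or reverse DNS for them) resolve to bare IPs, the client logs the IP, as in the line above, and then asks the KDC for zookeeper/&lt;ip&gt;@REALM, a principal that doesn't exist. A sketch of the property, with illustrative hostnames:

```xml
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.local:2181,zk2.local:2181,zk3.local:2181</value>
</property>
```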
05-08-2018
01:19 PM
HDP-2.6.0.3 / Ambari 2.6.1.5 / CentOS 7.4 (64-bit). Each node's /etc/hosts contains only the FQDN entry for that host. DNS is enabled (forward only). Cluster nodes: 3 ZK + 2 NN (HA) + Ranger (KMS) + 3 DN. The KDC was set up following the steps at https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/install-kdc.html. All principals were created in the KDC during the Kerberos enabling process; they match what was in the Excel file I'd downloaded.

/var/kerberos/krb5kdc/kadm5.acl:
*/admin@MYREALM.FS *
05-08-2018
11:53 AM
I went ahead and rebuilt everything from scratch and I'm still having the same issue. Any idea where ZKFC gets its ZK connection string from, besides ha.zookeeper.quorum?
05-02-2018
06:39 PM
Hi, I had enabled Kerberos on my cluster without realizing that the hostname was never included in /etc/hosts. I went back and added it, and also removed and re-enabled Kerberos. I still cannot get rid of this error:

nn/namenodehost1.local@MYREALM.FS for zookeeper/10.169.110.22@MYREALM.FS, Server not found in Kerberos database

It is as if the _HOST variable doesn't get translated to the host's FQDN. Any help is really appreciated. Sadek
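To make the failure mode concrete, here is a small Python sketch (not Hadoop's actual code) of how the _HOST placeholder is substituted in a Kerberos principal pattern: the placeholder is replaced with the node's canonical, lower-cased FQDN. When the address can't be resolved to a hostname, Hadoop ends up using the bare IP instead, which is why the error above shows zookeeper/10.169.110.22@MYREALM.FS, a principal the KDC has never heard of.

```python
import socket

def resolve_principal(principal_pattern, hostname=None):
    """Substitute the _HOST placeholder in a principal pattern
    (e.g. 'nn/_HOST@REALM') with the canonical, lower-cased FQDN.
    Patterns without the placeholder are returned unchanged."""
    parts = principal_pattern.split("/")
    if len(parts) != 2 or parts[1].split("@")[0] != "_HOST":
        return principal_pattern  # nothing to substitute
    if hostname is None:
        hostname = socket.getfqdn()  # falls back to whatever resolves locally
    return principal_pattern.replace("_HOST", hostname.lower())

print(resolve_principal("nn/_HOST@MYREALM.FS", "NameNodeHost1.local"))
# -> nn/namenodehost1.local@MYREALM.FS
```

If /etc/hosts (or DNS) can't map the ZooKeeper server's IP back to zk1.local, there is no FQDN to substitute, and the service-ticket request goes out with the IP in the principal.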
Labels:
- Apache Zookeeper
- Kerberos
11-09-2016
06:26 PM
The provisioning script should have created it, but hadn't. Thanks!
11-08-2016
06:59 PM
Hi, I am having issues creating an encryption zone as the HDFS superuser (hdfs). I can list keys just fine, but when I execute the createZone command I get:

[hdfs@myhost ~]$ hdfs crypto -createZone -keyName ezkey2 -path /enc_zone2/
16/11/08 18:46:41 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.createEncryptionZone over sm04.atlnp1/10.121.41.198:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): cannot find /enc_zone2

Any idea what may be causing this? Thanks, Sadek
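The FileNotFoundException is the clue: hdfs crypto -createZone does not create the target directory, it only marks an existing (empty) directory as an encryption zone. A minimal command sequence, assuming the same key and path as above:

```
# the target directory must exist (and be empty) before createZone
hdfs dfs -mkdir /enc_zone2
hdfs crypto -createZone -keyName ezkey2 -path /enc_zone2
hdfs crypto -listZones   # verify the zone was registered
```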
Labels:
- Apache Hadoop
- Apache Ranger
04-18-2016
05:24 PM
@Rahul Pathak That didn't quite fix everything, though. I was trying to put a file into an (HDFS) encrypted zone and got the following exception:

put: java.util.concurrent.ExecutionException: org.apache.hadoop.security.authorize.AuthorizationException: User:nn not allowed to do 'GENERATE_EEK' on 'mykey'.

The nn principal should map to the 'hdfs' OS user according to the entry in hadoop.security.auth_to_local: RULE:[2:$1@$0](nn@MYREALM.COM)s/.*/hdfs/ This happens even after adding similar properties for the hdfs user:

hadoop.kms.proxyuser.hdfs.users=*
hadoop.kms.proxyuser.hdfs.hosts=*

and allowing all permissions to the 'hdfs' user in the KMS policy.
04-18-2016
11:10 AM
That did it!