Created on 05-13-2016 09:04 PM - edited 09-16-2022 03:19 AM
Here is what I found about configuring YARN and HDFS to access ZooKeeper using SASL. The YARN configuration makes perfect sense. However, is there something missing for HDFS, such as configuring its own hdfs_jaas.conf? What about Hive?
Created 06-13-2016 06:13 AM
@ScipioTheYounger, I expect this is similar to another question you asked.
I'll repeat the same information here for simplicity.
Change ha.zookeeper.acl in core-site.xml to this:
<property>
  <name>ha.zookeeper.acl</name>
  <value>sasl:nn:rwcda</value>
</property>
Then, you'd want to run the following to reformat ZooKeeper for NameNode HA, which would reinitialize the znode used by NameNode HA to coordinate automatic failover.
hdfs zkfc -formatZK -force
The tricky part, as you noticed, is getting that command to authenticate with SASL. The ZooKeeper and SASL guide in the Apache documentation discusses implementation and configuration of SASL in ZooKeeper in detail. For this particular command, you can use this procedure.
First, create a JAAS configuration file at /etc/hadoop/conf/hdfs_jaas.conf:
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/nn.service.keytab"
  principal="nn/<HOST>@EXAMPLE.COM";
};
Note that the <HOST> value will be different depending on the NameNode hostnames in your environment. Likewise, you'll need to change EXAMPLE.COM to the correct Kerberos realm.
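If you're unsure of the exact principal string to put in hdfs_jaas.conf, a quick sanity check (my own suggestion, not part of the original steps, and assuming the keytab path shown above) is to list the principals stored in the NameNode keytab:

# List the principals in the NameNode keytab; the nn/<HOST>@REALM entry shown
# here is the value to use for the principal setting in hdfs_jaas.conf.
klist -kt /etc/security/keytabs/nn.service.keytab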
Next, edit /etc/hadoop/conf/hadoop-env.sh, and add the following line to enable SASL when running the zkfc command.
export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client ${HADOOP_ZKFC_OPTS}"
Then, run the "hdfs zkfc -formatZK -force" command.
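As a rough way to confirm the new ACL took effect, here is a sketch of my own (assuming the default ha.zookeeper.parent-znode of /hadoop-ha and a placeholder ZooKeeper host/port): reuse the same JAAS file so the ZooKeeper CLI authenticates as the nn principal, then inspect the HA znode.

# Make zkCli.sh authenticate with the same JAAS configuration used by ZKFC.
export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf"
# Inspect the ACL on the HA parent znode; <zk-host> is a placeholder for one of
# your ZooKeeper servers.
zkCli.sh -server <zk-host>:2181 getAcl /hadoop-ha

If the reformat picked up the new ha.zookeeper.acl value, the ACL should come back as 'sasl,'nn with cdrwa permissions.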