07-30-2014 06:48 PM
For example, with the yarn.keytab:
klist -k yarn.keytab
Then I used 'scp' to copy the yarn.keytab file to the host datanode2, and kinit also worked there with the same method:
kinit -kt yarn.keytab yarn/datanode1@HADOOP.COM
Does that mean the keytab file can be used on any host?
07-30-2014 06:54 PM
Being able to kinit with the keytab doesn't mean you could start services on that host as yarn and accept client requests. The principal yarn/datanode1@HADOOP.COM is bound to the host datanode1, so it is effectively invalid on datanode2: if you attempted to use it for service-to-service interaction there, authentication would fail because the forward/reverse DNS lookup mechanics would not match the host named in the principal.
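To make the host binding concrete, here is an illustrative sketch: a service principal has the form service/host@REALM, and the embedded host must match the machine's resolved FQDN for service-to-service authentication to work. The principal below is the one from the example above; the parsing is just demonstration shell, not anything Kerberos itself runs.

```shell
# Illustrative only: pull the host component out of a service principal
# and compare it with this machine's fully qualified hostname.
principal="yarn/datanode1@HADOOP.COM"

# Strip the leading "service/" and the trailing "@REALM".
host_part=$(echo "$principal" | sed 's|^[^/]*/||; s|@.*||')
echo "principal host: $host_part"

# On a real cluster the comparison point is `hostname -f`;
# run on datanode2, this check fails for a datanode1 principal.
if [ "$host_part" = "$(hostname -f)" ]; then
    echo "host matches principal"
else
    echo "host does NOT match principal"
fi
```

This is why each service host normally gets its own principal (yarn/datanode1, yarn/datanode2, ...) even though they share the same service name.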
07-30-2014 06:56 PM
A really handy walkthrough on Kerberos authentication... I still had to read it through about 10 times to get it. Don't be offended by the title; it's just this blogger's way of approaching things.
11-08-2014 02:58 AM
OK, I understand. Now I have another question.
Does each client host where I run MapReduce or other programs that need to connect to HDFS or HBase need the Kerberos client installed and an /etc/krb5.conf?
11-11-2014 02:45 PM
When enabling Kerberos, you would install krb5-workstation and krb5-libs on each host of the cluster, and have the same /etc/krb5.conf present on them all so they are configured consistently. You would also have user accounts present in the Linux OS for the cluster users who will be authenticating via Kerberos. MapReduce/Hadoop core requires those user accounts to be present for isolation of runtime jobs by user.
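As a rough sketch of that setup, the snippet below writes a minimal krb5.conf and shows where the client packages come in. The realm HADOOP.COM is taken from the thread; the KDC hostname kdc1.hadoop.com is a placeholder I made up, not something from this discussion.

```shell
# Sketch: install the Kerberos client packages (RHEL/CentOS style),
# then distribute one identical krb5.conf to every host.
# yum install -y krb5-workstation krb5-libs

# Write a minimal config; kdc1.hadoop.com is a hypothetical KDC host.
cat > /tmp/krb5.conf <<'EOF'
[libdefaults]
    default_realm = HADOOP.COM

[realms]
    HADOOP.COM = {
        kdc = kdc1.hadoop.com
        admin_server = kdc1.hadoop.com
    }

[domain_realm]
    .hadoop.com = HADOOP.COM
    hadoop.com = HADOOP.COM
EOF

# The same file would then be copied to /etc/krb5.conf on every
# cluster host, e.g.: scp /tmp/krb5.conf root@datanode2:/etc/krb5.conf
grep default_realm /tmp/krb5.conf
```

The point is that every host in the cluster shares one realm definition, so any of them can reach the KDC the same way.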