
Ambari cluster with Kerberos - wrong principal expected

I have successfully enabled Kerberos on an Ambari-managed cluster, using the wizard to generate all the principals. However, the DataNodes do not connect to the NameNode, for the following reason:

2016-07-08 16:10:54,753 INFO ipc.Server (Server.java:doRead(891)) - Socket Reader #1 for port 8020: readAndProcess from client 172.30.52.137 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected client Kerberos principal is dn/172.30.52.137@HADOOPXXX.COM]

The NameNode expects principals containing IP addresses instead of hostnames... I have checked the keytab and it was generated properly:

Keytab name: FILE:dn.service.keytab
KVNO Principal
---- --------------------------------------------------------------------------
1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM

Any hints?
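For context, the server reverse-resolves the connecting client's IP to a hostname before comparing it against the Kerberos principal; when no PTR record (or /etc/hosts entry) exists, the raw IP ends up in the expected principal. A minimal sketch of that comparison logic (not Hadoop's actual code; the resolver table is a stand-in for DNS/hosts lookup):

```python
def expected_principal(client_ip, realm, ptr_records):
    """Build the expected 'dn/<host>@<REALM>' principal for a client.

    Mimics, in spirit, the server-side check: reverse-resolve the
    client IP; if no PTR/hosts entry exists, fall back to the raw IP,
    which then never matches the hostname-based keytab principal.
    """
    host = ptr_records.get(client_ip, client_ip)  # fallback: the IP itself
    return f"dn/{host}@{realm}"

# With a PTR (or /etc/hosts) entry, the principals line up:
ptr = {"172.30.52.137": "hadoop-poc2-02.int.na.prodxxx.com"}
print(expected_principal("172.30.52.137", "HADOOPXXX.COM", ptr))
# -> dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM

# Without one, the IP leaks into the expected principal, as in the log:
print(expected_principal("172.30.52.137", "HADOOPXXX.COM", {}))
# -> dn/172.30.52.137@HADOOPXXX.COM
```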

1 ACCEPTED SOLUTION

So the issue was very likely caused by a reverse lookup being performed on the IP address. We do not have PTR records, and /etc/hosts contained information about the current host only. I added records for all hosts of the cluster to /etc/hosts, and it works now.
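Using the IP and FQDN from the log above, each node's /etc/hosts would gain one line per cluster host, along the lines of (the short alias is illustrative):

```
172.30.52.137   hadoop-poc2-02.int.na.prodxxx.com   hadoop-poc2-02
```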

Please note that I have dfs.namenode.datanode.registration.ip-hostname-check set to false in custom hdfs-site.xml.
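In hdfs-site.xml, that setting looks like this (a config sketch; the property name and value are taken from the note above):

```xml
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```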


3 REPLIES

@Milan Sladky

Are you sure that hostname resolution is correct on your end? Check e.g. `hostname -f` or the "/etc/hosts" file.

It looks suspect because the error indicates an IP address: "expected client Kerberos principal is dn/172.30.52.137@HADOOPXXX.COM"

Whereas your keytab looks valid, with the hostname "dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM"

Hostname resolution works fine. However, the issue very likely lies in reverse lookups of IP addresses.

