
javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND

Explorer

Hi,

I am trying to implement Kerberos security on Cloudera CDH 5.3. The Kerberos implementation wizard generates principals for all the services.

The principals generated are as follows -

 


kadmin.local: listprincs
HTTP/01hw310845.India.ABC.com@INDIA.ABC.COM
K/M@INDIA.ABC.COM
cloudera-scm@INDIA.ABC.COM
hdfs/01hw310845.India.ABC.com@INDIA.ABC.COM
hive/01hw310845.India.ABC.com@INDIA.ABC.COM
hue/01hw310845.India.ABC.com@INDIA.ABC.COM
impala/01hw310845.India.ABC.com@INDIA.ABC.COM
kadmin/01hw310845.india.ABC.com@INDIA.ABC.COM
kadmin/admin@INDIA.ABC.COM
kadmin/changepw@INDIA.ABC.COM
krbtgt/INDIA.ABC.COM@INDIA.ABC.COM
mapred/01hw310845.India.ABC.com@INDIA.ABC.COM
yarn/01hw310845.India.ABC.com@INDIA.ABC.COM
zookeeper/01hw310845.India.ABC.com@INDIA.ABC.COM

 

But when I try to start all the services in the cluster, it gives the following error -

 

Failed to start namenode.
java.io.IOException: Login failure for hdfs/01hw310845.india.abc.com@INDIA.ABC.COM from keytab hdfs.keytab
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:947)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:242)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:560)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:579)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:763)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:938)
... 7 more
Caused by: KrbException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:82)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:319)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
... 20 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:143)
at sun.security.krb5.internal.ASRep.init(ASRep.java:65)
at sun.security.krb5.internal.ASRep.<init>(ASRep.java:60)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
... 23 more

 

The problem seems to be that the principal name Cloudera uses to authenticate has the FQDN in lowercase, while the generated principals have the FQDN in capital (mixed) case.

How can I ensure that Cloudera generates the principals (host/domain name) from the /etc/hosts file without converting it to lowercase?
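One way to confirm the mismatch before changing anything (run klist against whichever hdfs.keytab the NameNode is actually using - the path depends on where Cloudera Manager deployed it) is to compare what the host reports with what is in the keytab:

# hostname -f
# klist -kt hdfs.keytab

If hostname -f comes back in mixed case while the keytab and KDC principals use a different case (or vice versa), the login fails with CLIENT_NOT_FOUND.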


Master Guru

Hadoop in general expects that your hostnames and domain names are all lowercase.  When Kerberos is introduced, this becomes important.  While it is possible to override this behavior (of expecting lowercase) by doing manual configuration, I recommend ensuring via /etc/hosts or DNS that your host and domain are lower case.  After that is corrected, regenerate credentials and that should correct the problem.
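For example (the IP below is a placeholder), /etc/hosts on each node would carry the fully qualified name in lowercase, and the host should report it the same way:

192.168.1.10   01hw310845.india.abc.com   01hw310845

# hostname -f
01hw310845.india.abc.com

Once that is in place, regenerate the credentials from Cloudera Manager (Administration -> Security -> Kerberos Credentials) so the principals are recreated with the lowercase names.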

 

Regards,

 

Ben

New Contributor

We are seeing a similar issue. Everything was working fine in our test setup, but now we have started seeing this error.

Notice the "Client not found" - it is related to jaas.conf. The file has a Server section by default and it used to work, but now, even though the default Server section is still there, the ZooKeeper, HDFS, and HBase services look for a Client section when restarted. Since this configuration is generated dynamically we cannot fix it manually in place; when we ran the command manually after a fix it worked for ZooKeeper, while there is no jaas.conf in the HDFS folder.

What could have changed so that all applications start looking for the "Client" section instead of the "Server" section in jaas.conf?
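For reference, a server-side jaas.conf for ZooKeeper normally carries a Server section roughly like the sketch below (the keytab path and principal are illustrative - the real file is generated under the Cloudera Manager process directory):

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/run/cloudera-scm-agent/process/NNN-zookeeper-server/zookeeper.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/host.example.com@EXAMPLE.COM";
};

If the services are suddenly using a Client section instead, comparing the generated jaas.conf in the current process directory against an older process directory is usually the quickest way to see what changed.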

 

 

ERROR org.apache.zookeeper.server.ZooKeeperServerMain: Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the  ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
	at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
	at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:116)
	at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:91)
	at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:53)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:121)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)

 

 

New Contributor
I found the issue with our KDC server setup. We use a master and a slave KDC server. Hadoop was using the slave KDC for authentication, and updates were made on the master but not replicated properly. With kinit the key was working, but once we reviewed the KDC krb5 log we found the same message there - the user was not present. Once the replication issue was fixed, Hadoop started working again.
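In case it helps anyone hitting the same thing: a quick check (the principal below is just an example) is to ask each KDC - master and slave - directly whether it has the principal, and to watch the KDC log while the service starts. The log location may differ depending on your kdc.conf:

# kadmin.local -q "getprinc hdfs/01hw310845.india.abc.com@INDIA.ABC.COM"
# tail -f /var/log/krb5kdc.log

If the master lists the principal but the slave does not, propagation between the KDCs is the problem.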

Community Manager

There is some great discussion here.  @singhuda have you resolved the original issue?


Cy Jervis, Manager, Community Program

Explorer

I am also facing a similar issue. Without Kerberos all the services run properly, but when I try to kerberize the cluster with external AD authentication, the CM wizard takes me through correctly up to stopping the cluster; when the cluster restarts I hit the issue in the first step of the HDFS dependency chain - ZooKeeper.

Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the  ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Client not found in Kerberos database (6)
	at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:135)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)

We have generated credentials with a common/single user for all of the cluster services.

Any idea what the issue is?

Master Guru

@krb,

 

Make sure your /etc/krb5.conf is configured correctly so that ZooKeeper is sending its AS_REQ to the right KDC.  If you have just changed from one KDC to another, /etc/krb5.conf also needs to be updated.  If you are not managing it with Cloudera Manager, it needs to be changed manually.
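As a rough sketch (the realm is taken from the original post and the KDC hostnames are placeholders - substitute your own), the relevant sections of /etc/krb5.conf should point at the KDC you expect the services to use:

[libdefaults]
  default_realm = INDIA.ABC.COM

[realms]
  INDIA.ABC.COM = {
    kdc = kdc01.india.abc.com
    admin_server = kdc01.india.abc.com
  }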

 

Either way, you could run a tcpdump on port 88 and check whether the outgoing requests are going to the right KDC, which will also confirm that /etc/krb5.conf is configured properly for your new KDC.
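For example, something like this on the ZooKeeper host while it starts up (interface and options may vary) will show where the Kerberos traffic is actually going:

# tcpdump -i any -nn port 88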

 

 

Explorer

@bgooley

Thanks!

I cleaned up all krb5.* files on all hosts and re-enabled Kerberos through CM, letting it regenerate all missing credentials and manage krb5.conf. This time I gave CM a free hand to create individual service principals for the various services (hdfs, hive, hue, etc.) instead of the existing single service principal (a system user).

This time ZooKeeper started successfully but HDFS did not, although the HttpFS role within HDFS did start. I can't see any errors, but I can see WARNINGS in the log file:

 

 

CredentialManager kt_renewer WARNING Couldn't kinit as 'HTTP/xxx.xx.com' using
/run/cloudera-scm-agent/process/1330-hdfs-HTTPFS/httpfs.keytab --- kinit:
Client 'HTTP/xxx.xx.xxx.xx@xx.xx.xx' not found in Kerberos database while getting
initial credentials

 

 

Master Guru

@krb,

 

What you provided appears to be an agent log message indicating that an attempt to kinit with the HTTP principal on the host where the HTTPFS role runs was not successful.  Check the host where the HTTPFS role runs and make sure the krb5.conf file is correct.  This should not impact HDFS as a whole, since HTTPFS is really just a client of HDFS.

 

Cloudera Manager should merge the HTTP principal automatically, so please run the following to make sure the keytab has the right keys:

 

# klist -kte /run/cloudera-scm-agent/process/1330-hdfs-HTTPFS/httpfs.keytab
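If the HTTP key shows up in that listing, a manual kinit against the same keytab (replace the principal with exactly what klist printed) should tell you whether the KDC recognises it:

# kinit -kt /run/cloudera-scm-agent/process/1330-hdfs-HTTPFS/httpfs.keytab HTTP/<fqdn>@<REALM>

If that fails with the same "Client not found" message, the principal in the keytab does not exist in the KDC and the credential needs to be regenerated.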

 

 

Explorer

I have the same issue.

 

The hosts in my cluster have /etc/hosts entries something like this:

 

192.168.X.X  Master

192.168.X.X Slave1

192.168.X.X Slave2

192.168.X.X Slave3 

 

And the generated principal names were like:

 

hdfs/Master@CLOUDERA

spark/Slave1@CLOUDERA

 

And when a DataNode started, it was looking for hdfs/master@CLOUDERA instead of hdfs/Master@CLOUDERA.

 

Resolution steps:

 

1) Change HOSTNAME in /etc/sysconfig/network:

 

HOSTNAME=master on the Master node, HOSTNAME=slave1 on the Slave1 node

2) Make sure /etc/hosts on all hosts in the cluster uses the same lowercase hostnames:

 

192.168.X.X master

192.168.X.X slave

 

3) Reboot all hosts

 

4) Check the hostname on each host

 

5) In Cloudera Manager -> for each host -> regenerate keytabs

 

6) Go to Administration -> Security -> Kerberos Credentials and check that the principal names use the correct hosts, like:

 

hdfs/master@CLOUDERA

hdfs/slave1@CLOUDERA
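After the regeneration, a quick sanity check (the grep pattern is just an example) is to compare the hostname each node reports with the principals the KDC now holds:

On each cluster host:
# hostname -f

On the KDC host:
# kadmin.local -q "listprincs" | grep -i hdfs

The service principals should now show the lowercase hostnames, e.g. hdfs/master@CLOUDERA.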