Support Questions


Initial password for Kerberos principals

Explorer

I used Cloudera Manager to enable Kerberos. I verified that all the principals for all hosts were created in my Kerberos database and that all the keytabs were distributed to all the nodes. But when I try to authenticate using any of the principals, such as hdfs, hbase, etc., I get this:

 

$ kinit hdfs/hostname

Password for hdfs/hostname@REALM:

kinit: Password incorrect while getting initial credentials

 

I don't remember the CM wizard asking me for a password for all the principals. (It just asked for the cloudera-scm/admin principal in Kerberos so that it could create new principals.)

 

Does anyone know what the initial password is for the newly created principals? Or do I have to go and change the passwords for all of them and redistribute the keytabs to all nodes?

 

Thank you.

1 ACCEPTED SOLUTION

View solution in original post

8 REPLIES

Master Guru

Hi @ramin,

 

When your Hadoop service credentials are created, the password is randomized, so you are not supposed to know what it is.

If you would like, you can create a user in your KDC with the principal "hdfs@REALM" so that you can set the password yourself.
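
For example, a minimal sketch run on the KDC host itself (assuming an MIT KDC, which matches the kinit output above; the command prompts you for the new principal's password):

kadmin.local -q "addprinc hdfs@REALM"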

 

Alternatively, you can kinit via an hdfs keytab like this (assuming you are on a NameNode host):

 

kinit -kt /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' | grep NAMENODE | tail -1`/hdfs.keytab hdfs/hostname@REALM
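
(The backtick expression just picks the most recent NAMENODE process directory under /var/run/cloudera-scm-agent/process, which is where the Cloudera Manager agent keeps the current hdfs.keytab.)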

 

The keytab contains the password so you do not need to know it.  That is why you need to be very careful to protect access to any keytabs you create.
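
If you want to confirm which principals a keytab actually contains before using it, klist can read it directly (a sketch; substitute the actual NAMENODE process directory from the command above):

klist -kt /var/run/cloudera-scm-agent/process/<NAMENODE-process-dir>/hdfs.keytab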

 

All that said, it is advisable to create users who are not "hdfs" and then either make them superusers or give them the permissions they need. That way, the actions they take can be reviewed more readily via audit.
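
For example, a minimal sketch (the user name "alice" is hypothetical; run the hdfs dfs commands while you still hold the hdfs ticket obtained with the keytab above):

kadmin.local -q "addprinc alice@REALM"
hdfs dfs -mkdir /user/alice
hdfs dfs -chown alice:alice /user/alice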

 

 

Explorer

@bgooley, thank you for responding. I did what you suggested, but I am still unable to authenticate against HDFS. See below.

 

[root@datanode01 process]# kinit -kt 226-hdfs-DATANODE/hdfs.keytab hdfs/datanode01.domain.com@REALM

[root@datanode01 process]# klist

Ticket cache: KEYRING:persistent:0:krb_ccache_KK2INr6

Default principal: hdfs/datanode01.domain.com@REALM

 

Valid starting       Expires              Service principal

03/16/2018 22:55:09  03/17/2018 22:55:09  krbtgt/REALM@REALM

[root@datanode01 process]# hdfs dfs -ls /

18/03/16 22:55:22 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

18/03/16 22:55:22 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

18/03/16 22:55:22 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "datanode01.domain.com/10.0.0.5"; destination host is: "namenode01.domain.com":8020; 

Your help is greatly appreciated.

 

Master Guru

@ramin,

 

I'm happy to help. The problem now is that your /etc/krb5.conf is configured with a credential cache type that Java is not able to access.

If you look at your klist output you see:

 

Ticket cache: KEYRING:persistent:0:krb_ccache_KK2INr6

 

By default, recent Linux OSes define the "keyring" type of credential cache in /etc/krb5.conf.

While MIT Kerberos's kinit command recognizes that cache type, Java does not. So when you run hdfs dfs -ls /, Java cannot find any Ticket Granting Ticket, and you get the error.

 

To solve this, edit your /etc/krb5.conf and comment out the line containing "default_ccache_name" by adding a pound sign in front of it. For example:

#default_ccache_name = KEYRING:persistent:%{uid}

 

 

This will allow the kinit command to store the credential cache in the default /tmp location using the "FILE" type of cache. Java can access this because it uses the same default type.
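
To verify after editing krb5.conf (a sketch using the same keytab path and principal from your earlier commands, run from the /var/run/cloudera-scm-agent/process directory; klist should now report a cache such as FILE:/tmp/krb5cc_0 instead of the KEYRING one):

kinit -kt 226-hdfs-DATANODE/hdfs.keytab hdfs/datanode01.domain.com@REALM
klist
hdfs dfs -ls /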

 

 

Explorer

Thank you! That was indeed the problem. You rock!

New Contributor

Hello Guys,

 

I am facing the same problem too. I am able to list the files from HDFS:

 

[s_hadoop@inggnvcdera1 ~]$ hadoop fs -ls /
Found 5 items
drwxr-xr-x   - hdfs  supergroup          0 2018-08-20 13:40 /externaltableslocation
drwx------   - hbase hbase               0 2018-09-05 14:54 /hbase
drwxrwxr-x   - solr  solr                0 2018-06-21 22:06 /solr
drwxrwxrwt   - hdfs  supergroup          0 2018-08-09 16:44 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2018-08-20 16:45 /user

 

but when I try to delete the file, I am getting the error below:

 

[root@inggnvcdera1 etc]# sudo -u hdfs hadoop fs -rm /externaltableslocation/servicenow_proactivesla_extb/part-m-00000
18/09/07 12:16:56 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/09/07 12:16:56 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/09/07 12:16:56 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
rm: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "inggnvcdera1.******.com/10.202.42.106"; destination host is: "inggnvcdera1.******.com":8020;

 

Kindly guide me on where I am making a mistake.

I am not able to find a default_ccache_name line in my krb5.conf file that I can comment out.

[root@inggnvcdera1 etc]# cat /etc/krb5.conf
[libdefaults]
default_realm = GLOBAL.******.NET
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
GLOBAL.******.NET = {
kdc = inggnvcdera1.******.com
admin_server = inggnvcdera1.******.com
}
[domain_realm]

 

Any help will be really appreciated.

 

 

 

New Contributor

The issue has been resolved with this fix. Thanks.


I had been debugging this issue and was not able to find the solution. Thanks so much for your solution here, @bgooley. Really appreciate it!!

 

By commenting out the default_ccache_name line in the krb5.conf file, I was able to run the secure cluster commands.

Thank you so much!!

 

Master Guru

@cloud123user, Thanks for the kind feedback.  I am glad that the solution worked for you!