
Problem with Kerberos & user hdfs

Guru

Hi,

 

After enabling Kerberos security on the cluster (related guideline here), I got stuck at step 15 (Create the hdfs Super User Principal).

The upshot is that I am not able to execute a Hadoop command as user hdfs from the command line, such as "sudo -u hdfs hadoop dfs -ls /user". After reading some docs and sites, I verified that I have installed the Java security JARs and that the krbtgt principal does not have the "requires_preauth" attribute.
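For reference, a rough sketch of that krbtgt check as it can be done on the KDC host (realm name taken from this thread):

# run on the KDC; the "Attributes:" line of the output should NOT list REQUIRES_PRE_AUTH
kadmin.local -q "getprinc krbtgt/HADOOP-PG@HADOOP-PG"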

 

Problem:

=======

Execution of

sudo -u hdfs hadoop dfs -ls /user

fails with error:

""

root@hadoop-pg-2:~# sudo -u hdfs hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/02/25 14:32:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:32:10 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:32:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-2.cluster/10.147.210.2"; destination host is: "hadoop-pg-2.cluster":8020;

""

 

Previous steps

============

1. Create the hdfs principal via kadmin: addprinc hdfs@HADOOP-PG

2. Obtain a TGT for user hdfs: kinit hdfs@HADOOP-PG

3. Check: klist -f

root@hadoop-pg-2:~# klist -f
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs@HADOOP-PG

Valid starting     Expires            Service principal
02/25/14 14:30:32  02/26/14 14:30:29  krbtgt/HADOOP-PG@HADOOP-PG
	renew until 03/04/14 14:30:29, Flags: FPRIA

 

=> I therefore assume authentication for user hdfs works fine, since the provided password was accepted both when creating the principal and when obtaining the TGT, and a TGT was created successfully.

 

4. Execute the Hadoop command mentioned above; it results in the error shown above 😞

 

5. Try to renew the ticket: kinit -R. It executes successfully.

6. Repeat step 4 => same error.

 

7. Enable Kerberos debug output and run step 4 again. Log:

""

root@hadoop-pg-2:~$ su - hdfs
hdfs@hadoop-pg-2:~$ kinit
Password for hdfs@HADOOP-PG: 

hdfs@hadoop-pg-2:~$ klist
Ticket cache: FILE:/tmp/krb5cc_996
Default principal: hdfs@HADOOP-PG

Valid starting     Expires            Service principal
02/25/14 14:55:26  02/26/14 14:55:26  krbtgt/HADOOP-PG@HADOOP-PG
	renew until 03/04/14 14:55:26

hdfs@hadoop-pg-2:~$ hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Config name: /etc/krb5.conf
>>>KinitOptions cache name is /tmp/krb5cc_996
>>>DEBUG <CCacheInputStream>  client principal is hdfs@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Tue Feb 25 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> start time: Tue Feb 25 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> end time: Wed Feb 26 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> renew_till time: Tue Mar 04 14:55:26 CET 2014
>>> CCacheInputStream: readFlags()  FORWARDABLE; PROXIABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream>  client principal is hdfs@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> start time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> end time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> renew_till time: Thu Jan 01 01:00:00 CET 1970
>>> CCacheInputStream: readFlags() 
>>> unsupported key type found the default TGT: 18
14/02/25 14:55:40 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:55:40 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:55:40 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-2.cluster/10.147.210.2"; destination host is: "hadoop-pg-2.cluster":8020;

 

""

 

The message "unsupported key type found the default TGT: 18" makes me think the Java strong-crypto (JCE) policy files are missing, but I copied the JARs US_export_policy.jar and local_policy.jar into the folder /usr/lib/jvm/java-7-oracle/jre/lib/security =>

hdfs@hadoop-pg-2:/usr/lib/jvm$ ls -al /usr/lib/jvm/java-7-oracle/jre/lib/security/
total 140
drwxr-xr-x  2 root root  4096 Jan 31 10:30 .
drwxr-xr-x 16 root root  4096 Jan 31 10:30 ..
-rw-r--r--  1 root root  2770 Jan 31 10:30 blacklist
-rw-r--r--  1 root root 82586 Jan 31 10:30 cacerts
-rw-r--r--  1 root root   158 Jan 31 10:30 javafx.policy
-rw-r--r--  1 root root  2593 Jan 31 10:30 java.policy
-rw-r--r--  1 root root 17838 Jan 31 10:30 java.security
-rw-r--r--  1 root root    98 Jan 31 10:30 javaws.policy
-rw-r--r--  1 root root  2500 Feb 21 15:41 local_policy.jar
-rw-r--r--  1 root root     0 Jan 31 10:30 trusted.libraries
-rw-r--r--  1 root root  2487 Feb 21 15:41 US_export_policy.jar
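As an extra sanity check (assuming this Java 7 install is the JVM the hadoop script actually uses), the JVM can be asked directly whether the unlimited-strength policy is active; jrunscript ships in the JDK's bin directory, and the path below is an assumption for this box:

# prints 2147483647 with the unlimited JCE policy files installed, 128 without them
/usr/lib/jvm/java-7-oracle/bin/jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'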

 

I have no idea what to check next; any help is appreciated 🙂 (I want to avoid dropping AES-256 from the encryption types supported by Kerberos and thereby having to recreate all principals, or even having to create a new Kerberos DB...)
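Before touching the KDC configuration, a rough way to see which encryption types are actually in play (principal and realm names taken from this thread; the kadmin query runs on the KDC host):

# enctype of the cached TGT; key type 18 corresponds to aes256-cts-hmac-sha1-96
klist -e
# key enctypes stored for the hdfs principal
kadmin.local -q "getprinc hdfs@HADOOP-PG"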

 

 

1 ACCEPTED SOLUTION

Guru

STUPID ME 😉

 

Re-checking the installation of the JCE files put me on the right track.

Executing the hadoop command in the shell was using the "old" Java 6, while I had installed the JCE files only for Java 7, since I had configured JAVA_HOME in CM to point at Java 7.

 

A simple "export JAVA_HOME=/usr/lib/jvm/java-7-oracle/jre" before executing "hadoop dfs ..." on the shell solved this issue.


4 REPLIES

Guru

I tried a different approach, which sadly results in the same problem/error.

I tried to use the hdfs user principal created by Cloudera Manager to submit an hdfs command on the shell, but I still get the "unsupported key type found the default TGT: 18" message.

 

Log:

===

 

#>su - hdfs

#>export HADOOP_OPTS="-Dsun.security.krb5.debug=true"

#>kinit -k -t /var/run/cloudera-scm-agent/process/1947-hdfs-DATANODE/hdfs.keytab hdfs/hadoop-pg-7.cluster

#>kinit -R

#>hadoop dfs -ls /user

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Config name: /etc/krb5.conf
>>>KinitOptions cache name is /tmp/krb5cc_998
>>>DEBUG <CCacheInputStream> client principal is hdfs/hadoop-pg-7.cluster@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Wed Feb 26 11:07:49 CET 2014
>>>DEBUG <CCacheInputStream> start time: Wed Feb 26 11:07:55 CET 2014
>>>DEBUG <CCacheInputStream> end time: Thu Feb 27 11:07:55 CET 2014
>>>DEBUG <CCacheInputStream> renew_till time: Wed Mar 05 11:07:49 CET 2014
>>> CCacheInputStream: readFlags() FORWARDABLE; PROXIABLE; RENEWABLE; INITIAL;
>>> unsupported key type found the default TGT: 18
14/02/26 11:08:07 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/26 11:08:07 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/26 11:08:07 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-7.cluster/10.147.210.7"; destination host is: "hadoop-pg-2.cluster":8020;

 

~~~~~~~

 

#>klist -ef

Ticket cache: FILE:/tmp/krb5cc_998
Default principal: hdfs/hadoop-pg-7.cluster@HADOOP-PG

Valid starting     Expires            Service principal
02/26/14 11:08:21  02/27/14 11:08:21  krbtgt/HADOOP-PG@HADOOP-PG
	renew until 03/05/14 11:07:49, Flags: FPRIT
Etype (skey, tkt): AES-256 CTS mode with 96-bit SHA-1 HMAC, AES-256 CTS mode with 96-bit SHA-1 HMAC
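For completeness, the keys inside the keytab can be cross-checked the same way (keytab path as in the kinit above); klist with -k/-t/-e lists each key with its timestamp and enctype:

klist -k -t -e /var/run/cloudera-scm-agent/process/1947-hdfs-DATANODE/hdfs.keytab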

 

Now what?


Explorer

Hi, I am using /usr/java/jdk1.8.0_202 and I am facing the issue even if I submit the
export JAVA_HOME=/usr/java/jdk1.8.0_202 command. I am stuck in the middle of a job with the following error. Please help me sort out the issue.

Thanks in advance
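For reference, a rough check of which JVM the hadoop wrapper actually picks up on my side (paths are examples, adjust as needed):

echo "$JAVA_HOME"
"$JAVA_HOME/bin/java" -version
# if JAVA_HOME is exported unconditionally here, it can override the value set in the shell
grep -i java_home /etc/hadoop/conf/hadoop-env.sh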

New Contributor

Hi @Jibinjks,

Was this issue resolved? If yes, can you update the solution?

I am stuck on a similar problem. The export did not work for me either.

 

Br

Sandeep