
HDFS is not accessible for a user after Kerberos implementation


Hi,

 

After enabling Kerberos, we are unable to access HDFS as a particular user, 'edtuser', from PuTTY, although we can still access HDFS as the 'root' and 'hdfs' users. When we try to access HDFS as 'edtuser', the following error appears:

 

[edtuser@nladfmrvu11 ~]$
[edtuser@nladfmrvu11 ~]$ hadoop fs -ls /
19/08/22 09:48:06 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:06 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:06 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 1 failover attempts. Trying to failover after sleeping for 610ms.
19/08/22 09:48:06 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:06 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 2 failover attempts. Trying to failover after sleeping for 1680ms.
19/08/22 09:48:08 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:08 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 3 failover attempts. Trying to failover after sleeping for 5206ms.
19/08/22 09:48:13 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:13 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 4 failover attempts. Trying to failover after sleeping for 10481ms.
19/08/22 09:48:24 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:24 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 5 failover attempts. Trying to failover after sleeping for 8347ms.
19/08/22 09:48:32 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:32 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 6 failover attempts. Trying to failover after sleeping for 13318ms.
19/08/22 09:48:45 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:48:45 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 7 failover attempts. Trying to failover after sleeping for 14341ms.
19/08/22 09:49:00 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:49:00 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 8 failover attempts. Trying to failover after sleeping for 17707ms.
19/08/22 09:49:17 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:49:17 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 9 failover attempts. Trying to failover after sleeping for 12086ms.
19/08/22 09:49:29 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:49:29 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 10 failover attempts. Trying to failover after sleeping for 11451ms.
19/08/22 09:49:41 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:49:41 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 11 failover attempts. Trying to failover after sleeping for 22482ms.
19/08/22 09:50:03 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:50:03 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 12 failover attempts. Trying to failover after sleeping for 22157ms.
19/08/22 09:50:25 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:50:25 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu12.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu12.mtn.co.za/10.244.8.53:8020 after 13 failover attempts. Trying to failover after sleeping for 20978ms.
19/08/22 09:50:46 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/22 09:50:46 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort nladfmrvu11.mtn.co.za:8020 , LocalHost:localPort nladfmrvu11.mtn.co.za/10.244.8.52:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over nladfmrvu11.mtn.co.za/10.244.8.52:8020 after 14 failover attempts. Trying to failover after sleeping for 16712ms.
^Z
[1]+ Stopped hadoop fs -ls /
[edtuser@nladfmrvu11 ~]$


Re: HDFS is not accessible for a user after Kerberos implementation

Explorer

Log in as the user and run kinit to re-establish a valid Kerberos ticket.
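A minimal sketch of that check-then-kinit sequence, assuming the principal edtuser@EXAMPLE.COM (substitute your own user and realm):

```shell
# klist -s is silent and only sets the exit status: zero when a
# usable ticket exists. Principal and realm below are assumptions.
PRINC="edtuser@EXAMPLE.COM"
if klist -s 2>/dev/null; then
    echo "ticket already valid"
else
    echo "no ticket; re-establish with: kinit $PRINC"
fi
```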

Re: HDFS is not accessible for a user after Kerberos implementation

Super Mentor

@pritam_konar 

Please make sure that you have a valid Kerberos ticket before running an HDFS command.
You can get a valid Kerberos ticket as follows:

1). Get the principal name from the keytab:

Example:

# klist -kte /etc/security/keytabs/hdfs.headless.keytab 
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des-cbc-md5) 
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes256-cts-hmac-sha1-96) 
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des3-cbc-sha1) 
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (arcfour-hmac) 
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes128-cts-hmac-sha1-96)

 

2). Get a valid Kerberos ticket as follows. Note that the principal name may be different in your cluster, so change it to match the output you received from the command above.

# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-ker1latest@EXAMPLE.COM

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-ker1latest@EXAMPLE.COM

Valid starting Expires Service principal
08/22/2019 22:47:43 08/23/2019 22:47:43 krbtgt/EXAMPLE.COM@EXAMPLE.COM

3). Now run the same HDFS command again; this time it should succeed.

# hadoop fs -ls /


*NOTE:* The example above uses "/etc/security/keytabs/hdfs.headless.keytab". If you have your own valid keytab that allows you to interact with HDFS, use that one instead; for testing, the hdfs.headless.keytab is fine.
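Steps 1) and 2) can also be glued together so the principal is read straight from the keytab instead of typed by hand. A sketch, assuming the MIT `klist -kt` output layout shown above:

```shell
KEYTAB=/etc/security/keytabs/hdfs.headless.keytab
# The first three lines of `klist -kt` output are headers; the last
# field of the first entry line is the principal name.
PRINC=$(klist -kt "$KEYTAB" 2>/dev/null | awk 'NR>3 {print $NF; exit}')
[ -n "$PRINC" ] && kinit -kt "$KEYTAB" "$PRINC"
```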

Re: HDFS is not accessible for a user after Kerberos implementation

Mentor

@pritam_konar 

 

In reality, a user shouldn't be able to kinit with the hdfs keytab. Instead, a keytab should be created for each specific user and deleted when that user is disabled on the cluster. This user setup typically happens on the edge node, where the Hadoop client software is installed, and is the recommended way of giving users access to the cluster.

Below is a demonstration of what happens when the user konar attempts to access services in a Kerberized cluster.

# su - konar

[konar@simba ~]$ id
uid=1024(konar) gid=1024(konar) groups=1024(konar)

Now try to list the directories in HDFS

[konar@simba ~]$ hdfs dfs -ls /

Error 
19/08/24 23:59:25 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "simba.kenya.ke/192.168.0.87"; destination host is: "simba.kenya.ke":8020;

 

Below is the desired output when the user konar attempts to use the hdfs headless keytab:

[konar@simba ~]$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-jair@KENYA.KE
kinit: Permission denied while getting initial credentials


To enable a user to access the cluster, perform the following steps on the Kerberos server as root (the Kerberos admin).

Assumption: the realm is KENYA.KE, the KDC host is simba, and you have root access on the KDC.

Create the principal for user konar:

[root@simba ~]# kadmin.local
Authenticating as principal root/admin@KENYA.KE with password.
kadmin.local: addprinc konar@KENYA.KE
WARNING: no policy specified for konar@KENYA.KE; defaulting to no policy
Enter password for principal "konar@KENYA.KE":
Re-enter password for principal "konar@KENYA.KE":
Principal "konar@KENYA.KE" created.
kadmin.local: q

Validate that the principal was created, using the listprincs subcommand and restricting the output to konar:

[root@simba ~]# kadmin.local
Authenticating as principal root/admin@KENYA.KE with password.
kadmin.local: listprincs *konar
konar@KENYA.KE

Type q (quit) to exit the kadmin utility.
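The two interactive kadmin.local sessions above can also be run non-interactively with -q, which executes a single query and exits. A sketch (names and realm as in the assumption above; the commands are echoed here rather than executed):

```shell
PRINCUSER=konar
REALM=KENYA.KE
# One-shot forms of the interactive sessions above (-q runs a single
# kadmin query and exits). Echoed as a sketch rather than executed.
echo kadmin.local -q "\"addprinc ${PRINCUSER}@${REALM}\""
echo kadmin.local -q "\"listprincs *${PRINCUSER}\""
```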

 

Generate the keytab

Generate a keytab for user konar using ktutil. It's good practice to change to /tmp (or whatever directory you choose) first, so you know where the generated keytab lands. Your encryption algorithm could be different, but the following should work:

[root@simba tmp]# ktutil
ktutil: addent -password -p konar@KENYA.KE -k 1 -e RC4-HMAC
Password for konar@KENYA.KE:
ktutil: wkt konar.keytab
ktutil: q
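An alternative to ktutil is exporting the keytab straight from kadmin.local. Caveat: ktadd randomizes the principal's keys by default, which invalidates the password set earlier, so stick with ktutil if the user must keep that password. A guarded sketch:

```shell
# Export the keytab from kadmin.local instead of ktutil (KDC host only).
# -k names the output keytab file; keytab path here is illustrative.
if command -v kadmin.local >/dev/null 2>&1; then
    kadmin.local -q "ktadd -k /tmp/konar.keytab konar@KENYA.KE"
else
    echo "kadmin.local not on PATH; run this on the KDC host"
fi
```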

 

Validate the keytab creation

Check that the keytab was generated in the current directory, and note the restrictive file permissions:

[root@simba tmp]# ls -lrt
-rw------- 1 root root 58 Aug 25 18:22 konar.keytab

 

As root, copy the generated keytab to the home directory of user konar (typically on the edge node):

[root@simba tmp]# cp konar.keytab /home/konar/

 

Change to konar's home directory and validate that the copy was successful:
[root@simba tmp]# cd /home/konar/
[root@simba konar]# ll
total 4
-rw------- 1 root root 58 Aug 25 18:28 konar.keytab

 

Change file ownership

Change the ownership of konar.keytab so that user konar has the appropriate access:

[root@simba konar]# chown konar:konar konar.keytab
[root@simba konar]# ll
total 4
-rw------- 1 konar konar 58 Aug 25 18:28 konar.keytab

 

Switch to user konar and validate that the user still can't access HDFS:

[konar@simba ~]$ hdfs dfs -ls /

Output
19/08/25 18:36:44 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "simba.kenya.ke/192.168.0.87"; destination host is: "simba.kenya.ke":8020;

 

The Kerberos klist output also confirms this:

[konar@simba ~]$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_1024)

 

Now, as user konar, try to kinit with the correct principal. The first step is to identify that principal:

[konar@simba ~]$ klist -kt konar.keytab
Keytab name: FILE:konar.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 08/25/2019 18:22:34 konar@KENYA.KE

The above shows that konar's keytab is valid, with the principal shown in the output.

 

Now user konar can obtain a valid ticket using the snippet below, combining the keytab and the principal:

 

[konar@simba ~]$ kinit -kt konar.keytab konar@KENYA.KE

The above should not throw any error.

 

Now validate the user has a valid ticket

[konar@simba ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1024
Default principal: konar@KENYA.KE

Valid starting Expires Service principal
08/25/2019 18:53:40 08/26/2019 18:53:40 krbtgt/KENYA.KE@KENYA.KE

Bravo, you have a valid ticket and hence access to the cluster. Let's validate that: the HDFS directory listing below should now succeed.

 

[konar@simba ~]$ hdfs dfs -ls /
Found 10 items
drwxrwxrwx - yarn hadoop 0 2018-12-17 21:53 /app-logs
drwxr-xr-x - hdfs hdfs 0 2018-09-24 00:22 /apps
drwxr-xr-x - yarn hadoop 0 2018-09-24 00:12 /ats
drwxr-xr-x - hdfs hdfs 0 2018-09-24 00:12 /hdp
drwxr-xr-x - mapred hdfs 0 2018-09-24 00:12 /mapred
drwxrwxrwx - mapred hadoop 0 2018-09-24 00:12 /mr-history
drwxr-xr-x - hdfs hdfs 0 2018-12-17 19:16 /ranger
drwxrwxrwx - spark hadoop 0 2019-08-25 18:59 /spark2-history
drwxrwxrwx - hdfs hdfs 0 2018-10-11 11:16 /tmp
drwxr-xr-x - hdfs hdfs 0 2018-09-24 00:23 /user

 

User konar can now list directories and execute jobs on the cluster. As reiterated, in the recommended architecture the konar user account lives on the edge node.
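Since the ticket obtained above expires after 24 hours, long-lived access from the edge node is usually kept alive by re-running kinit on a schedule. A hypothetical crontab entry for user konar (keytab path and principal as used in this thread):

```
# Re-obtain the ticket every 12 hours, well inside the 24h lifetime
0 */12 * * * /usr/bin/kinit -kt /home/konar/konar.keytab konar@KENYA.KE
```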

 

 

 
