Support Questions


After enabling Kerberos (local MIT), unable to access HDFS.


Please find the logs below.

$ HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls /
19/10/14 08:59:25 DEBUG util.Shell: setsid exited with exit code 0
19/10/14 08:59:25 DEBUG conf.Configuration: parsing URL jar:file:/usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar!/core-default.xml
19/10/14 08:59:25 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@66480dd7
19/10/14 08:59:25 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/3.0.1.0-187/0/core-site.xml
19/10/14 08:59:25 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@1877ab81
19/10/14 08:59:25 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
19/10/14 08:59:25 DEBUG security.Groups: Creating new Groups object
19/10/14 08:59:25 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
19/10/14 08:59:25 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
19/10/14 08:59:25 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
19/10/14 08:59:25 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
19/10/14 08:59:25 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
19/10/14 08:59:25 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
19/10/14 08:59:25 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
19/10/14 08:59:25 DEBUG security.UserGroupInformation: hadoop login
19/10/14 08:59:25 DEBUG security.UserGroupInformation: hadoop login commit
19/10/14 08:59:25 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hdfs
19/10/14 08:59:25 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hdfs" with name hdfs
19/10/14 08:59:25 DEBUG security.UserGroupInformation: User entry: "hdfs"
19/10/14 08:59:25 DEBUG security.UserGroupInformation: UGI loginUser:hdfs (auth:SIMPLE)
19/10/14 08:59:25 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
19/10/14 08:59:25 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
19/10/14 08:59:25 DEBUG fs.FileSystem: Loading filesystems
19/10/14 08:59:25 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG gcs.GoogleHadoopFileSystemBase: GHFS version: 1.9.0.3.0.1.0-187
19/10/14 08:59:25 DEBUG fs.FileSystem: gs:// = class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/gcs-connector-1.9.0.3.0.1.0-187-shaded.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-aws-3.1.1.3.0.1.0-187.jar
19/10/14 08:59:25 DEBUG fs.FileSystem: Looking for FS supporting hdfs
19/10/14 08:59:25 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
19/10/14 08:59:26 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
19/10/14 08:59:26 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = true
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
19/10/14 08:59:26 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
19/10/14 08:59:26 DEBUG hdfs.HAUtilClient: No HA service delegation token found for logical URI hdfs://datalakeqa
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = true
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
19/10/14 08:59:26 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
19/10/14 08:59:26 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
19/10/14 08:59:26 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@6babf3bf
19/10/14 08:59:26 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3d6f0054
19/10/14 08:59:26 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@6129020f: starting with interruptCheckPeriodMs = 60000
19/10/14 08:59:26 DEBUG shortcircuit.DomainSocketFactory: The short-circuit local reads feature is enabled.
19/10/14 08:59:26 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol using SaslPropertiesResolver, configured QOP dfs.data.transfer.protection = authentication,privacy, configured class dfs.data.transfer.saslproperties.resolver.class = class org.apache.hadoop.security.SaslPropertiesResolver
19/10/14 08:59:26 DEBUG ipc.Client: The ping interval is 60000 ms.
19/10/14 08:59:26 DEBUG ipc.Client: Connecting to /10.49.70.13:8020
19/10/14 08:59:26 DEBUG security.UserGroupInformation: PrivilegedAction as:hdfs (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:796)
19/10/14 08:59:26 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

19/10/14 08:59:26 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
19/10/14 08:59:26 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/10/14 08:59:26 DEBUG security.SaslRpcClient: client isn't using kerberos
19/10/14 08:59:26 DEBUG security.UserGroupInformation: PrivilegedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/10/14 08:59:26 DEBUG security.UserGroupInformation: PrivilegedAction as:hdfs (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:720)
19/10/14 08:59:26 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/10/14 08:59:26 DEBUG security.UserGroupInformation: PrivilegedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/10/14 08:59:26 DEBUG ipc.Client: closing ipc connection to /10.49.70.13:8020: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:720)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:813)
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1558)
at org.apache.hadoop.ipc.Client.call(Client.java:1389)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1654)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:65)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:283)
at org.apache.hadoop.fs.Globber.glob(Globber.java:149)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2067)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:353)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:796)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:796)

7 Replies

Master Mentor

@saivenkatg55 

What do you mean by local (MIT)? If I guess right, you are accessing the HDP cluster from a client laptop or edge node where you installed the Kerberos client libraries. To communicate with secure Hadoop clusters that use Kerberos authentication (known as Kerberized clusters), a client uses the Kerberos client utilities. You MUST install these utilities on the system you are connecting from.

For Linux desktops, here are the different options:

Ubuntu: # apt install krb5-user
RHEL/CentOS: # yum install -y krb5-server krb5-libs krb5-workstation

These packages deliver the krb5.conf that the client uses to connect to a Kerberized cluster. The easier and recommended way is to copy the krb5.conf from the KDC server to all clients that need to connect to the Kerberized cluster; on RHEL/CentOS it is located at /etc/krb5.conf. This file points to the REALM, the KDC, and the admin server; a sketch of the copy is shown below.
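A minimal sketch of that copy, run as root on each client host (kdc.example.com is a placeholder for your actual KDC hostname):

# scp root@kdc.example.com:/etc/krb5.conf /etc/krb5.conf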

Here is an example krb5.conf:

[logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log

[libdefaults]
  default_realm = REDHAT.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true

[realms]
  REDHAT.COM = {
    kdc = KDC.REDHAT.COM
    admin_server = KDC.REDHAT.COM
  }

[domain_realm]
  .redhat.com = REDHAT.COM
  redhat.com = REDHAT.COM
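With krb5.conf in place, the client still needs a Kerberos ticket before any HDFS command will authenticate; the "auth:SIMPLE" and "Client cannot authenticate via:[TOKEN, KERBEROS]" lines in your log mean no ticket was found. A minimal sketch, where user@REDHAT.COM is a placeholder principal matching the example realm above:

# Obtain a ticket-granting ticket (prompts for the password); user@REDHAT.COM is a placeholder principal
$ kinit user@REDHAT.COM

# Verify a valid ticket is now in the cache
$ klist

# HDFS access should now authenticate via Kerberos instead of SIMPLE
$ hdfs dfs -ls /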


Otherwise, share /var/log/krb5kdc.log and /var/log/kadmind.log.

 

HTH


@Shelton Yes Shelton, I installed all the Kerberos libs and packages on all the cluster hosts. After enabling Kerberos, I am trying to access HDFS from the client, but it is not allowing me to access the NameNode, and the NameNode also lost its high availability. Attaching the krb5kdc and kadmin logs for your reference. Kindly do the needful.


@Shelton I am unable to add the kadmin and krb5kdc logs in the post body, since they contain too many lines.

Any idea how to attach the logs?

Master Mentor

@saivenkatg55 

Share /var/log/krb5kdc.log and /var/log/kadmind.log; you can use Big File Transfer.


@Shelton Big File Transfer is asking for a mail ID. Can you please share it so that I can send the logs?

 

Master Mentor

@saivenkatg55 

 

I can see your NameNode is in safe mode. Can you do the following?

As root:

# su - hdfs

# Check to validate what I saw in the log

$ hdfs dfsadmin -safemode get

# Resolve the lockout

$ hdfs dfsadmin -safemode leave

# Validate safe mode is off

$ hdfs dfsadmin -safemode get

That should resolve the issue!
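Note that on a Kerberized cluster these commands will only work once the hdfs user has a valid ticket. A minimal sketch, assuming the default HDP keytab location (the path and principal below are assumptions; check the actual principal with klist -kt):

# List the principal stored in the headless keytab (path is the HDP default, adjust if different)
$ klist -kt /etc/security/keytabs/hdfs.headless.keytab

# Obtain a ticket as that principal; hdfs-<clustername>@<REALM> is a placeholder
$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-<clustername>@<REALM>

# Then re-run the safe mode check
$ hdfs dfsadmin -safemode get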

Then also share /var/log/krb5kdc.log and /var/log/kadmind.log.


@Shelton Not able to execute any of the HDFS commands due to Kerberos.

 

hadoop fs -ls /
19/10/15 13:12:55 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/10/15 13:12:55 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/10/15 13:12:55 INFO retry.RetryInvocationHandler: java.io.IOException: DestHost:destPort hostname:8020 , LocalHost:localPort hostname/10.49.70.18:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS], while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over hostname10.49.70.14:8020 after 1 failover attempts. Trying to failover after sleeping for 1171ms