After Kerberos is enabled, HDFS authentication still cannot use any commands like hdfs dfs -ls /

Explorer

[root@cdp1~]# kinit hdfs
Password for hdfs@HADOOP.COM:********
[root@cdp1~]# klist
Ticket cache: KCM:0:86966
Default principal: hdfs@HADOOP.COM

Valid starting Expires Service principal
2022-02-11T01:52:21 2022-02-12T01:52:21 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2022-02-18T01:52:21

[root@cdp1 ~]# hdfs dfs -ls /
22/02/11 01:53:31 DEBUG util.Shell: setsid exited with exit code 0
22/02/11 01:53:31 DEBUG conf.Configuration: parsing URL jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar!/core-default.xml
22/02/11 01:53:31 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@4678c730
22/02/11 01:53:32 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/conf.cloudera.yarn/core-site.xml
22/02/11 01:53:32 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@369f73a2
22/02/11 01:53:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
22/02/11 01:53:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
22/02/11 01:53:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
22/02/11 01:53:32 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
22/02/11 01:53:32 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
22/02/11 01:53:32 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
22/02/11 01:53:32 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
22/02/11 01:53:32 DEBUG security.Groups: Creating new Groups object
22/02/11 01:53:32 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
22/02/11 01:53:32 DEBUG security.UserGroupInformation: hadoop login
22/02/11 01:53:32 DEBUG security.UserGroupInformation: hadoop login commit
22/02/11 01:53:32 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: User entry: "root"
22/02/11 01:53:32 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
22/02/11 01:53:32 DEBUG fs.FileSystem: Loading filesystems
22/02/11 01:53:32 DEBUG gcs.GoogleHadoopFileSystemBase: GHFS version: 2.1.2.7.1.7.0-551
22/02/11 01:53:32 DEBUG fs.FileSystem: gs:// = class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/jars/gcs-connector-2.1.2.7.1.7.0-551-shaded.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-aws-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-common-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: o3fs:// = class org.apache.hadoop.fs.ozone.OzoneFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-ozone-filesystem-hadoop3-1.1.0.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: ofs:// = class org.apache.hadoop.fs.ozone.RootedOzoneFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/lib/hadoop/hadoop-ozone-filesystem-hadoop3-1.1.0.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/jars/hadoop-hdfs-client-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/jars/hadoop-hdfs-client-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/jars/hadoop-hdfs-client-3.1.1.7.1.7.0-551.jar
22/02/11 01:53:32 DEBUG fs.FileSystem: Looking for FS supporting hdfs
22/02/11 01:53:32 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
22/02/11 01:53:32 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
22/02/11 01:53:32 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
22/02/11 01:53:32 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
22/02/11 01:53:32 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = true
22/02/11 01:53:32 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
22/02/11 01:53:32 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
22/02/11 01:53:32 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
22/02/11 01:53:32 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
22/02/11 01:53:32 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@63a12c68
22/02/11 01:53:32 DEBUG ipc.Client: getting client out of cache: Client-25ed7ec2740446f896ca1edf51121c24
22/02/11 01:53:32 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
22/02/11 01:53:32 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
22/02/11 01:53:32 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@22cadff5: starting with interruptCheckPeriodMs = 60000
22/02/11 01:53:32 DEBUG shortcircuit.DomainSocketFactory: The short-circuit local reads feature is enabled.
22/02/11 01:53:32 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
22/02/11 01:53:32 DEBUG fs.Globber: Created Globber for path=/, symlinks=true
22/02/11 01:53:32 DEBUG fs.Globber: Starting: glob /
22/02/11 01:53:32 DEBUG fs.Globber: Filesystem glob /
22/02/11 01:53:32 DEBUG fs.Globber: Pattern: /
22/02/11 01:53:32 DEBUG ipc.Client: The ping interval is 60000 ms.
22/02/11 01:53:32 DEBUG ipc.Client: Connecting to cdp1.localdomain/192.168.159.20:8020
22/02/11 01:53:32 DEBUG ipc.Client: Setup connection to cdp1.localdomain/192.168.159.20:8020
22/02/11 01:53:32 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
22/02/11 01:53:33 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

22/02/11 01:53:33 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
22/02/11 01:53:33 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
22/02/11 01:53:33 DEBUG security.SaslRpcClient: client isn't using kerberos
22/02/11 01:53:33 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
22/02/11 01:53:33 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:741)
22/02/11 01:53:33 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
22/02/11 01:53:33 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
22/02/11 01:53:33 DEBUG ipc.Client: closing ipc connection to cdp1.localdomain/192.168.159.20:8020: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:778)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:741)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:835)
at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
at org.apache.hadoop.ipc.Client.call(Client.java:1405)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:957)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1693)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1745)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1742)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1757)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:362)
at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2103)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:353)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
... 36 more
22/02/11 01:53:33 DEBUG ipc.Client: IPC Client (169663597) connection to cdp1.localdomain/192.168.159.20:8020 from root: closed
22/02/11 01:53:33 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.

 

What I don't understand is why the login user here is still root:

22/02/11 01:53:32 DEBUG security.UserGroupInformation: hadoop login
22/02/11 01:53:32 DEBUG security.UserGroupInformation: hadoop login commit
22/02/11 01:53:32 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: User entry: "root"
22/02/11 01:53:32 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
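For what it is worth, the UGI login step can be exercised on its own with a stock Hadoop class (an illustrative aside):

hadoop org.apache.hadoop.security.UserGroupInformation

Its main() prints the current user and authentication method, which in this situation would again report root (auth:SIMPLE) because no usable Kerberos credentials are found.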

7 Replies

Super Guru

This looks like your HDFS service is misconfigured.

Are you using CDP or open-source HDFS?

Could you please share your HDFS configuration, specifically the properties that you set to enable Kerberos?

 

André

 

--
Was your question answered? Please take some time to click on "Accept as Solution" below this post.
If you find a reply useful, say thanks by clicking on the thumbs up button.

Explorer

Thank you. This is CDP 7.1.7 with CM 7.4.4, and the OS is Red Hat 8.2.

Enabling Kerberos went very smoothly and I can authenticate with kinit, but the commands still don't work.

Moreover, it is not only HDFS: Hive and the other components fail as well.

I believe the Kerberos configuration of the HDFS components is handled by CDP, and I can see in the CM web interface that the relevant configuration of these components has been changed.

 

The following is /etc/krb5.conf:

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = HADOOP.COM
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
renewable = true
rdns = false
udp_prefrence_limit = 0

[realms]
HADOOP.COM = {
  kdc = cdp1.localdomain
  admin_server = cdp1.localdomain
}

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM

 

The following is /etc/hadoop/conf/hdfs-site.xml:
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@HADOOP.COM</value>
</property>
<property>
<name>dfs.namenode.kerberos.internal.spnego.principal</name>
<value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/_HOST@HADOOP.COM</value>
</property>
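As an illustrative aside, a couple of standard client-side checks can confirm what the shell actually picks up:

# print the effective client-side authentication setting (expected: kerberos)
hdfs getconf -confKey hadoop.security.authentication

# show the ticket cache type and encryption types for the current credentials
klist -e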

 

Super Guru

Try changing udp_preference_limit to 1 in the krb5.conf file on all the hosts and restart your cluster.

 

Also note that you have a typo in that parameter's name: the correct spelling is udp_preference_limit, not udp_prefrence_limit.
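For reference, the corrected line would sit in the [libdefaults] section of /etc/krb5.conf (a minimal sketch; the rest of the section stays as posted above):

[libdefaults]
    udp_preference_limit = 1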


Explorer

Thanks

I made these changes, but that doesn't seem to be the cause.

I'm new to CDP, so maybe I made some more obvious mistake. The obvious error is that the user loaded here is wrong, but I don't understand why. It should be 'hdfs', not 'root':

 

22/02/11 01:53:32 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
22/02/11 01:53:32 DEBUG security.UserGroupInformation: User entry: "root"

22/02/11 01:53:32 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)

 

Perhaps you can tell from this that it is a component configuration problem. In fact, when Kerberos is enabled on CDP, the configuration changes to the components are transparent to me; unless there is a special need, it seems I don't have to touch them.

Super Guru

Which steps did you take to enable Kerberos? Did you use the wizard in Cloudera Manager?

 

How many nodes does your cluster have?

Which node are you running these commands from? Have you tried from other nodes (e.g., from the NameNode host)?


Explorer

Yes, I used the Cloudera Manager wizard. The commands were run on the NameNode host; I have also tried other nodes, and the same problem occurs.

My test cluster has only four nodes. I also have a CDP 7.1.5 cluster on CentOS 7.9 that did not run into this problem when Kerberos was enabled.

I will try reinstalling CDP 7.1.5 on Red Hat 8 to see whether the problem occurs there as well. Thank you for your help.

Contributor

This reply might be late, but KCM- and keyring-based Kerberos credential caches are not supported by Hadoop. The klist output below shows the ticket sitting in a KCM cache (the default on RHEL 8), which is why the Hadoop client finds no usable Kerberos credentials and falls back to the local Unix user with SIMPLE authentication; a possible workaround is sketched after the excerpt.

 

# klist
Ticket cache: KCM:0:86966
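A minimal sketch of a workaround, assuming a standard MIT Kerberos client (the principal and paths are illustrative):

# Option 1: point the current shell at a file-based credential cache and re-authenticate
export KRB5CCNAME=FILE:/tmp/krb5cc_$(id -u)
kinit hdfs
klist          # the ticket cache should now report FILE:/tmp/krb5cc_0
hdfs dfs -ls /

# Option 2: make file-based caches the default for all logins by adding this line
# to the [libdefaults] section of /etc/krb5.conf on every host:
#   default_ccache_name = FILE:/tmp/krb5cc_%{uid}

With the ticket in a FILE: cache, UserGroupInformation should log in as hdfs@HADOOP.COM with auth:KERBEROS instead of falling back to SIMPLE.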