
Kerberos ticket error: No rules applied to hdfs@CDH5.14.2

Contributor

Users are synced to hosts as user@example.com. I can run hadoop fs -ls as the hdfs user without a problem, but when I try as a user from AD I get the error "INFO util.KerberosName: No auth_to_local rules applied to exampleuser@example.com".

 

Here is the complete log:


[exampleuser@example.com@explehost1 ~]$ hadoop fs -ls
18/06/28 02:20:56 DEBUG util.Shell: setsid exited with exit code 0
18/06/28 02:20:56 DEBUG conf.Configuration: parsing URL jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar!/core-default.xml
18/06/28 02:20:56 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@271053e1
18/06/28 02:20:56 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/conf.cloudera.YARN/core-site.xml
18/06/28 02:20:56 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@5bc79255
18/06/28 02:20:56 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/06/28 02:20:56 TRACE core.TracerId: ProcessID(fmt=%{tname}/%{ip}): computed process ID of "FsShell/hostiP"
18/06/28 02:20:56 TRACE core.TracerPool: TracerPool(Global): adding tracer Tracer(FsShell/hostiP)
18/06/28 02:20:56 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/06/28 02:20:56 TRACE core.Tracer: Created Tracer(FsShell/hostiP) for FsShell
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[GetGroups])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
18/06/28 02:20:56 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/06/28 02:20:56 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
18/06/28 02:20:56 DEBUG security.Groups: Creating new Groups object
18/06/28 02:20:56 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=1000; warningDeltaMs=5000
18/06/28 02:20:56 DEBUG security.UserGroupInformation: hadoop login
18/06/28 02:20:56 DEBUG security.UserGroupInformation: hadoop login commit
18/06/28 02:20:56 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: exampleuser@example.com
18/06/28 02:20:56 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: exampleuser@example.com" with name exampleuser@example.com
18/06/28 02:20:56 INFO util.KerberosName: No auth_to_local rules applied to exampleuser@example.com
18/06/28 02:20:56 DEBUG security.UserGroupInformation: User entry: "exampleuser@example.com"
18/06/28 02:20:56 DEBUG security.UserGroupInformation: UGI loginUser:exampleuser@example.com (auth:SIMPLE)
18/06/28 02:20:56 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/06/28 02:20:56 TRACE core.TracerId: ProcessID(fmt=%{tname}/%{ip}): computed process ID of "FSClient/hostiP"
18/06/28 02:20:56 TRACE core.TracerPool: TracerPool(Global): adding tracer Tracer(FSClient/hostiP)
18/06/28 02:20:56 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/06/28 02:20:56 TRACE core.Tracer: Created Tracer(FSClient/hostiP) for FSClient
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
18/06/28 02:20:56 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
18/06/28 02:20:56 TRACE security.SecurityUtil: Name lookup for namenode XX.XX.XX.XX
18/06/28 02:20:56 TRACE security.SecurityUtil: Name lookup for spmbaexampleuser.example.com took 0 ms.
18/06/28 02:20:56 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://nameservice1
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
18/06/28 02:20:56 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/06/28 02:20:56 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@534a5a98
18/06/28 02:20:56 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:56 DEBUG azure.NativeAzureFileSystem: finalize() called.
18/06/28 02:20:56 DEBUG azure.NativeAzureFileSystem: finalize() called.
18/06/28 02:20:57 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/06/28 02:20:57 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/06/28 02:20:57 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@6a8b3053: starting with interruptCheckPeriodMs = 60000
18/06/28 02:20:57 TRACE unix.DomainSocketWatcher: DomainSocketWatcher(337574644): adding notificationSocket 168, connected to 167
18/06/28 02:20:57 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/06/28 02:20:57 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/06/28 02:20:57 TRACE ipc.ProtobufRpcEngine: 1: Call -> namenode/XX.XX.XX.XX:8020: getFileInfo {src: "/user/exampleuser@example.com"}
18/06/28 02:20:57 DEBUG ipc.Client: The ping interval is 60000 ms.
18/06/28 02:20:57 DEBUG ipc.Client: Connecting to namenode/XX.XX.XX.XX:8020
18/06/28 02:20:57 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: starting, having connections 1
18/06/28 02:20:57 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com got value #0
18/06/28 02:20:59 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2343ms
18/06/28 02:20:59 TRACE ipc.ProtobufRpcEngine: 1: Response <- namenode/XX.XX.XX.XX:8020: getFileInfo {}
ls: `.': No such file or directory
18/06/28 02:20:59 TRACE core.TracerPool: TracerPool(Global): removing tracer Tracer(FsShell/hostiP)
18/06/28 02:20:59 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: Stopping client
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: closed
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: stopped, remaining connections 0

 

In need of serious help. Thanks in advance!


16 REPLIES

Master Guru

Hi @balusu,

 

Actually, the error in your log snippet is:

 

18/06/28 02:20:56 INFO util.KerberosName: No auth_to_local rules applied to exampleuser@example.com.

 

This error occurs when no rule in the "hadoop.security.auth_to_local" property in the server's core-site.xml matched the principal "exampleuser@example.com".

 

This is not a Kerberos error; rather, it is a message returned by Hadoop code when it tries to map your principal to a Unix username.

 

Generally, if you are attempting to act on a Hadoop service as a user who is not in the Hadoop cluster's Kerberos realm, you need to make sure that the hadoop.security.auth_to_local property includes rules that match the principal and convert it to just a username. Cloudera Manager will create such rules for you if you add the other realm to the "Trusted Realms" or "Trusted Kerberos Realms" configuration.
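
As a quick sanity check, you can test how the active rules map a principal using Hadoop's built-in HadoopKerberosName helper (a sketch; exampleuser@EXAMPLE.COM is a placeholder principal):

# Prints how the current auth_to_local rules map a principal to a short name.
# If no rule matches, the principal comes back unchanged and the
# "No auth_to_local rules applied" message is logged.
hadoop org.apache.hadoop.security.HadoopKerberosName exampleuser@EXAMPLE.COM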

 

see:

 

https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_sg_kerbprin_to_sn.html

 

Note that you will need to deploy client configuration and restart the cluster after making this change.

 

-Ben

Contributor

Hi Ben,

 

Thanks for the quick reply. I have already tried that, but the error remains the same:

 

Trusted Kerberos Realms: Example.COM
Additional Rules to Map Kerberos Principals to Short Names: RULE:[1:$1](sbalusu\..*)s/sbalusu\.(.*)/$1/g

 

 

Thanks & Regards,
Siva

Contributor

Hi Ben,

 

I had the pattern wrong in the rule. Here is the updated, working one:


RULE:[1:$1@$0](.*@\EXAMPLE.COM)s/@\EXAMPLE.COM//
RULE:[2:$1@$0](.*@\EXAMPLE.COM)s/@\EXAMPLE.COM//

 

 

Thanks & Regards,

Siva

Master Guru (ACCEPTED SOLUTION)

That's great news.

To avoid any confusion, the automatically generated auth_to_local rules (based on a realm listed in "Trusted Kerberos Realms") would look like this:

 

RULE:[1:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
RULE:[2:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//

It appears that some of your characters were interpreted as special when you printed the generated rules. Note that \Q and \E quote the enclosed realm so that the dot in EXAMPLE.COM is matched literally rather than as a regex wildcard.
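
For illustration, the difference can be seen with grep's PCRE mode (a sketch assuming GNU grep with the -P option is available; the principals are placeholders):

# \Q...\E matches the realm literally; without it, the unquoted "."
# is a regex wildcard that matches any character.
echo 'user@EXAMPLE.COM' | grep -P '.*@\QEXAMPLE.COM\E$'   # matches
echo 'user@EXAMPLEXCOM' | grep -P '.*@EXAMPLE.COM$'       # also matches (wildcard dot)
echo 'user@EXAMPLEXCOM' | grep -P '.*@\QEXAMPLE.COM\E$'   # no match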

 

 

Contributor

I tried exactly the same, but it threw the error below:

 

Failed to start namenode.
java.util.regex.PatternSyntaxException: Illegal/unsupported escape sequence near index 22
.*@\EXAMPLE.COM\E$
^

 

Also, I had to add example.com to make it work. Can you please suggest whether there is a way to ignore case in the rule?

Master Guru

@balusu,

 

As mentioned, you would want to add the realm to the HDFS configuration "Trusted Kerberos Realms".  This will allow Cloudera Manager to generate the necessary auth_to_local rule for that realm.

 

The regex you used is, indeed, not correct as you have two "\E" but no "\Q" to match.

 

I am not sure exactly what trouble you had with the case of realms, but the realm should always be uppercase.

 

For more information on regex, etc., this is a great resource:

 

https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_kerbprin_to_sn.html#topic_19_...

Contributor

@bgooley

 

Yes, the pattern was wrong, and I am glad the documentation link you provided is very clear.

 

I observed an interesting thing in our environment:

 

With only example.com as the trusted realm:
[sbalusu@example.com@host ~]$ hadoop org.apache.hadoop.security.HadoopKerberosName sbalusu@example.com
Name: sbalusu@example.com to sbalusu
[sbalusu@example.com@host ~]$ hadoop org.apache.hadoop.security.HadoopKerberosName sbalusu@EXAMPLE.COM
Name: sbalusu@EXAMPLE.COM to sbalusu

With only EXAMPLE.COM as the trusted realm:
[sbalusu@example.com@host ~]$ hadoop org.apache.hadoop.security.HadoopKerberosName sbalusu@EXAMPLE.COM
Name: sbalusu@EXAMPLE.COM to sbalusu
[sbalusu@example.com@host ~]$ hadoop org.apache.hadoop.security.HadoopKerberosName sbalusu@example.com
18/06/28 17:41:41 INFO util.KerberosName: No auth_to_local rules applied to sbalusu@example.com
Name: sbalusu@example.com to sbalusu@example.com

And kinit shows EXAMPLE.COM:
[sbalusu@example.com@host ~]$ kinit
Password for sbalusu@EXAMPLE.COM:
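
For completeness, the standard klist command (part of the same MIT Kerberos tooling as kinit) shows the principal on the current ticket, which should confirm the uppercase realm:

# Lists the ticket cache and its default principal, e.g. sbalusu@EXAMPLE.COM.
klist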

 

Not sure where Hadoop is picking up the lowercase realm from.

  

Master Guru

@balusu,

 

Can you clarify what you are trying to test with lowercase realms? The realm in the Kerberos principal should be uppercase, so the lowercase is not expected.

 

If you "kinit" make certain you specify the realm in uppercase.

 

The auth_to_local rules are not intended to match a lowercase realm, so the response you get is expected.

 

-Ben

Contributor
@bgooley

I completely agree with the uppercase-realm convention; it has worked fine for many clusters we deployed for different clients, but somehow the current cluster only works when I have the lowercase realm in Trusted Kerberos Realms.

Thanks,
Siva