
Kerberos ticket error: No rules applied to hdfs@CDH5.14.2

Contributor

Users are synced to hosts as user@example.com. I can run hadoop fs -ls as the hdfs user without a problem, but when I try it as a user from AD I get the error "INFO util.KerberosName: No auth_to_local rules applied to user@example.com".

 

Here is the complete log:


[exampleuser@example.com@explehost1 ~]$ hadoop fs -ls
18/06/28 02:20:56 DEBUG util.Shell: setsid exited with exit code 0
18/06/28 02:20:56 DEBUG conf.Configuration: parsing URL jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar!/core-default.xml
18/06/28 02:20:56 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@271053e1
18/06/28 02:20:56 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/conf.cloudera.YARN/core-site.xml
18/06/28 02:20:56 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@5bc79255
18/06/28 02:20:56 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/06/28 02:20:56 TRACE core.TracerId: ProcessID(fmt=%{tname}/%{ip}): computed process ID of "FsShell/hostiP"
18/06/28 02:20:56 TRACE core.TracerPool: TracerPool(Global): adding tracer Tracer(FsShell/hostiP)
18/06/28 02:20:56 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/06/28 02:20:56 TRACE core.Tracer: Created Tracer(FsShell/hostiP) for FsShell
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[GetGroups])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
18/06/28 02:20:56 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
18/06/28 02:20:56 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/06/28 02:20:56 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
18/06/28 02:20:56 DEBUG security.Groups: Creating new Groups object
18/06/28 02:20:56 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=1000; warningDeltaMs=5000
18/06/28 02:20:56 DEBUG security.UserGroupInformation: hadoop login
18/06/28 02:20:56 DEBUG security.UserGroupInformation: hadoop login commit
18/06/28 02:20:56 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: exampleuser@example.com
18/06/28 02:20:56 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: exampleuser@example.com" with name exampleuser@example.com
18/06/28 02:20:56 INFO util.KerberosName: No auth_to_local rules applied to exampleuser@example.com
18/06/28 02:20:56 DEBUG security.UserGroupInformation: User entry: "exampleuser@example.com"
18/06/28 02:20:56 DEBUG security.UserGroupInformation: UGI loginUser:exampleuser@example.com (auth:SIMPLE)
18/06/28 02:20:56 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/06/28 02:20:56 TRACE core.TracerId: ProcessID(fmt=%{tname}/%{ip}): computed process ID of "FSClient/hostiP"
18/06/28 02:20:56 TRACE core.TracerPool: TracerPool(Global): adding tracer Tracer(FSClient/hostiP)
18/06/28 02:20:56 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/06/28 02:20:56 TRACE core.Tracer: Created Tracer(FSClient/hostiP) for FSClient
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
18/06/28 02:20:56 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
18/06/28 02:20:56 TRACE security.SecurityUtil: Name lookup for namenode XX.XX.XX.XX
18/06/28 02:20:56 TRACE security.SecurityUtil: Name lookup for spmbaexampleuser.example.com took 0 ms.
18/06/28 02:20:56 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://nameservice1
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/28 02:20:56 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
18/06/28 02:20:56 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/06/28 02:20:56 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@534a5a98
18/06/28 02:20:56 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:56 DEBUG azure.NativeAzureFileSystem: finalize() called.
18/06/28 02:20:56 DEBUG azure.NativeAzureFileSystem: finalize() called.
18/06/28 02:20:57 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/06/28 02:20:57 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/06/28 02:20:57 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@6a8b3053: starting with interruptCheckPeriodMs = 60000
18/06/28 02:20:57 TRACE unix.DomainSocketWatcher: DomainSocketWatcher(337574644): adding notificationSocket 168, connected to 167
18/06/28 02:20:57 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/06/28 02:20:57 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/06/28 02:20:57 TRACE ipc.ProtobufRpcEngine: 1: Call -> namenode/XX.XX.XX.XX:8020: getFileInfo {src: "/user/exampleuser@example.com"}
18/06/28 02:20:57 DEBUG ipc.Client: The ping interval is 60000 ms.
18/06/28 02:20:57 DEBUG ipc.Client: Connecting to namenode/XX.XX.XX.XX:8020
18/06/28 02:20:57 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: starting, having connections 1
18/06/28 02:20:57 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com got value #0
18/06/28 02:20:59 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2343ms
18/06/28 02:20:59 TRACE ipc.ProtobufRpcEngine: 1: Response <- namenode/XX.XX.XX.XX:8020: getFileInfo {}
ls: `.': No such file or directory
18/06/28 02:20:59 TRACE core.TracerPool: TracerPool(Global): removing tracer Tracer(FsShell/hostiP)
18/06/28 02:20:59 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@1e6a3214
18/06/28 02:20:59 DEBUG ipc.Client: Stopping client
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: closed
18/06/28 02:20:59 DEBUG ipc.Client: IPC Client (1029472813) connection to namenode/XX.XX.XX.XX:8020 from exampleuser@example.com: stopped, remaining connections 0

 

In need of serious help. Thanks in advance!

1 ACCEPTED SOLUTION

Master Guru

That's great news.

To avoid any confusion, the automatically generated auth_to_local rules (based on a realm listed in "Trusted Kerberos Realms") would look like this:

 

RULE:[1:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
RULE:[2:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//

It appears that some of your characters were interpreted as special when you printed the generated rules.
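For reference, these generated rules end up in the hadoop.security.auth_to_local property of core-site.xml. A minimal sketch of what that property looks like (EXAMPLE.COM stands in for your actual realm; in Cloudera Manager the value is generated for you, so this is only for illustration):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
RULE:[1:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
RULE:[2:$1@$0](.*@\QEXAMPLE.COM\E$)s/@\QEXAMPLE.COM\E$//
DEFAULT
  </value>
</property>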



16 REPLIES

Master Guru

No problem. As long as you have a reasonable solution to address the issue, that's all good. 🙂

Contributor

@bgooley HDFS is not picking up the users from supergroup@domain.com. Does the auth_to_local rule work for groups?

 

hadoop.security.group.mapping = org.apache.hadoop.security.ShellBasedUnixGroupsMapping

 

[sbalusu@domain.com@hostname ~]$ hadoop fs -chown hdfs:supergroup /user/test
chown: changing ownership of '/user/test': Non-super user cannot change owner

[sbalusu@domain.com@hostname ~]$ getent group supergroup@domain.com
supergroup@domain.com:*:514734591:sbalusu@domain.com

 

I tried both the group short name and the group FQDN:
dfs.permissions.supergroup / dfs.permissions.superusergroup = supergroup@domain.com

dfs.permissions.supergroup / dfs.permissions.superusergroup = supergroup

 

Any suggestions?


Master Guru

@balusu,

 

auth_to_local is used to map a user's principal to a unix name only.  It is not used for anything group-oriented.
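A quick way to see that split in practice (a sketch using names from this thread; the output format can vary by Hadoop version):

# auth_to_local maps a Kerberos principal to a short unix name
$ hadoop org.apache.hadoop.security.HadoopKerberosName sbalusu@DOMAIN.COM
Name: sbalusu@DOMAIN.COM to sbalusu

# group membership is resolved by the OS through the configured group mapping, not by Kerberos
$ id -Gn sbalusu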

 

By default, only the "hdfs" user is a superuser so it is the only user who can perform "chown" operations.

If you want to make other users superusers, you can do so by defining which group will be the "supergroup" and which users belong to it.

 

The group must be accessible via the OS (getent group supergroup). The default name for the supergroup is "supergroup".

 

In Cloudera Manager you can see this configuration under HDFS --> Configuration --> Superuser Group.

 

Is there a reason you are trying to attach the "@domain" onto the group name?

I would recommend adding a group named "supergroup" if you don't need to change the default.  Then add sbalusu as a member.
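For example, with a local group (a sketch; the gid shown is illustrative, and your environment may manage groups through SSSD instead):

$ groupadd supergroup              # create the group named in dfs.permissions.superusergroup
$ usermod -aG supergroup sbalusu   # add the user to it
$ getent group supergroup          # confirm the OS resolves it
supergroup:x:1200:sbalusu

After that, restart HDFS so the supergroup setting takes effect.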

 

Note that this has nothing to do with Kerberos at this point; it is all group mapping for Hadoop.

Contributor

@bgooley

 

I apologise for the confusion. The supergroup I mentioned is hadoopadmingroup@example.com.

 

In Cloudera Manager I changed this configuration under HDFS --> Configuration --> Superuser Group and tried setting it to hadoopadmingroup@example.com and then to hadoopadmingroup; neither of them worked.

 

SSSD is set up to append the domain name to Unix group and user names. Somehow HDFS is not able to map the user to a group when the domain name is at the end.


True, I agree this is not a Kerberos issue. My intention is to find out whether Hadoop can work with a domain name at the end of the group name, so that I can have a conversation with the Unix team about trimming it.

 

Thanks,

Siva


Master Guru

@balusu,

 

Yeah, I'm not sure if supergroup mapping will work if the group has the domain on it. I can't confirm it won't, but if you changed the group name, restarted HDFS, and still didn't have group access, that does indicate the config may not work.

 

You may try running "hdfs groups <user>" to see if that command "sees" your groups.
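When the mapping is working, the groups should appear after the colon (illustrative output):

$ hdfs groups sbalusu
sbalusu : supergroup

An empty list after the colon means Hadoop's group mapping could not resolve any groups for that user.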

Contributor

@bgooley 

 

Ya, it does not seem to be working. 

 

HDFS --> Configuration --> Superuser Group = hadoopadmingroup@example.com and then hadoopadmingroup; both of them yielded zero groups.

 

[sbalusu@example.com@hostname ~]$ hdfs groups sbalusu@example.com
sbalusu_c@example.com :
[sbalusu@example.com@hostname ~]$ hdfs groups sbalusu_c
sbalusu_c :


Thanks & Regards,
Siva

Contributor
I have SSSD configured to use short names and everything looks good now! Thanks @bgooley
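For anyone who hits this later: the usual SSSD change for short names is use_fully_qualified_names in the domain section of /etc/sssd/sssd.conf (a sketch; verify against your SSSD version, and clear the cache after changing it):

[domain/example.com]
use_fully_qualified_names = False

$ sss_cache -E          # invalidate cached SSSD entries
$ service sssd restart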