Contributor

Kerberos between two clusters is failing

Hi,

 

We have two clusters: one runs all the CDH services and the other runs only Kafka and ZooKeeper. The clusters use different realms, and we have enabled cross-realm trust between them. When I kinit with cluster A's realm in cluster B and run an hdfs ls against cluster A, I get the error below.

hdfs dfs -ls hdfs://srvbdadvlsk20.devoperational1.xxxxxx.pre.corp:8020/

ls: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
Warning: fs.defaultFS is not set when running "ls" command.

When I kinit with cluster B's realm in cluster A and run an hdfs ls against cluster A, I get the error below.

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
Caused by: java.io.IOException: Couldn't setup connection for SVC_TEST@DEVKAFKA.xxxx.PRE.CORP to xxxxxxxxx.devoperational1.xxxx.pre.corp/xxxxxxx:8020
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:710)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:681)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:769)
at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:396)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1557)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
... 29 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:416)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:594)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:396)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:761)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:757)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
... 32 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 41 more
Caused by: KrbException: Fail to create credential. (63) - No service creds
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:162)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
... 44 more

Re: Kerberos between two clusters is failing

For your first scenario's error, you appear to be using an insecure client config to talk to a remote secure cluster. Perhaps you're on a gateway host that does not have updated client configs? Ideally the error should mirror the one from your second scenario.
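
A quick way to confirm this on the host where you ran the command (a sketch, assuming a standard CDH client configuration under /etc/hadoop/conf):

grep -A1 'hadoop.security.authentication' /etc/hadoop/conf/core-site.xml
# On a correctly configured secure gateway, the line after the match should
# read <value>kerberos</value>. If it reads "simple", or the property is
# absent, the client attempts SIMPLE auth and the secure NameNode rejects
# it with exactly the "SIMPLE authentication is not enabled" error above.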

For your second scenario's error, the two realms do not appear to have a
cross-realm trust set up. If you're using MIT Kerberos, follow this guide for
your two realms:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/sec-k...
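
In outline, a two-way trust needs matching cross-realm krbtgt principals on both KDCs. A minimal sketch using this thread's realm names (run in kadmin against each KDC, entering the same password on both sides so the keys, kvnos, and enctypes end up identical):

# Create both directions of the trust; repeat the same two commands, with
# the same passwords, on the other KDC:
kadmin.local -q "addprinc krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP"
kadmin.local -q "addprinc krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP"

Every client host also needs to map the remote cluster's hostnames to the remote realm, typically via [domain_realm] entries (or capaths) in /etc/krb5.conf.
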
Backline Customer Operations Engineer
Contributor

Re: Kerberos between two clusters is failing

Hi @Harsh J

In the first case, I do not have the HDFS service in that cluster, so I am not sure which client configs you were referring to. For the second one, cross-realm trust is set up, but I will have a look at it again.
Contributor

Re: Kerberos between two clusters is failing

Hi @Harsh J

 

Cross-realm is also fine. Could you help point out anything I may have missed in the trust setup?

 

Cluster B

krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP
krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP

Cluster A

krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP
krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
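
For reference, a listing like this can be produced on each KDC (a sketch, assuming MIT's kadmin.local on the KDC host):

kadmin.local -q "listprincs krbtgt*"
# The glob limits output to krbtgt principals; both cross-realm entries
# must exist on each KDC (key/enctype agreement is a separate check).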


Re: Kerberos between two clusters is failing

You can test the cross-realm trust using MIT Kerberos' 'kvno' command:

Assuming realms A and B,
1. kinit as any identity from realm A
2. Run: kvno hdfs/namenode-host@B

If kvno grabs a service ticket, everything is fine with the trust between B
and A.

Repeat the test in the inverse fashion (kinit with a realm B identity, kvno
against A's namenode-host) to check the other direction.
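
As a concrete illustration of the inverse test, which matches the failing SVC_TEST case in this thread (a sketch; <namenode-host> is a placeholder for cluster A's actual NameNode FQDN):

# Test the B -> A direction: authenticate as a realm B identity, then
# request a service ticket for cluster A's NameNode:
kinit SVC_TEST@DEVKAFKA.xxxxxxxx.PRE.CORP
kvno hdfs/<namenode-host>@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
# Success prints a "kvno = <n>" line; a KDC error here means this
# direction of the trust (or the client's realm routing) is broken.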

P.S. Ensure that the encryption types of all the krbtgt principals are the
same on both KDCs (verify with kadmin's getprinc <principal>), and that both
clusters pass the Hosts -> Security Inspector check in CM.
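
For the enctype check, something along these lines on each KDC (a sketch; kadmin.local must run on the KDC host itself):

kadmin.local -q "getprinc krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP"
# Compare the key version number and the encryption types listed for the
# principal between the two KDCs; cross-realm TGS requests fail when the
# keys, kvnos, or enctypes of the shared krbtgt principals differ.
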
Backline Customer Operations Engineer
Contributor

Re: Kerberos between two clusters is failing

@Harsh J

 

Trust is fine from A to B.

 

Cluster A: has all services

[root@srvbdadvlsk20 ~]# kvno hdfs/srvbdadvlsk21.devoperational1.xxxxxxxx.pre.corp@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
hdfs/srvbdadvlsk21.devoperational1.xxxxxxxx.pre.corp@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP: kvno = 2
[root@srvbdadvlsk20 ~]#
[root@srvbdadvlsk20 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: c0252495@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP

Valid starting Expires Service principal
11/08/17 11:45:44 11/09/17 11:45:44 krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
renew until 11/15/17 11:45:44
11/08/17 11:45:49 11/09/17 11:45:44 krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
renew until 11/15/17 11:45:44
11/08/17 11:46:29 11/09/17 11:45:44 hdfs/srvbdadvlsk21.devoperational1.xxxxxxxx.pre.corp@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
renew until 11/13/17 11:46:29
[root@srvbdadvlsk20 ~]#

Cluster B: has only Kafka and ZooKeeper

[root@srvbdadvlsk36 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: SVC_TEST@DEVKAFKA.xxxxxxxx.PRE.CORP

Valid starting Expires Service principal
11/08/17 11:49:30 11/09/17 11:49:30 krbtgt/DEVKAFKA.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP
renew until 11/15/17 11:49:30
11/08/17 11:49:42 11/09/17 11:49:30 krbtgt/DEVOPERATIONAL1.xxxxxxxx.PRE.CORP@DEVKAFKA.xxxxxxxx.PRE.CORP
renew until 11/15/17 11:49:30
[root@srvbdadvlsk36 ~]# kvno hdfs/srvbdadvlsk21.devoperational1.xxxxxxxx.pre.corp@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
kvno: KDC returned error string: PROCESS_TGS while getting credentials for hdfs/srvbdadvlsk21.devoperational1.xxxxxxxx.pre.corp@DEVOPERATIONAL1.xxxxxxxx.PRE.CORP
[root@srvbdadvlsk36 ~]#
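
A hedged next step for a PROCESS_TGS error like the one above: the KDC that rejected the request logs the underlying cause (here, likely cluster A's KDC, which processes the TGS request presented with the cross-realm TGT). Assuming a default MIT install whose [logging] section sends KDC output to /var/log/krb5kdc.log:

# On the KDC handling the failing request, watch the log while re-running
# the kvno command; the TGS_REQ line usually names the real cause, e.g.
# an enctype or key-version mismatch on the cross-realm krbtgt:
tail -f /var/log/krb5kdc.log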

 

 
