Can we pass custom Kerberos ticket caches for different realms at the same time?

Contributor

I have a scenario with two clusters, development and production. I have an edge node from which I can submit jobs to both clusters, so every time I submit jobs I have to kinit with the dev domain ID for the dev cluster and kinit with the prod domain ID for the prod cluster. If a dev job is running from the edge node and, in the meantime, cron triggers a kinit with the prod ID to submit prod jobs, the running dev job fails with:

Kerberos GSS exception failed Kerberos authentication error.

Is there any way to pass custom ticket caches for two different realms at the same time, so that jobs for both clusters can be submitted from the same edge node?

I went through the Kerberos documentation; it states that everything runs on a per-login-user basis, and if someone runs kinit in the meantime it overwrites the default cache file /tmp/krb5c* itself.
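For illustration, a minimal sketch of the collision being described, assuming both realms are used by the same edge-node user whose default cache is /tmp/krb5cc_<uid>; the principal names and job command are hypothetical:

# Dev job starts under the default cache
kinit devuser@DEV.EXAMPLE.COM            # writes the default cache /tmp/krb5cc_<uid>
hadoop jar dev-job.jar ... &             # long-running job authenticates from that cache

# Cron fires for the prod cluster while the dev job is still running
kinit produser@PROD.EXAMPLE.COM          # overwrites the same default cache
# Any later authentication by the dev job now finds prod credentials and fails with a GSS error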

1 ACCEPTED SOLUTION

Rising Star

@prashanth ramesh

You can specify the cache file when running kinit itself, as below:

[root@storm-h2 ~]# kinit -c /tmp/kafka.ticket -kt /etc/security/keytabs/kafka.service.keytab kafka/storm-h1@HDF.COM
[root@storm-h2 ~]# kinit -c /tmp/zk.ticket -kt /etc/security/keytabs/zk.service.keytab zookeeper/storm-h2@HDF.COM
[root@storm-h2 ~]#
[root@storm-h2 ~]# klist -c /tmp/kafka.ticket
Ticket cache: FILE:/tmp/kafka.ticket
Default principal: kafka/storm-h1@HDF.COM
Valid starting       Expires              Service principal
10/17/2017 17:03:00  10/18/2017 17:03:00  krbtgt/HDF.COM@HDF.COM
[root@storm-h2 ~]# klist -c /tmp/zk.ticket
Ticket cache: FILE:/tmp/zk.ticket
Default principal: zookeeper/storm-h2@HDF.COM
Valid starting       Expires              Service principal
10/17/2017 17:03:39  10/18/2017 17:03:39  krbtgt/HDF.COM@HDF.COM


REPLIES


Contributor

@nshetty

Thanks for the response. I tried the above solution to run a Hive job, but it picks up the ticket in the default location, i.e. "/tmp/krb5cc", and does not pick up the ticket in the custom location "/tmp/kafka.ticket".

So how can I pass this custom ticket cache as part of Hadoop commands? Is it possible to pass a custom ticket cache as part of the commands?

Rising Star

@prashanth ramesh

You can set that using the KRB5CCNAME environment variable.

Example: export KRB5CCNAME=/tmp/zk.ticket
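Putting the two answers together for the dev/prod scenario in the question, a minimal sketch; the keytab paths, principals, and realm names are hypothetical, and it assumes the Hadoop/Hive clients pick up KRB5CCNAME as described above:

# Obtain tickets for each realm into separate, non-default caches
kinit -c /tmp/dev.ticket  -kt /etc/security/keytabs/dev.headless.keytab  devuser@DEV.EXAMPLE.COM
kinit -c /tmp/prod.ticket -kt /etc/security/keytabs/prod.headless.keytab produser@PROD.EXAMPLE.COM

# Submit to the dev cluster using the dev cache (only this command's environment is affected)
KRB5CCNAME=/tmp/dev.ticket hadoop jar dev-job.jar ...

# Submit to the prod cluster, e.g. from the cron entry, using the prod cache
KRB5CCNAME=/tmp/prod.ticket hadoop jar prod-job.jar ...

Because neither kinit writes the default /tmp/krb5cc_* cache, the two realms' credentials no longer overwrite each other, and jobs for both clusters can run from the same edge node at the same time.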

Please accept the answer if your issue is solved.

Rising Star

@prashanth ramesh

Please accept the answer if the solution worked for you.