klist: no credentials cache found
Labels: Apache Hadoop
Created ‎09-21-2017 04:46 PM
I have a "klist" call in front of every hdfs command in my script. When the job starts, klist reports that the credentials are present and valid for the next few days. But as soon as the next hdfs command runs, it reports:
"klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_603)"
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
[2017-09-20 08:24:57,335] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at java.security.AccessController.doPrivileged(Native Method)
[2017-09-20 08:24:57,336] {bash_operator.py:74} INFO - at javax.security.auth.Subject.doAs(Subject.java:415)
I am not sure whether some other process is corrupting the cache.
Has anyone faced the same issue?
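A side note on the klist checks described above: instead of printing the full klist output before every hdfs command, `klist -s` can be used as a quiet validity probe, since it prints nothing and exits non-zero when the cache is missing or the ticket has expired. This is a hedged sketch of that idea, not something from the original script:

```shell
# Hedged sketch: check the ticket cache silently before an hdfs command.
# "klist -s" exits non-zero if there is no valid, non-expired ticket.
if klist -s 2>/dev/null; then
    status="valid"
else
    status="missing"
fi
echo "ticket cache: $status"
# a script could abort here (or re-run kinit) when $status is "missing"
```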
Created ‎10-02-2017 08:22 PM
@Robert Levas I found it: a job from another scheduler was issuing kdestroy, which removed the cache.
Thanks for your answers.
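One way to guard against the collision described in this answer is to give the job its own credential cache via KRB5CCNAME, so a kdestroy issued by another scheduler's job (which operates on the default /tmp/krb5cc_&lt;uid&gt;) cannot touch it. This is only a sketch; the keytab path comes from the thread, and the cache name suffix and principal placeholder are illustrative:

```shell
# Hedged sketch: isolate this job's Kerberos cache from the shared default.
# The per-process suffix ($$) is an illustrative choice, not from the thread.
export KRB5CCNAME="FILE:/tmp/krb5cc_user1_job_$$"
# kinit -k -t ~user1/user1.headless.keytab <principal>  # writes to the private cache
# hdfs dfs -ls /                                        # reads the same cache via KRB5CCNAME
echo "using cache: $KRB5CCNAME"
```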
Created ‎09-21-2017 05:07 PM
This is a tough question to answer since there is no indication of what your script is doing. The cache file, /tmp/krb5cc_603, is owned by the user with uid 603. Is this the user that issued the kinit? (Assuming a kinit was executed at some point before or during the script's execution.)
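To make the uid point concrete: with no KRB5CCNAME override, the MIT Kerberos default cache path is /tmp/krb5cc_&lt;uid&gt;, so /tmp/krb5cc_603 belongs to whichever account has uid 603. A small illustration:

```shell
# With no KRB5CCNAME set, the default MIT Kerberos cache is /tmp/krb5cc_<uid>.
uid=$(id -u)
default_cache="/tmp/krb5cc_${uid}"
echo "default cache for this user: ${default_cache}"
# klist -c "${default_cache}"   # would list tickets in that specific cache
```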
Created ‎09-21-2017 06:22 PM
@Robert Levas I forgot to mention that this is sporadic. The script does a bunch of HDFS operations.
sudo -u user1 bash -c "kinit -R || kinit -k -t ~user1/user1.headless.keytab"
Created ‎09-21-2017 07:02 PM
From https://web.mit.edu/kerberos/krb5-1.12/doc/user/user_commands/kinit.html
-R requests renewal of the ticket-granting ticket. Note that an expired ticket cannot be renewed, even if the ticket is still within its renewable life.
Maybe at some point during the script's execution the ticket expires, and the kinit renewal does not fail the way you expect?
Created ‎09-25-2017 03:17 PM
Is there a way to find out whether another process is destroying the ticket, maybe through some logs?
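On Linux, one way to answer this question is an audit watch on the cache file, which records the PID and executable of any process that writes to or removes it. This is only a sketch: it assumes auditd is installed and the rule is added as root, and the key name "krb5cc-watch" is made up for the example.

```shell
# Hedged sketch: watch the cache file to see which process removes it.
# Assumes auditd is installed; adding rules requires root.
CCACHE=/tmp/krb5cc_603
KEY=krb5cc-watch
if command -v auditctl >/dev/null 2>&1; then
    # -p wa = watch writes and attribute changes, which covers the
    # unlink performed by kdestroy
    auditctl -w "$CCACHE" -p wa -k "$KEY" \
        || echo "auditctl failed (rules require root)" >&2
    # once the cache disappears, this would show the offending process:
    # ausearch -k "$KEY" --interpret
else
    echo "auditd not available; cannot watch $CCACHE"
fi
```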
Created ‎11-04-2019 02:26 AM
How did you find out that some process was destroying the ticket?
I am also facing the same issue.
