Created 05-07-2019 03:27 PM
Hi All,
We came across a requirement to maintain Kerberos tickets for two different realms on a single node at the same time.
We found that Kerberos supports collection cache types as of MIT Kerberos v1.12. We implemented the DIR cache type, with which we are able to obtain and maintain tickets for both realms simultaneously. klist -A successfully lists both tickets.
However, none of the Hadoop clients (hdfs, beeline) are able to find tickets in the DIR cache directory.
Below is the [libdefaults] cache name setting from krb5.conf:
default_ccache_name = DIR:/tmp/tickets
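With that setting in place, the setup described above can be sketched roughly as follows; the realm and principal names are placeholders, not taken from the thread, and the commands require a reachable KDC for each realm:

```shell
# Acquire tickets for two realms into the same DIR collection.
# Each kinit adds a separate tkt* file under /tmp/tickets.
kinit user1@REALM.ONE
kinit user2@REALM.TWO

# List every credential in the collection; both tickets should appear.
klist -A
```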
Along with this, we are also setting KRB5CCNAME and KRB5RCACHEDIR, although that shouldn't matter when we already have the same setting in krb5.conf.
The Hadoop clients throw the following error:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Upon some investigation, we found that the Java Kerberos implementation specifically looks for a FILE: type cache, and Hadoop depends on it.
However, I am interested to know if there is any workaround to force the clients to use collection cache types (DIR/API/KEYRING).
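One workaround consistent with the behavior described above is to keep the DIR collection for ticket management but point each client invocation at a single FILE: cache inside it, since the Java stack does resolve those. A minimal sketch, assuming the per-ticket file names have been read from the `klist -A` output (the `tktXXXXXX` names below are illustrative placeholders, not real paths):

```shell
# The Java Kerberos code used by the Hadoop clients only resolves FILE:
# caches, so select one credential file from the DIR collection per run.
export KRB5CCNAME=FILE:/tmp/tickets/tktXXXXXX   # first realm's ticket file
hdfs dfs -ls /

export KRB5CCNAME=FILE:/tmp/tickets/tktYYYYYY   # second realm's ticket file
beeline -u "$JDBC_URL"
```

KRB5CCNAME set in the environment takes precedence over default_ccache_name in krb5.conf, which is what makes this per-invocation override possible.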
Created 05-07-2019 03:29 PM
This question was previously posted to the Solutions track. Upon further review, the moderators moved it to the Security track Tue May 7 08:28 PDT 2019.
Created 01-10-2020 02:53 PM
I am facing the same issue.
I have a DIR type cache and none of the clients work, neither beeline nor hdfs.
Using a FILE type cache and switching between caches works fine. That is good enough for manual use, but rather awkward if I wanted to automate anything.
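The switching step itself can be scripted. Below is a small sketch, assuming one FILE cache per realm under /tmp/tickets (the helper names and the naming scheme are hypothetical, chosen here for illustration):

```shell
# Hypothetical helper: derive a stable per-realm FILE cache path,
# e.g. EXAMPLE.COM -> /tmp/tickets/krb5cc_example.com
ccache_for_realm() {
  printf '/tmp/tickets/krb5cc_%s\n' \
    "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')"
}

# Export KRB5CCNAME so the next client invocation sees a single
# FILE: cache, which the Java Kerberos code can resolve.
use_realm() {
  KRB5CCNAME="FILE:$(ccache_for_realm "$1")"
  export KRB5CCNAME
}

# Usage (assumes tickets were obtained earlier with kinit -c into
# these files):
#   use_realm REALM.ONE && hdfs dfs -ls /
#   use_realm REALM.TWO && beeline -u "$JDBC_URL"
```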
Created 11-13-2022 01:14 PM
Hello everybody,
Sorry in advance for restarting the conversation...
How can "hdfs dfs" use two Kerberos realms when the cache type is set to DIR: rather than the default FILE:, please?
Thank you very much !