We came across a requirement to maintain Kerberos tickets for two different realms on a single node at the same time.
We found that Kerberos supports collection cache types as of v1.12. We implemented the DIR cache type, with which we are able to generate and maintain tickets for both realms at the same time; klist -A successfully lists both tickets.
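For reference, the tickets were obtained roughly as follows (the principal and realm names here are placeholders, not the actual ones we used):

```
$ export KRB5CCNAME=DIR:/tmp/tickets
$ kinit user1@ONE.EXAMPLE.COM    # stored as a subsidiary cache under /tmp/tickets
$ kinit user2@TWO.EXAMPLE.COM    # second subsidiary cache in the same collection
$ klist -A                       # lists credentials from every cache in the collection
```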
However, none of the Hadoop clients (hdfs, beeline) are able to find tickets in the DIR cache directory.
Below is the [libdefaults] cache name config from krb5.conf,
default_ccache_name = DIR:/tmp/tickets
Along with this, we also set KRB5CCNAME and KRB5RCACHEDIR, although that shouldn't matter when the same setting is already in krb5.conf.
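Concretely, the environment was set along these lines (the cache path is the one from our krb5.conf; the replay-cache directory shown is illustrative):

```
export KRB5CCNAME=DIR:/tmp/tickets
export KRB5RCACHEDIR=/tmp
```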
The Hadoop clients throw the error below:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Upon some investigation, we found that the Java Kerberos implementation specifically looks for a FILE: type cache, and Hadoop depends on that behavior.
However, I would like to know if there is any workaround to force them to use collection cache types (DIR/API/KEYRING).
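One workaround we are considering exploits the fact that, at least with MIT Kerberos, each subsidiary cache inside a DIR collection is an ordinary FILE-format cache stored as a tkt* file in the directory. A small wrapper could pick the subsidiary for the desired realm and export it as a plain FILE: cache before invoking the Hadoop client. A minimal sketch (the select_cache helper and the paths are hypothetical; matching by scanning the cache file for the realm name is a heuristic that relies on realm names being stored as plain text inside the cache):

```shell
#!/bin/sh
# Pick the subsidiary FILE cache for a given realm out of a DIR collection.
# Heuristic: realm names appear as plain text inside FILE-format caches.
select_cache() {
  dir=$1
  realm=$2
  for f in "$dir"/tkt*; do
    [ -f "$f" ] || continue
    if grep -q "$realm" "$f" 2>/dev/null; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}

# Example: run an HDFS command against one realm's ticket only.
# export KRB5CCNAME="FILE:$(select_cache /tmp/tickets ONE.EXAMPLE.COM)"
# hdfs dfs -ls /
```

This keeps the DIR collection as the single source of truth while presenting each Hadoop invocation with the FILE: cache the Java implementation expects.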
This question was previously posted in the Solutions track. Upon further review, the moderators moved it to the Security track on Tue May 7 08:28 PDT 2019.
I am facing the same issue.
I have a DIR type cache, and none of the clients work, neither beeline nor hdfs.
Using a FILE type cache and switching between files works fine. That is good enough for manual use, but rather awkward if I wanted to automate anything.