Hi All,
We came across a requirement to maintain Kerberos tickets for two different realms on a single node at the same time.
We found that Kerberos supports collection cache types as of v1.12. We set up a DIR cache, with which we are able to generate and maintain tickets for both realms at the same time; klist -A successfully lists both tickets.
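For reference, the collection was populated roughly along these lines; the principal names are placeholders and the collection directory is assumed to be the /tmp/tickets path configured further down:

export KRB5CCNAME=DIR:/tmp/tickets
kinit userA@REALM1.EXAMPLE.COM        # creates a subsidiary cache inside the collection
kinit userB@REALM2.EXAMPLE.COM        # adds a second cache for the other realm
klist -A                              # lists both tickets
kswitch -p userA@REALM1.EXAMPLE.COM   # optionally change the primary cache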
However, none of the Hadoop clients (hdfs, beeline) are able to find the tickets in the DIR cache directory.
Below is the [libdefaults] cache-name setting from krb5.conf:
default_ccache_name = DIR:/tmp/tickets
Along with this, we are also setting KRB5CCNAME and KRB5RCACHEDIR, although that shouldn't matter when krb5.conf already has the same setting.
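For completeness, the variables are exported roughly as follows; the KRB5RCACHEDIR value is only an example:

export KRB5CCNAME=DIR:/tmp/tickets
export KRB5RCACHEDIR=/tmp/rcache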
The Hadoop clients throw the error below:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Upon some investigation, we found that the Java Kerberos implementation specifically looks for a FILE: type cache, and Hadoop depends on it.
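If that is the case, the only layout the clients would pick up is a single FILE: cache switched by hand per realm, roughly like this (cache file names are just examples):

export KRB5CCNAME=FILE:/tmp/krb5cc_realm1
kinit userA@REALM1.EXAMPLE.COM
hdfs dfs -ls /    # works, since the JDK can read the FILE: cache

export KRB5CCNAME=FILE:/tmp/krb5cc_realm2
kinit userB@REALM2.EXAMPLE.COM
hdfs dfs -ls /    # works again, but only one realm is active at a time

That defeats the point of keeping both tickets live at once.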
However, I am interested to know whether there is any workaround to force them to use the collection cache types (DIR/API/KEYRING).