
Spark can't connect to HBase using Kerberos in Cluster mode

Contributor

Hi,

I am running a Spark application on a Kerberized HDP platform. The application connects to HBase and reads and writes data perfectly well in local mode on any node in the cluster. However, when I run the application on the cluster with "--master yarn --deploy-mode client (or cluster)", Kerberos authentication fails. I have tried all sorts of things: doing kinit outside the application on each node, and performing Kerberos authentication inside the application as well, but none of it has worked so far. In local mode everything works and nothing has any issue: I do kinit outside and perform no authentication inside the application. In cluster mode, however, nothing works whether I authenticate inside the application or outside it. Here is an extract of the stack trace:

ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)

Below is the code that I used for authenticating inside the application:

Configuration conf = HBaseConfiguration.create();
conf.addResource(new Path(hbaseConfDir, "hbase-site.xml"));
conf.addResource(new Path(hadoopConfDir, "core-site.xml"));
conf.set("hbase.client.keyvalue.maxsize", "0");
conf.set("hbase.rpc.controllerfactory.class", "org.apache.hadoop.hbase.ipc.RpcControllerFactory");
// --- in-application login block (optional when a kinit ticket cache exists) ---
conf.set("hadoop.security.authentication", "kerberos");
conf.set("hbase.security.authentication", "kerberos");
UserGroupInformation.setConfiguration(conf);
String keyTab = "/etc/security/keytabs/somekeytab";
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("name@xyz.com", keyTab);
UserGroupInformation.setLoginUser(ugi);
// --- end of login block ---

connection = ConnectionFactory.createConnection(conf);
logger.debug("HBase connected");

Adding or removing the marked login block in the above code didn't really have any effect, other than that when the block is present, kinit outside of the application is not needed.

Please let me know how I can solve this problem. I have been banging my head against this issue for quite some time.

1 ACCEPTED SOLUTION

Super Guru

You should not rely on an external ticket cache for distributed jobs. The best solution is to ship a keytab with your application or rely on a keytab being deployed on all nodes where your Spark task may be executed.

You likely want to replace:

UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("name@xyz.com", keyTab);
UserGroupInformation.setLoginUser(ugi);

With:

UserGroupInformation.loginUserFromKeytab("name@xyz.com", keyTab);
connection = ConnectionFactory.createConnection(conf);

With your approach above, you would need to do something like the following after obtaining the UserGroupInformation instance:

ugi.doAs(new PrivilegedAction<Void>() {
  public Void run() {
    connection = ConnectionFactory.createConnection(conf);
    ...
    return null;
  }
});
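Putting the two fragments together, a minimal end-to-end sketch of the keytab-login approach could look like the following. The principal and keytab path are the placeholders from this thread, and the configuration file locations are assumed typical HDP paths, not values to copy verbatim:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureHBaseConnect {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Both files must describe the secure cluster (Kerberos enabled);
    // paths below are assumed defaults, adjust for your installation.
    conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));

    // Static login: subsequent Hadoop/HBase calls in this process
    // run as this user, with no external ticket cache required.
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab("name@xyz.com",
        "/etc/security/keytabs/somekeytab");

    // try-with-resources closes the connection when done
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      System.out.println("HBase connected as "
          + UserGroupInformation.getLoginUser());
    }
  }
}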


17 REPLIES

Super Guru

Did you get the same error message?

Contributor

Yes, I got the same Kerberos credential error that I posted above for loginUserFromKeytab().

When I shipped the keytab files, the error changed slightly to: can't get password from the keytab.

Rising Star

In addition to Josh's recommendations, the configuration details in this KB article are also relevant to setting up Spark-to-HBase connectivity in a secure environment.

Super Collaborator

First of all, which Spark version are you using? Apache Spark 2.0 has support for automatically acquiring HBase security tokens for the job and all its executors. Apache Spark 1.6 does not have that feature, but in HDP Spark 1.6 we have backported it, so it can also acquire the HBase tokens for jobs. The tokens are acquired automatically if 1) security is enabled, 2) hbase-site.xml is present on the client classpath, and 3) that hbase-site.xml has Kerberos security configured. HBase tokens for the HBase master specified in that hbase-site.xml are then acquired and used in the job.

In order to obtain the tokens, the Spark client needs to use HBase code, so specific HBase jars need to be present on the client classpath. This is documented on the SHC GitHub page; search for "secure" on that page.

To access HBase inside the Spark jobs, the job obviously needs the HBase jars to be present for the driver and/or executors. That would be part of your existing job submission for non-secure clusters, which I assume already works.

If this job is going to be long-running and run beyond the token expiry time (typically 7 days), then you need to submit the Spark job with the --keytab and --principal options so that Spark can use that keytab to re-acquire tokens before the current ones expire. An example submission is sketched below.
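For illustration, a submission along these lines might be used. The application jar, main class, and principal are placeholders, and the --jars paths assume a typical HDP client layout; substitute the jars your HBase version actually requires:

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal name@xyz.com \
  --keytab /etc/security/keytabs/somekeytab \
  --files /etc/hbase/conf/hbase-site.xml \
  --jars /usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar \
  --class com.example.MySparkHBaseJob \
  my-spark-hbase-job.jar

Shipping hbase-site.xml with --files makes it available to the driver and executors, while --principal/--keytab lets Spark renew tokens for long-running jobs.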

New Contributor

Hi Bikas, if I want to use an HBase Connection directly to access HBase, would Apache Spark 2.2 refresh the token for me? If yes, how do I get the connection object? Just call ConnectionFactory.createConnection(conf)?

Rising Star

Hi Josh, should it also work when we use the function saveAsNewAPIHadoopDataset over an RDD of JavaPairRDD<ImmutableBytesWritable, Put>? I tried with and without the doAs and I was not able to make it work. I don't get any errors; just nothing happens. Any idea? Thanks, Michel

Explorer

I have tried both approaches, and end up getting the same error message:

Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user

The same keytab file works just fine when attempting an interactive login to HBase. Also, the same code works just fine when I submit the job with "local[*]" as master instead of yarn.

Any pointers?

Contributor

In hbase-site.xml, hbase.coprocessor.region.classes should also contain:

org.apache.hadoop.hbase.security.token.TokenProvider
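That is, the hbase-site.xml entry would look something like the following; append TokenProvider to any coprocessor classes already present in the comma-separated value rather than replacing them:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>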