
Error when trying HBase CopyTable across two Kerberized Platforms


Contributor

I am trying to use the CopyTable utility to copy a table from one cluster to another (both clusters run the same version of HBase). I generate a valid Kerberos ticket before running CopyTable, but I see the error below.

FYI: distcp works fine between these two clusters without any issues. Note also that the two clusters are in different realms.

2017-03-07 15:24:14,821 ERROR [main-SendThread(pxnhd237.hadoop.local:2181)] zookeeper.ClientCnxn: SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.

2017-03-07 15:24:41,613 WARN [main] zookeeper.ZKUtil: TokenUtil-getAuthToken-0x25a77f32daa1881, quorum=pxnhd137.hadoop.local:2181,pxnhd237.hadoop.local:2181, baseZNode=/hbase-secure Unable to set watcher on znode (/hbase-secure/hbaseid)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
    at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)

2017-03-07 15:24:41,614 ERROR [main] zookeeper.ZooKeeperWatcher: TokenUtil-getAuthToken-0x25a77f32daa1881, quorum=pxnhd137.hadoop.local:2181,pxnhd237.hadoop.local:2181, baseZNode=/hbase-secure Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
    at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)

2017-03-07 15:24:41,616 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15a98edf7d49fa2
2017-03-07 15:24:41,616 DEBUG [main] ipc.AbstractRpcClient: Stopping rpc client
Exception in thread "main" java.io.IOException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
    at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:369)
    at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
    ... 9 more

1 ACCEPTED SOLUTION


Re: Error when trying HBase CopyTable across two Kerberized Platforms

Expert Contributor
@Saikiran Parepally

Along with Josh's suggestion, please also verify your cross-realm trust setup. Refer to the documentation below:

https://community.hortonworks.com/articles/18686/kerberos-cross-realm-trust-for-distcp.html

You will need the cross-realm trust configured correctly to run CopyTable across two secure clusters.
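As a sketch of what the cross-realm pieces look like, the krb5.conf on each cluster might contain entries like the following. All realm names and KDC hostnames here are hypothetical placeholders, not values from this thread; the essential parts are that every node knows both realms and that matching krbtgt cross-realm principals (e.g. krbtgt/DEV.EXAMPLE.COM@PROD.EXAMPLE.COM and krbtgt/PROD.EXAMPLE.COM@DEV.EXAMPLE.COM) exist in both KDCs with identical passwords:

```ini
[realms]
  DEV.EXAMPLE.COM = {
    kdc = kdc.dev.example.com
    admin_server = kdc.dev.example.com
  }
  PROD.EXAMPLE.COM = {
    kdc = kdc.prod.example.com
    admin_server = kdc.prod.example.com
  }

[capaths]
  DEV.EXAMPLE.COM = {
    PROD.EXAMPLE.COM = .
  }
  PROD.EXAMPLE.COM = {
    DEV.EXAMPLE.COM = .
  }
```

The "." in [capaths] means the trust is direct, with no intermediate realm in the path.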

7 REPLIES

Re: Error when trying HBase CopyTable across two Kerberized Platforms

ZooKeeper is telling you what went wrong:

An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.

SASL authentication (with Kerberos) failed, which caused the ZooKeeper client to fall back to an unauthenticated state. When the HBase client then tried to read the ACL'ed znodes in ZooKeeper, it failed because you were not authenticated.

The error implies that the code found your Kerberos ticket, but one or more of the ZooKeeper servers you specified could not be found in the KDC when it attempted Kerberos authentication.

Make sure that you are specifying the correct, fully-qualified domain name for each ZooKeeper server. This must exactly match the "instance" component of the Kerberos principal that your ZooKeeper servers are using (e.g. "host.domain.com" in the principal "zookeeper/host.domain.com@DOMAIN.COM").
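To make the hostname-to-principal matching concrete, here is a small sketch of what the client effectively does: it canonicalizes each quorum hostname (typically to its FQDN) and asks the KDC for a service ticket for zookeeper/<fqdn>@<realm>. If that exact principal does not exist in the KDC, you get UNKNOWN_SERVER. The hostnames below come from the log in this thread, but the realm name is an assumption for illustration:

```python
import socket

def expected_zk_principal(quorum_host: str, realm: str) -> str:
    """Build the server principal the SASL/GSSAPI client will request.

    The client resolves the quorum hostname to a canonical name and
    requests zookeeper/<canonical-name>@<realm> from the KDC. A mismatch
    between this name and the principal actually stored in the KDC
    produces "Server not found in Kerberos database (UNKNOWN_SERVER)".
    """
    # Roughly what the Java resolver does; if the name cannot be
    # resolved, getfqdn() returns it unchanged.
    fqdn = socket.getfqdn(quorum_host)
    return f"zookeeper/{fqdn}@{realm}"

# Quorum hosts from the log; HADOOP.LOCAL is a hypothetical realm.
for host in ["pxnhd137.hadoop.local", "pxnhd237.hadoop.local"]:
    print(expected_zk_principal(host, "HADOOP.LOCAL"))
```

Compare the printed principals against what the remote cluster's KDC actually contains; any difference (short name vs. FQDN, wrong domain suffix from a bad /etc/hosts entry) reproduces exactly this failure.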


Re: Error when trying HBase CopyTable across two Kerberized Platforms

Contributor

Hi @Sumesh, I followed the same documentation to set up the cross-realm trust. I think the issue is with configuring domain_realm. Could you please let me know how that needs to be configured?

Re: Error when trying HBase CopyTable across two Kerberized Platforms

Expert Contributor

@Saikiran Parepally

It is best to follow this doc to review your settings:

http://crazyadmins.com/setup-cross-realm-trust-two-mit-kdc/
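Since the question above was specifically about domain_realm: that krb5.conf section maps DNS names to Kerberos realms, and with two clusters in different realms the mapping for both must be present on every node involved in the copy. A hypothetical sketch (domain names and realms are placeholders, not the actual values from this thread):

```ini
[domain_realm]
  .dev.example.com = DEV.EXAMPLE.COM
  dev.example.com = DEV.EXAMPLE.COM
  .prod.example.com = PROD.EXAMPLE.COM
  prod.example.com = PROD.EXAMPLE.COM
```

The leading-dot entries match every host under that domain; the entries without a dot match the bare domain name itself.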

Re: Error when trying HBase CopyTable across two Kerberized Platforms

Expert Contributor

@Saikiran Parepally

Did that fix the issue here?

Re: Error when trying HBase CopyTable across two Kerberized Platforms

Contributor

@Sumesh @Josh Elser Thanks for your responses. Distcp from dev to prod worked without any issues, but when I tried HBase CopyTable from prod to dev, I found that dev was not correctly configured for cross-realm trust. After fixing dev to accept prod tickets, I am able to copy the data successfully.


Re: Error when trying HBase CopyTable across two Kerberized Platforms

Expert Contributor

@Saikiran Parepally

Please accept the answer if it helped resolve the issue.
