Member since: 01-20-2017
Posts: 39
Kudos Received: 6
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2936 | 03-09-2017 07:42 PM |
03-07-2017
11:33 PM
1 Kudo
I am trying to use the CopyTable utility to copy a table from one cluster to another (both run the same version of HBase). I have a valid Kerberos ticket generated before running the CopyTable utility, but I am seeing the error below. FYI, DistCp works fine between these two clusters without any issues. Also, the two clusters are in different realms.

2017-03-07 15:24:14,821 ERROR [main-SendThread(pxnhd237.hadoop.local:2181)] zookeeper.ClientCnxn: SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Zookeeper Client will go to AUTH_FAILED state.
2017-03-07 15:24:41,613 WARN [main] zookeeper.ZKUtil: TokenUtil-getAuthToken-0x25a77f32daa1881, quorum=pxnhd137.hadoop.local:2181,pxnhd237.hadoop.local:2181, baseZNode=/hbase-secure Unable to set watcher on znode (/hbase-secure/hbaseid)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)
2017-03-07 15:24:41,614 ERROR [main] zookeeper.ZooKeeperWatcher: TokenUtil-getAuthToken-0x25a77f32daa1881, quorum=pxnhd137.hadoop.local:2181,pxnhd237.hadoop.local:2181, baseZNode=/hbase-secure Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)
2017-03-07 15:24:41,616 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15a98edf7d49fa2
2017-03-07 15:24:41,616 DEBUG [main] ipc.AbstractRpcClient: Stopping rpc client
Exception in thread "main" java.io.IOException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:369)
at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:327)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:451)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:673)
at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableReducerJob(TableMapReduceUtil.java:606)
at org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob(CopyTable.java:168)
at org.apache.hadoop.hbase.mapreduce.CopyTable.run(CopyTable.java:348)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hbase.mapreduce.CopyTable.main(CopyTable.java:341)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase-secure/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:417)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.security.token.TokenUtil.getAuthToken(TokenUtil.java:363)
... 9 more
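The UNKNOWN_SERVER error above typically means the client could not map the remote ZooKeeper quorum host to a principal in a realm it trusts, which is common when the two clusters live in different realms. A rough sketch of the cross-realm pieces in krb5.conf follows; the realm and domain names here are placeholders, not taken from this thread:

```ini
[realms]
  SOURCE.REALM = { kdc = kdc.source.example }
  DEST.REALM   = { kdc = kdc.dest.example }

[domain_realm]
  ; Without a mapping for the remote ZooKeeper hosts, the client guesses
  ; the wrong realm for zookeeper/<host> and the KDC answers UNKNOWN_SERVER.
  .source.example = SOURCE.REALM
  .dest.example   = DEST.REALM

[capaths]
  ; "." means a direct trust path (krbtgt/DEST.REALM@SOURCE.REALM exists).
  SOURCE.REALM = { DEST.REALM = . }
```

It may also be worth checking that forward and reverse DNS for the remote quorum hosts resolve consistently, since the log itself suggests a hostname-resolution cause.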
Labels:
- Apache HBase
03-07-2017
06:59 PM
Hi @Artem Ervits ... I have opened a new thread in HCC. Here is the link: https://community.hortonworks.com/questions/87491/ambari-server-setup-v2410-error-cannot-import-name.html
03-07-2017
06:58 PM
I am facing an issue while setting up Ambari server v2.4.1.0 with Python 2.6.6. I followed the steps listed in the thread below, but that didn't help. If I install a lower version of Ambari server, v2.2.2, it works fine, but any version 2.4.* fails with the same error. https://community.hortonworks.com/questions/67196/ambari-server-setup-error-cannot-import-name-parse.html
Labels:
- Apache Ambari
03-07-2017
06:33 PM
I also tried deploying Ambari version 2.4.2.0, and in a different environment as well, but I am still seeing the same issue.
03-07-2017
06:28 PM
Hi @Artem Ervits, I am facing a similar issue while setting up Ambari server v2.4.1.0 with Python 2.6.6. I tried uninstalling and made sure /usr/lib/python2.6/site-packages/ambari_server was cleaned up, but the issue still exists. Can you please let me know what else can be done to resolve it?

Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 33, in <module>
from ambari_server.dbConfiguration import DATABASE_NAMES, LINUX_DBMS_KEYS_LIST
File "/usr/lib/python2.6/site-packages/ambari_server/dbConfiguration.py", line 28, in <module>
from ambari_server.serverConfiguration import decrypt_password_for_alias, get_ambari_properties, get_is_secure, \
File "/usr/lib/python2.6/site-packages/ambari_server/serverConfiguration.py", line 36, in <module>
from ambari_commons.os_utils import run_os_command, search_file, set_file_permissions, parse_log4j_file
ImportError: cannot import name parse_log4j_file
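For what it's worth, `ImportError: cannot import name` usually means the module actually found on sys.path lacks the requested name — e.g. a stale `ambari_commons` left behind by an older install shadowing the new one. A minimal, hypothetical reproduction of that mechanism (the module name `ambari_commons_demo` is a made-up stand-in, not the real package):

```python
import sys
import types

# Simulate a stale module from an older install: it has the old API
# (run_os_command) but predates parse_log4j_file.
stale = types.ModuleType("ambari_commons_demo")
stale.run_os_command = lambda cmd: (0, "", "")
sys.modules["ambari_commons_demo"] = stale

try:
    # "from X import name" imports X first, then looks up the attribute;
    # a missing attribute surfaces as "ImportError: cannot import name".
    from ambari_commons_demo import parse_log4j_file  # noqa: F401
    outcome = "imported"
except ImportError as exc:
    outcome = str(exc)

print(outcome)
```

This is why cleaning only `ambari_server` may not be enough: a leftover copy of `ambari_commons` anywhere earlier on sys.path would still be imported and still lack the new name.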
01-18-2017
04:52 PM
@Robert Levas @Kuldeep Kulkarni @Vipin Rathor ... Thanks a lot for your responses. Initially our AD team was hesitant to create principals for the load balancer, and that's why I was looking at the Ambari scripts to create them. Now they are convinced and have created a principal for the load balancer in AD. I followed the ktutil steps mentioned by @Vipin Rathor to merge the keytabs, as suggested by @Robert Levas. This solved the issue, and I am now able to sync policies successfully.
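For reference, the keytab merge with MIT Kerberos ktutil can be sketched roughly as the session below; the keytab paths are assumptions, not taken from this thread:

```shell
ktutil
ktutil:  rkt /etc/security/keytabs/spnego.service.keytab   # read the existing HTTP/host entries
ktutil:  rkt /etc/security/keytabs/loadbalancer.keytab     # read the load balancer's HTTP entry
ktutil:  wkt /etc/security/keytabs/spnego.service.keytab   # write all entries to the merged keytab
ktutil:  quit
```

The merged keytab then needs to be on every Ranger Admin host; `klist -kt <keytab>` is a quick way to verify that all expected principals are present.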
01-18-2017
01:58 AM
I am trying to enable HA for Ranger Admin, and for that I need to add the HTTP principals of all the Ranger Admin hosts, plus the load balancer principal, to the same SPNEGO keytab file. I need instructions on creating the AD user (a pointer to the script Ambari uses to create new principals and keytab files) and on adding the principals to a single keytab file.
Labels:
- Apache Ambari
- Apache Ranger
12-01-2016
09:16 PM
Thanks @Binu Mathew
12-01-2016
08:08 PM
1 Kudo
We are currently using the DefaultResourceCalculator and are planning to change to the DominantResourceCalculator on HDP 2.4.0.0. Are there any challenges or known issues we need to be aware of before making this change, and is it fully supported in the HDP version we are using? If so, what are the recommended values/formulas for the parameters below: "Percentage of physical CPU allocated for all containers on a node" and "Number of virtual cores"?
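For context, the switch itself is a capacity-scheduler.xml change, and the two Ambari parameters named above map to standard YARN properties. The values below are placeholders to tune per node hardware, not recommendations:

```xml
<!-- capacity-scheduler.xml: make the scheduler consider CPU as well as memory -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>

<!-- yarn-site.xml: example values only -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value> <!-- commonly set to the number of physical cores on the node -->
</property>
<property>
  <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
  <value>80</value> <!-- percent of physical CPU that containers may use -->
</property>
```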
Labels:
- Apache YARN
11-11-2016
05:49 PM
@Kuldeep Kulkarni Thanks for your inputs.