Member since: 10-04-2016
Posts: 24
Kudos Received: 1
Solutions: 0
12-29-2018
12:16 PM
@Jay Kumar SenSharma I'm doing it by downloading HBase 2.1.1 and dropping it in place of the existing hbase-2.0.0 at the /usr/hdp/3.0.1.0-187/ location. Is that a workable approach or not? The masters now start, but the regionservers fail with the following error:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
1 [regionserver/ubuntu20:60020] ERROR org.apache.hadoop.hbase.regionserver.HRegionServer - ***** ABORTING region server ubuntu20.mcloud.com,60020,1546085700471: Unhandled: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected *****
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:768)
at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$400(FanOutOneBlockAsyncDFSOutputHelper.java:118)
at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$16.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:848)
at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$16.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:843)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:856)
at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51)
at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:167)
at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:165)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:756)
at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:486)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:73)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:48)
at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:276)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2100)
at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1311)
at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1193)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1013)
at java.lang.Thread.run(Thread.java:745)
4 [regionserver/ubuntu20:60020] ERROR org.apache.hadoop.hbase.regionserver.HRegionServer - RegionServer abort: loaded coprocessors are: []
101 [main] ERROR org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine - Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:67)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3021)
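A hedged diagnostic sketch, not from the original thread: an IncompatibleClassChangeError on HdfsFileStatus usually means the dropped-in HBase build was compiled against a different Hadoop major version than the one the HDP 3.0.1 cluster runs. The paths below assume the HDP layout shown in the log.

  # Compare what Hadoop the replaced HBase build sees vs. the cluster's Hadoop
  ls /usr/hdp/3.0.1.0-187/hbase/lib/ | grep -i hadoop
  hadoop version
  hbase version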
12-29-2018
11:47 AM
@Jay Kumar SenSharma Thank you for your reply. I tried all of the above, but I'm still getting the same error. Do you have any other suggestion or solution for this?
12-29-2018
11:07 AM
Getting the below error while starting the HBase service:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-5.0.0.3.0.1.0-187-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-5.0.0.3.0.1.0-187-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-5.0.0.3.0.1.0-187-pig.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-5.0.0.3.0.1.0-187-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/phoenix-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Exception in thread "main" java.lang.NoSuchMethodError: com.ctc.wstx.io.StreamBootstrapper.getInstance(Ljava/lang/String;Lcom/ctc/wstx/io/SystemId;Ljava/io/InputStream;)Lcom/ctc/wstx/io/StreamBootstrapper;
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2918)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2901)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2953)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2926)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2806)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1254)
at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1660)
at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:66)
at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:80)
at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:94)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3126)
Please help me with this. Thanks in advance.
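A hedged diagnostic sketch (not from the original thread): a NoSuchMethodError on com.ctc.wstx.io.StreamBootstrapper typically points to an old Woodstox jar shadowing the one Hadoop 3's Configuration parser expects. The paths below assume the HDP layout shown in the log.

  # Locate duplicate or stale Woodstox (wstx) jars on the HBase and Hadoop classpaths
  find /usr/hdp/3.0.1.0-187/hbase/lib /usr/hdp/3.0.1.0-187/hadoop/lib \
       -name 'woodstox*.jar' -o -name 'wstx*.jar'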
04-13-2018
11:10 AM
Hello, I'd like to know whether there is a way to move an entire database from one cluster to another. For example, if I have two clusters, one in the US and one in India, how can I keep those databases in sync so that I have a backup of the same database?
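One possible approach, offered as a hedged sketch rather than an answer from this thread: HBase snapshot export can copy a table to another cluster's HDFS. Table, snapshot, destination NameNode, and HBase root dir below are placeholders.

  # Take a snapshot of the table on the source cluster
  hbase shell <<'EOF'
  snapshot 'my_table', 'my_table_snap'
  EOF

  # Ship the snapshot to the destination cluster's HBase root dir (URI assumed)
  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot my_table_snap \
    -copy-to hdfs://dest-nn.example.com:8020/apps/hbase/data \
    -mappers 4

For continuous sync rather than point-in-time copies, HBase's built-in replication (add_peer in the shell) is the usual alternative.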
04-13-2018
11:06 AM
@Venkata Sudheer Kumar M You can use the --files parameter while deploying applications on YARN, like this:
spark-submit --class com.virtuslab.sparksql.MainClass \
  --master yarn --deploy-mode cluster \
  --files /etc/spark2/conf/hive-site.xml,/etc/spark2/conf/hbase-site.xml \
  /tmp/spark-hive-test/spark_sql_under_the_hood-spark2.2.0.jar
It worked in my case.
03-08-2018
10:22 AM
I've removed the "hbase.regionserver.kerberos.principal" line from the code, but I'm still getting the same error.
03-08-2018
10:22 AM
Output of kadmin.local:
kadmin.local: listprincs
HTTP/ambari-devup.mstorm.com@MSTORM.COM
HTTP/dn1-devup.mstorm.com@MSTORM.COM
HTTP/dn2-devup.mstorm.com@MSTORM.COM
HTTP/dn3-devup.mstorm.com@MSTORM.COM
HTTP/dn4-devup.mstorm.com@MSTORM.COM
HTTP/hbase1-devup.mstorm.com@MSTORM.COM
HTTP/hbase2-devup.mstorm.com@MSTORM.COM
HTTP/snn-devup.mstorm.com@MSTORM.COM
HTTP/zk1-devup.mstorm.com@MSTORM.COM
HTTP/zk2-devup.mstorm.com@MSTORM.COM
HTTP/zk3-devup.mstorm.com@MSTORM.COM
K/M@MSTORM.COM
admin/admin@MSTORM.COM
ambari-qa-ambari_devup@MSTORM.COM
ambari-server-ambari_devup@MSTORM.COM
ambari-server@MSTORM.COM
dn/dn1-devup.mstorm.com@MSTORM.COM
dn/dn2-devup.mstorm.com@MSTORM.COM
dn/dn3-devup.mstorm.com@MSTORM.COM
dn/dn4-devup.mstorm.com@MSTORM.COM
hbase-ambari_devup@MSTORM.COM
hbase/dn1-devup.mstorm.com@MSTORM.COM
hbase/dn2-devup.mstorm.com@MSTORM.COM
hbase/dn3-devup.mstorm.com@MSTORM.COM
hbase/dn4-devup.mstorm.com@MSTORM.COM
hbase/hbase1-devup.mstorm.com@MSTORM.COM
hbase/hbase2-devup.mstorm.com@MSTORM.COM
hdfs-ambari_devup@MSTORM.COM
hdfs/ambari-devup.mstorm.com@MSTORM.COM
infra-solr/hbase2-devup.mstorm.com@MSTORM.COM
jhs/hbase1-devup.mstorm.com@MSTORM.COM
kadmin/admin@MSTORM.COM
kadmin/ambari-devup.mstorm.com@MSTORM.COM
kadmin/changepw@MSTORM.COM
kafka/zk1-devup.mstorm.com@MSTORM.COM
kafka/zk2-devup.mstorm.com@MSTORM.COM
kafka/zk3-devup.mstorm.com@MSTORM.COM
kiprop/ambari-devup.mstorm.com@MSTORM.COM
krbtgt/MSTORM.COM@MSTORM.COM
livy/ambari-devup.mstorm.com@MSTORM.COM
nfs/dn4-devup.mstorm.com@MSTORM.COM
nm/dn1-devup.mstorm.com@MSTORM.COM
nm/dn2-devup.mstorm.com@MSTORM.COM
nm/dn3-devup.mstorm.com@MSTORM.COM
nm/dn4-devup.mstorm.com@MSTORM.COM
nn/ambari-devup.mstorm.com@MSTORM.COM
nn/hbase2-devup.mstorm.com@MSTORM.COM
rm/ambari-devup.mstorm.com@MSTORM.COM
spark-ambari_devup@MSTORM.COM
yarn/snn-devup.mstorm.com@MSTORM.COM
zeppelin-ambari_devup@MSTORM.COM
zookeeper/zk1-devup.mstorm.com@MSTORM.COM
zookeeper/zk2-devup.mstorm.com@MSTORM.COM
zookeeper/zk3-devup.mstorm.com@MSTORM.COM
03-08-2018
08:56 AM
Where is the FIELD.HORTONWORKS.COM below coming from? I am trying the Java client example to connect to an HBase Kerberos cluster, and that realm is mentioned in the example. Also, I've already installed JCE on each node.
03-08-2018
08:52 AM
@Geoffrey Shelton Okot Now I'm getting the following error:
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Couldn't setup connection for hbase/hbase1-devup.mstorm.com@MSTORM.COM to hbase/hbase1-devup.mstorm.com@MSTORM.COM
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:696)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:668)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException: GSS initiate failed
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:153)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:189)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:642)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:166)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:769)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:766)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:766)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Results :
Tests in error:
HBaseClientTest.testingggAuth:51 » RetriesExhausted Failed after attempts=36, ...
Please let me know what the issue might be. Thanks in advance.
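A hedged first check, not from the thread (the keytab path is an assumption based on HDP's usual layout): before debugging the RPC layer, confirm the keytab can actually obtain a ticket for the principal the test uses.

  # kinit directly from the keytab, then inspect the ticket cache
  kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hbase1-devup.mstorm.com@MSTORM.COM
  klist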
03-07-2018
01:17 PM
KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.zookeeper.client.ZooKeeperSaslClient$2.run(ZooKeeperSaslClient.java:366)
at org.apache.zookeeper.client.ZooKeeperSaslClient$2.run(ZooKeeperSaslClient.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:362)
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:348)
at org.apache.zookeeper.client.ZooKeeperSaslClient.sendSaslPacket(ZooKeeperSaslClient.java:420)
at org.apache.zookeeper.client.ZooKeeperSaslClient.initialize(ZooKeeperSaslClient.java:458)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1013)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
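A hedged sketch of the usual checks for LOOKING_UP_SERVER, not from the thread (commands assume you can run them on the KDC host): the client builds the service principal from the server's hostname, so DNS and principal mismatches are the first suspects.

  # Does the host's FQDN match a principal registered in the KDC?
  hostname -f
  kadmin.local -q 'listprincs' | grep -i zookeeper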
- Tags:
- Hadoop Core
- hdp-2.6.0
03-07-2018
10:26 AM
@Geoffrey Shelton Okot Thanks for the reply. I've done the same configuration again and I'm still getting the same error. I've enabled debug logs and found the error below:
>>>Pre-Authentication Data: PA-DATA type = 136
>>>Pre-Authentication Data: PA-DATA type = 19
PA-ETYPE-INFO2 etype = 18, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
PA-ETYPE-INFO2 etype = 23, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
PA-ETYPE-INFO2 etype = 16, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
>>>Pre-Authentication Data: PA-DATA type = 2 PA-ENC-TIMESTAMP
>>>Pre-Authentication Data: PA-DATA type = 133
>>> KdcAccessibility: remove ambari-devup.mstorm.com
>>> KDCRep: init() encoding tag is 126 req type is 11
>>>KRBError:
cTime is Sat Aug 28 17:12:22 UTC 2032 1977325942000
sTime is Wed Mar 07 10:15:19 UTC 2018 1520417719000
suSec is 507841
error code is 25
error Message is Additional pre-authentication required
cname is hbase/hbase1-devup.mstorm.com@MSTORM.COM
sname is krbtgt/MSTORM.COM@MSTORM.COM
eData provided.
msgType is 30
>>>Pre-Authentication Data: PA-DATA type = 136
>>>Pre-Authentication Data: PA-DATA type = 19
PA-ETYPE-INFO2 etype = 18, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
PA-ETYPE-INFO2 etype = 23, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
PA-ETYPE-INFO2 etype = 16, salt = MSTORM.COMhbasehbase1-devup.mstorm.com, s2kparams = null
>>>Pre-Authentication Data: PA-DATA type = 2 PA-ENC-TIMESTAMP
>>>Pre-Authentication Data: PA-DATA type = 133
KRBError received: NEEDED_PREAUTH
KrbAsReqBuilder: PREAUTH FAILED/REQ, re-send AS-REQ
Following that error, I'm using a sample test to check whether a table is present in HBase. This is the example I am using:

package com.hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.Test;

public class HBaseClientTest {

    @Test
    public void testingggAuth() throws Exception {
        try {
            Logger.getRootLogger().setLevel(Level.DEBUG);
            Configuration configuration = HBaseConfiguration.create();
            // Zookeeper quorum
            configuration.set("hbase.zookeeper.quorum", "node1,node2,node3");
            configuration.set("hbase.master", "hbase_node:60000");
            configuration.set("hbase.zookeeper.property.clientPort", "2181");
            configuration.set("hadoop.security.authentication", "kerberos");
            configuration.set("hbase.security.authentication", "kerberos");
            configuration.set("zookeeper.znode.parent", "/hbase");
            //configuration.set("hbase.cluster.distributed", "true"); // check this setting on HBase side
            //configuration.set("hbase.rpc.protection", "authentication");
            // what principal the master/region servers use
            //configuration.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM");
            //configuration.set("hbase.regionserver.keytab.file", "src/hbase.service.keytab");
            // this is needed even if you connect over rpc/zookeeper
            //configuration.set("hbase.master.kerberos.principal", "_host@REALM");
            //configuration.set("hbase.master.keytab.file", "/home/developers/Music/hbase.service.keytab");
            System.setProperty("java.security.auth.login.config", "/path/to/hbase_master_jaas.conf");
            System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
            // Enable/disable krb5 debugging
            System.setProperty("sun.security.krb5.debug", "true");
            String principal = System.getProperty("kerberosPrincipal", "hbase/hbase1-devup.mstorm.com@MSTORM.COM");
            String keytabLocation = System.getProperty("kerberosKeytab", "/path/to/hbase.service.keytab");
            // kinit with principal and keytab
            UserGroupInformation.setConfiguration(configuration);
            UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
            //UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase-ambari_devup@MSTORM.COM", "/path/to/hbase.headless.keytab");
            //UserGroupInformation.setLoginUser(userGroupInformation);
            System.out.println("Logged in from keytab");
            Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create(configuration));
            System.out.println("Connection created");
            System.out.println("Table available: " + connection.getAdmin().isTableAvailable(TableName.valueOf("table_name")));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Please help me with this, as I am stuck authenticating a remote connection to HBase in a Kerberos-enabled cluster. Thank you in advance.
03-07-2018
08:27 AM
Hello, while connecting to HBase in a Kerberos cluster, I'm getting the error below:
Caused by: java.io.IOException: Couldn't setup connection for hbase/hbase1-devup.mstorm.com@MSTORM.COM to null
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:696)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:668)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:777)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.<init>(HBaseSaslRpcClient.java:117)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:639)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:166)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:769)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:766)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:766)
... 17 more
My conf files are:
/etc/krb5.conf
[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  EXAMPLE.COM = {
    admin_server = ambari-server.example.com
    kdc = ambari-server.example.com
  }
Please help me with this.
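A hedged sanity check, not from the thread (keytab path is a placeholder): the error says the client never learned the server's Kerberos principal, so make sure the client's hbase-site.xml sets hbase.master.kerberos.principal and hbase.regionserver.kerberos.principal, and that the realm in krb5.conf matches the realm in your keytab.

  # Which realm does the client's krb5.conf default to?
  grep default_realm /etc/krb5.conf
  # Which principals (and realm) does the keytab actually hold?
  klist -kt /path/to/hbase.service.keytab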
12-18-2017
12:23 PM
Hello, I've just upgraded my stack to the latest version and am starting the Grafana service. It gets stuck and gives the following errors:
2017-12-18 12:20:26,924 - Connection to Grafana failed. Next retry in 20 seconds.
2017-12-18 12:20:26,925 - Connecting (GET) to ambarihdp-produp:3000/api/user
2017-12-18 12:20:46,940 - Connection to Grafana failed. Next retry in 20 seconds.
2017-12-18 12:20:46,940 - Connecting (GET) to ambarihdp-produp:3000/api/user
2017-12-18 12:21:06,958 - Connection to Grafana failed. Next retry in 20 seconds.
2017-12-18 12:21:06,959 - Connecting (GET) to ambarihdp-produp:3000/api/user
2017-12-18 12:21:26,979 - Connection to Grafana failed. Next retry in 20 seconds.
2017-12-18 12:21:26,980 - Connecting (GET) to ambarihdp-produp:3000/api/user
2017-12-18 12:21:47,000 - Connection to Grafana failed. Next retry in 20 seconds.
2017-12-18 12:21:47,000 - Connecting (GET) to ambarihdp-produp:3000/api/user
Please let me know what the issue might be.
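A hedged first check, not from the thread: Ambari is polling Grafana's HTTP port, so verify something is actually listening on port 3000 on the Grafana host and check its log for startup failures. The hostname comes from the log above; the log path is an assumption based on the usual Ambari Metrics layout.

  # Is Grafana up and listening?
  curl -v http://ambarihdp-produp:3000/
  netstat -tlnp | grep 3000
  # Ambari Metrics Grafana log (assumed default location)
  tail -n 50 /var/log/ambari-metrics-grafana/grafana.log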
12-05-2017
04:57 AM
Hello all, when we run operations against the database (HBase), the regionservers take a lot of RAM and don't release it until we restart them. Is there a parameter or property to release that RAM, or to restrict how much memory the regionservers can take? Please respond if you have a solution.
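A hedged sketch, not an answer from this thread (sizes are examples, not recommendations): the RegionServer JVM rarely returns heap to the OS, so the usual control is to bound the heap in hbase-env.sh rather than expect memory to be released.

  # hbase-env.sh: pin the RegionServer heap (example sizes)
  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms8g -Xmx8g"

Within that heap, hfile.block.cache.size and hbase.regionserver.global.memstore.size in hbase-site.xml control the split between read cache and write buffers.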
09-29-2017
03:55 AM
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:748)
Versions: HDP 2.3, Hadoop 2.7.1, Spark 2.2.0
Please let me know what kind of issue we're facing on the server side and what changes we need to make.
- Tags:
- Hadoop Core
- Spark
- YARN
08-29-2017
08:23 AM
1 Kudo
Thanks for the information, but I wanted to know how YARN allocates containers (executors) on the NodeManagers (workers) after apps are deployed, i.e., which property or algorithm drives container placement. Can you please tell me where we can change this property or algorithm for allocating containers on the NodeManagers?
08-29-2017
06:42 AM
When I deploy Spark apps on YARN, how does YARN distribute containers across the NodeManagers? If I've deployed 5 apps with 2 executors and 1 driver each, how does YARN handle the distribution, and is there an algorithm for it? In my case the containers are not distributed evenly: I have 6 NodeManagers running and I want my executors to land on all of them for smooth operation. Can anyone tell me how to achieve this?
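Not an answer from this thread, just a hedged sketch: YARN's Capacity/Fair Scheduler places containers wherever resources are free (subject to locality), so there is no single "spread evenly" switch. One practical lever is to request more, smaller executors so the scheduler has to touch more NodeManagers; the class name, jar path, and sizes below are placeholders.

  # Request many small executors instead of a few large ones (example values)
  spark-submit --master yarn --deploy-mode cluster \
    --num-executors 12 --executor-cores 2 --executor-memory 2g \
    --class com.example.Main /path/to/app.jar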
06-03-2017
06:59 AM
@Artem Ervits Can you please tell us the exact date? Is it the end of 2017 or before that? Just need to confirm. Thanks in advance.
03-14-2017
05:13 AM
We're looking to run Spark against the regionservers, and hbase-2.0.0-SNAPSHOT has this feature. That's why we're eager to know when you're going to release that version of HBase in HDP.
03-14-2017
05:11 AM
Hello, can anyone tell me when the HDP repo for Ubuntu 16 will be released, and also when HBase 2.0.0 will land in HDP?
03-14-2017
05:09 AM
Hello, can anyone tell me when you're going to release the HDP repo for Ubuntu 16 Server, and also HBase 2.0.0 in the HDP stack?
02-13-2017
02:05 PM
Can you please tell me the date when HBase 2.0.0 will be released in the HDP stack?
10-19-2016
10:08 AM
Can anyone please tell me when you're going to release HBase 2.0.0 in the HDP stack?
10-04-2016
05:54 AM
When Hortonworks releases a stable version of the hbase-spark library in HBase, please reply on this thread.