
HBase Socket Timeout Exception

New Contributor

Hi,

I would like to ask for a little help with an error I am seeing on HBase 1.0.0.


I have:

- CDH 5.4 on Kerberized Cluster: 4 nodes
(Hdfs, YARN, Hive and Sentry services are working without issues)
- HBase 1.0.0-cdh5.4.4
(3 RegionServers, 1 Master Active, 1 REST Server, 1 Thrift Server)
- Zookeeper
(3 nodes)


When I connect to an HBase table (created via the shell) with the Java API, any method other than getName() (which works) throws this exception:

Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68248: row 'tb_name,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=myhostname,60020,1448983052715, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hostname/10.180.113.56:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hostname/10.180.113.56:60020 is closing. Call id=9, waitTime=2
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1233)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1204)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31889)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:349)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:193)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:306)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hostname/10.180.113.56:60020 is closing. Call id=9, waitTime=2
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1033)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:840)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:568)
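
In outline, my client code follows the standard connection path (a pseudocode sketch; the table name matches the trace, the row key is illustrative):

```
// pseudocode sketch of the client calls
conf = HBaseConfiguration.create()                      // Kerberos settings picked up from the classpath
connection = ConnectionFactory.createConnection(conf)
table = connection.getTable(TableName.valueOf("tb_name"))
table.getName()                                         // works
table.get(new Get(rowKey))                              // any data call throws the SocketTimeoutException
```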

I already tried increasing the RPC timeout, but that didn't work.
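
For reference, this is where I raised it (a sketch of the client-side hbase-site.xml; the values are illustrative):

```xml
<!-- client-side hbase-site.xml: raised timeouts (values illustrative) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>
</property>
```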

The table contains only one row with one column family.
Please let me know if someone has already seen this problem and can help.
Accessing HBase from the shell seems to work properly: I can scan the table, create new tables, and get the row using GET.

Thanks for your help

 

PS

I want to add some more info:

 

Result from hbase hbck:

 

15/12/07 15:10:54 ERROR master.TableLockManager: Unexpected ZooKeeper error when listing children
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase/table-lock
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:296)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:575)
at org.apache.hadoop.hbase.master.TableLockManager$ZKTableLockManager.getTableNames(TableLockManager.java:392)
at org.apache.hadoop.hbase.master.TableLockManager$ZKTableLockManager.visitAllLocks(TableLockManager.java:379)
at org.apache.hadoop.hbase.util.hbck.TableLockChecker.checkTableLocks(TableLockChecker.java:76)
at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixTableLocks(HBaseFsck.java:3026)
at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:629)
at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4440)
at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4243)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4231)

 

Result from hbase zkcli:


[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101)
at jline.TerminalFactory.get(TerminalFactory.java:159)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:227)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:219)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:207)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:311)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServer.main(ZooKeeperMainServer.java:136)

JLine support is disabled
15/12/07 15:13:35 INFO zookeeper.Login: TGT valid starting at: Mon Dec 07 15:13:12 CET 2015
15/12/07 15:13:35 INFO zookeeper.Login: TGT expires: Tue Dec 08 01:13:15 CET 2015
15/12/07 15:13:35 INFO zookeeper.Login: TGT refresh sleeping until: Mon Dec 07 23:35:04 CET 2015
15/12/07 15:13:35 INFO zookeeper.ClientCnxn: Opening socket connection to server lmldrl38cds001.generali.it/10.180.113.55:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
15/12/07 15:13:35 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.180.113.54:53649, server: lmldrl38cds001.generali.it/10.180.113.55:2181
15/12/07 15:13:35 INFO zookeeper.ClientCnxn: Session establishment complete on server hostname/10.180.113.55:2181, sessionid = 0x3516cba6a2b174f, negotiated timeout = 30000

 

Moreover, in the log accessible from Cloudera Manager I can see this line related to Kerberos authentication:

3:06:44.929 PM ERROR org.apache.zookeeper.server.auth.SaslServerCallbackHandler Failed to set name based on Kerberos authentication rules.

 

 

I hope this additional info helps identify the issue.

10 REPLIES 10

Re: HBase Socket Timeout Exception

Contributor

Were you able to find a way past this?

Re: HBase Socket Timeout Exception

New Contributor

I'm seeing a similar problem:

 

HBase throws an error when we query one large table. The error is as follows:

 

ERROR: Call id=58, waitTime=60001, operationTimeout=60000 expired.

 

Configuration property "HBase RegionServer Lease Period" is set to 3600000 ms, "HBase RegionServer Handler Count" is set to 60, "RPC Timeout" is set to 3600000 ms.

 

How can I fix this problem? How do I set "operationTimeout" for HBase?

Re: HBase Socket Timeout Exception

Master Guru

Re: HBase Socket Timeout Exception

New Contributor
That helped some, but when I run a SQL query on each table in Apache Phoenix, all queries work except one, which returns this error:



java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304: row '1455896856429_192.87.106.229_3976241770750533' on table 'rawnetflow'

Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60304: row '1455896856429_192.87.106.229_3976241770750533' on table 'rawnetflow'



Configuration property "HBase RegionServer Lease Period" is set to 3600000 ms (60 mins), "HBase RegionServer Handler Count" is set to 60, "RPC Timeout" is set to 3600000 ms (60 mins), "hbase.client.operation.timeout" is set to 1200000 ms (20 mins), "phoenix.query.timeoutMS" is set to 600000 ms (10 mins), and "phoenix.query.KeepAliveMs" is set to 600000 ms (10 mins).
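
For context, these are set in the client-side hbase-site.xml (a sketch of two of the values listed above; note the Phoenix property is spelled phoenix.query.timeoutMs in the Phoenix docs):

```xml
<!-- client-side hbase-site.xml fragment (sketch of settings listed above) -->
<property>
  <name>hbase.client.operation.timeout</name>
  <value>1200000</value> <!-- 20 min -->
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value> <!-- 10 min -->
</property>
```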


Where is that 60000ms timer being set?


Re: HBase Socket Timeout Exception

New Contributor

Thanks, but like I posted last week, that only seems to address one of the timeouts.

There appear to be others that I do not know how to set.

 

Any ideas?

 

thanks,

Nelson

 

Re: HBase Socket Timeout Exception

Master Collaborator

@NelsonRonkin, would you be so kind as to start a new topic for your query?  I believe since it is slightly different from the original poster's issue, folks may be missing your replies.  This might help you get a faster response.

 

Thank you

Re: HBase Socket Timeout Exception

Expert Contributor

For the Original Poster:

Your issue appears to be related to Kerberos for ZooKeeper. This guide [1] might help.

 

[1] http://www.cloudera.com/documentation/enterprise/5-4-x/topics/cm_sg_sec_troubleshooting.html
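
One setting from that area worth checking (a sketch, to be verified against the guide) is ZooKeeper's Kerberos principal-mapping flags, since the "Failed to set name based on Kerberos authentication rules" message points at principal-to-name mapping on the ZooKeeper server:

```
# ZooKeeper server JVM options (sketch; verify against the linked guide)
-Dzookeeper.kerberos.removeHostFromPrincipal=true
-Dzookeeper.kerberos.removeRealmFromPrincipal=true
```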


Re: HBase Socket Timeout Exception

Hi,

 

I am having the exact same issue while getting the count of an RDD with data loaded from HBase using newAPIHadoopRDD. I would appreciate it if you could share the solution if you got past the problem.

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Oct 18 11:35:56 CDT 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68856: row 'devices,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=cdhhost,60020,1476120301429, seqNum=0

	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
	at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
	at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
	at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:239)
	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1940)
	at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
	at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:440)
	at org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:46)
	at com.cisco.eng.sdaf.profiler.hbase.HbaseDataProfiler.sdafCatalogProfile(HbaseDataProfiler.java:227)
	at com.cisco.eng.sdaf.profiler.hbase.HbaseDataProfiler.main(HbaseDataProfiler.java:91)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68856: row 'devices,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=cdh-host,60020,1476120301429, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
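
In outline, the path in the trace above is the standard TableInputFormat route (a pseudocode sketch; class names from the Spark and HBase APIs, table name from the trace):

```
// pseudocode sketch of the failing path
conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "devices")
rdd  = sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                          ImmutableBytesWritable.class, Result.class)
rdd.count()   // getSplits() scans hbase:meta here, which is where the timeout fires
```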

 

Re: HBase Socket Timeout Exception

New Contributor
We stopped using HBase.