Member since: 06-27-2017
Posts: 13
Kudos Received: 0
Solutions: 0
09-04-2018
03:18 PM
I have a simple HBase cluster with 3 nodes. Each node has only 16GB of memory. I get the errors below, and then the RegionServer shuts down automatically.
2018-07-10 00:31:11,184 ERROR [RS_CLOSE_REGION-XXXX:16020-2] regionserver.HRegion: Memstore size is XXXXXX
2018-07-10 00:31:14,468 ERROR [sync.2] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try
Here is the full log before the shutdown hook was triggered.
2018-07-10 00:31:11,183 INFO [RS_CLOSE_REGION-XXXX:16020-0] regionserver.HRegion: Closed ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_PROCESSEDSTATISTICS,af255818-e3ab-b48b-4182-dd292c0156d5,1523328286019.65928cb6d569e0c70ede8815dfbaa169.
2018-07-10 00:31:11,184 INFO [StoreCloserThread-ANX_X1000___INDEX_STAGING_DATA__1_23AC161C-6BFE-486E-83A6-1B6A2E7F32E7,81167a42-bb2b-a385-d70b-f8cd2c82befb,1523990247486.12ad2399cd12b097e416b3c08c836234.-1] regionserver.HStore: Closed carbon-analytics-data
2018-07-10 00:31:11,184 INFO [StoreCloserThread-ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_PROCESSEDSTATISTICS,1f936a13-5caf-44ee-1c0d-c7eee6adfa5d,1523186311575.4a1a3fd2b59c50af9e3c7cc6c5028121.-1] regionserver.HStore: Closed carbon-analytics-data
2018-07-10 00:31:11,184 ERROR [RS_CLOSE_REGION-XXXX:16020-1] regionserver.HRegion: Memstore size is 3733952
2018-07-10 00:31:11,184 INFO [RS_CLOSE_REGION-XXXX:16020-1] regionserver.HRegion: Closed ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_PROCESSEDSTATISTICS,1f936a13-5caf-44ee-1c0d-c7eee6adfa5d,1523186311575.4a1a3fd2b59c50af9e3c7cc6c5028121.
2018-07-10 00:31:11,184 ERROR [RS_CLOSE_REGION-XXXX:16020-2] regionserver.HRegion: Memstore size is 833544
2018-07-10 00:31:11,184 INFO [RS_CLOSE_REGION-XXXX:16020-2] regionserver.HRegion: Closed ANX_X1000___INDEX_STAGING_DATA__1_23AC161C-6BFE-486E-83A6-1B6A2E7F32E7,81167a42-bb2b-a385-d70b-f8cd2c82befb,1523990247486.12ad2399cd12b097e416b3c08c836234.
2018-07-10 00:31:11,185 INFO [StoreCloserThread-ANX_X1234_ORG_WSO2_APIMGT_STATISTICS_RESPONSE,,1514542454206.42c97d2efc0cf0e55b79cae2c50b30be.-1] regionserver.HStore: Closed carbon-analytics-data
2018-07-10 00:31:11,185 INFO [RS_CLOSE_REGION-XXXX:16020-0] regionserver.HRegion: Closed ANX_X1234_ORG_WSO2_APIMGT_STATISTICS_RESPONSE,,1514542454206.42c97d2efc0cf0e55b79cae2c50b30be.
2018-07-10 00:31:11,185 INFO [StoreCloserThread-ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_PROCESSEDSTATISTICS,ebc9678c-e35a-4096-bb26-c9e1cbcf82fd,1523522965118.07ef074275e9cc613334bb067098af41.-1] regionserver.HStore: Closed carbon-analytics-data
2018-07-10 00:31:11,186 ERROR [RS_CLOSE_REGION-XXXX:16020-2] regionserver.HRegion: Memstore size is 5080768
2018-07-10 00:31:11,186 INFO [RS_CLOSE_REGION-XXXX:16020-2] regionserver.HRegion: Closed ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_PROCESSEDSTATISTICS,ebc9678c-e35a-4096-bb26-c9e1cbcf82fd,1523522965118.07ef074275e9cc613334bb067098af41.
2018-07-10 00:31:11,186 INFO [StoreCloserThread-IDX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_REPONSETIME_SUMMARY_PER_DAY,,1525628114343.c9c49d54c6e5a9fed1ec7d66f09133c2.-1] regionserver.HStore: Closed carbon-analytics-index
2018-07-10 00:31:11,186 INFO [StoreCloserThread-ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_OPERATOR_SUMMARY,,1514542511538.c548a01476c09769345b1318f41b1b3b.-1] regionserver.HStore: Closed carbon-analytics-data
2018-07-10 00:31:11,186 INFO [RS_CLOSE_REGION-XXXX:16020-0] regionserver.HRegion: Closed IDX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_REPONSETIME_SUMMARY_PER_DAY,,1525628114343.c9c49d54c6e5a9fed1ec7d66f09133c2.
2018-07-10 00:31:11,187 ERROR [RS_CLOSE_REGION-XXXX:16020-1] regionserver.HRegion: Memstore size is 488
2018-07-10 00:31:11,187 INFO [RS_CLOSE_REGION-XXXX:16020-1] regionserver.HRegion: Closed ANX_X1234_ORG_XXXX_ANALYTICS_HUB_STREAM_OPERATOR_SUMMARY,,1514542511538.c548a01476c09769345b1318f41b1b3b.
2018-07-10 00:31:11,201 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.HRegionServer: stopping server XXXX.hbase02.com,16020,1523886890292; all regions closed.
2018-07-10 00:31:14,449 WARN [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] hdfs.DFSClient: Slow waitForAckedSeqno took 63148ms (threshold=30000ms)
2018-07-10 00:31:14,460 INFO [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.FSHLog: Slow sync cost: 63158 ms, current pipeline: [DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]
2018-07-10 00:31:14,468 ERROR [sync.2] wal.FSHLog: Error syncing, request close of WAL
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1059)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1122)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1280)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:512)
2018-07-10 00:31:14,535 WARN [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.FSHLog: Failed sync-before-close but no outstanding appends; closing WAL: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
2018-07-10 00:31:14,536 WARN [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1059)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1122)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1280)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:512)
2018-07-10 00:31:14,536 WARN [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.FSHLog: Riding over failed WAL close of hdfs://XXXX/apps/hbase/data/WALs/XXXX.hbase02.com,16020,1523886890292/XXXX.hbase02.com%2C16020%2C1523886890292..meta.1531159209078.meta, cause="Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.", errors=1; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
2018-07-10 00:31:14,542 INFO [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.FSHLog: Rolled WAL /apps/hbase/data/WALs/XXXX.hbase02.com,16020,1523886890292/XXXX.hbase02.com%2C16020%2C1523886890292..meta.1531159209078.meta with entries=0, filesize=83 B; new WAL /apps/hbase/data/WALs/XXXX.hbase02.com,16020,1523886890292/XXXX.hbase02.com%2C16020%2C1523886890292..meta.1531162811268.meta
2018-07-10 00:31:14,543 INFO [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] wal.FSHLog: Archiving hdfs://XXXX/apps/hbase/data/WALs/XXXX.hbase02.com,16020,1523886890292/XXXX.hbase02.com%2C16020%2C1523886890292..meta.1531159209078.meta to hdfs://XXXX/apps/hbase/data/oldWALs/XXXX.hbase02.com%2C16020%2C1523886890292..meta.1531159209078.meta
2018-07-10 00:31:14,554 INFO [RS_OPEN_META-XXXX:16020-0-MetaLogRoller] regionserver.LogRoller: LogRoller exiting.
2018-07-10 00:31:14,592 WARN [regionserver/XXXX.hbase02.com/172.26.76.78:16020] wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1059)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1122)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1280)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1005)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:512)
2018-07-10 00:31:14,593 ERROR [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.HRegionServer: Shutdown / close of WAL failed: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]], original=[DatanodeInfoWithStorage[172.26.76.78:50010,DS-c8bdc5a8-0cef-4dc8-b652-b71429e2f728,DISK], DatanodeInfoWithStorage[172.26.76.79:50010,DS-d86c5793-44d8-4826-a233-744319ef269d,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
2018-07-10 00:31:14,603 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.Leases: regionserver/XXXX.hbase02.com/172.26.76.78:16020 closing leases
2018-07-10 00:31:14,675 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.Leases: regionserver/XXXX.hbase02.com/172.26.76.78:16020 closed leases
2018-07-10 00:31:14,682 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] hbase.ChoreService: Chore service for: XXXX.hbase02.com,16020,1523886890292 had [[ScheduledChore: Name: MovedRegionsCleaner for region XXXX.hbase02.com,16020,1523886890292 Period: 120000 Unit: MILLISECONDS]] on shutdown
2018-07-10 00:31:14,816 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] ipc.RpcServer: Stopping server on 16020
2018-07-10 00:31:14,816 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: stopping
2018-07-10 00:31:14,821 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2018-07-10 00:31:14,821 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2018-07-10 00:31:14,893 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] zookeeper.ZooKeeper: Session: 0x1629a093dc7019b closed
2018-07-10 00:31:14,893 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.HRegionServer: stopping server XXXX.hbase02.com,16020,1523886890292; zookeeper connection closed.
2018-07-10 00:31:14,893 INFO [regionserver/XXXX.hbase02.com/172.26.76.78:16020] regionserver.HRegionServer: regionserver/XXXX.hbase02.com/172.26.76.78:16020 exiting
2018-07-10 00:31:14,894 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2018-07-10 00:31:15,670 INFO [HBase-Metrics2-1] impl.MetricsSystemImpl: Stopping HBase metrics system...
2018-07-10 00:31:15,670 INFO [timeline] impl.MetricsSinkAdapter: timeline thread interrupted.
2018-07-10 00:31:16,184 INFO [HBase-Metrics2-1] impl.MetricsSystemImpl: HBase metrics system stopped.
2018-07-10 00:31:16,458 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2832)
2018-07-10 00:31:16,687 INFO [HBase-Metrics2-1] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2018-07-10 00:31:16,819 INFO [pool-4-thread-1] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@324c64cd
2018-07-10 00:31:16,819 INFO [pool-4-thread-1] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2018-07-10 00:31:16,873 INFO [pool-4-thread-1] regionserver.ShutdownHook: Shutdown hook finished.
Please help me find the root cause of this.
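In case it is relevant, this is how I check which values the HDFS client configuration actually resolves for the property named in the error. It is only a rough sketch: it assumes the HBase/Hadoop client jars and the cluster's config files are on the classpath, and the '.enable' and '.best-effort' keys are the related settings from hdfs-default.xml. As I understand it, the RegionServer's own WAL writer picks these up from the server-side hbase-site.xml/hdfs-site.xml rather than from application code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckDatanodeReplacementPolicy {
    public static void main(String[] args) {
        // Loads hbase-default.xml / hbase-site.xml (plus core-site.xml) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // hdfs-site.xml is not always pulled in automatically; add it explicitly if it is on the classpath.
        conf.addResource("hdfs-site.xml");

        // Property names come from the error message and hdfs-default.xml.
        String[] keys = {
            "dfs.replication",
            "dfs.client.block.write.replace-datanode-on-failure.enable",
            "dfs.client.block.write.replace-datanode-on-failure.policy",
            "dfs.client.block.write.replace-datanode-on-failure.best-effort"
        };
        for (String key : keys) {
            // get() returns null when the key is not set in any loaded resource.
            System.out.println(key + " = " + conf.get(key));
        }
    }
}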
Labels: Apache HBase
08-15-2017
11:18 AM
The issue was fixed after I set higher values in the /etc/security/limits.conf file. Below are the new values.
* soft nofile 4096
* hard nofile 128000
hdfs hard nofile 128000
hbase hard nofile 128000
mapred hard nofile 32768
* soft nproc 20000
* hard nproc 65536
hdfs soft/hard nproc 65536
hbase soft/hard nproc 65536
mapred soft/hard nproc 65536
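To double-check that the new nofile limit is what the HBase JVM actually sees, I run a small check like this as the hbase user and also look at /proc/<pid>/limits for the running RegionServer. This is just a quick sketch; com.sun.management.UnixOperatingSystemMXBean is specific to HotSpot/OpenJDK-style JVMs.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // The Unix-specific MXBean exposes the per-process file descriptor limit.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            System.out.println("max file descriptors:  " + unix.getMaxFileDescriptorCount());
            System.out.println("open file descriptors: " + unix.getOpenFileDescriptorCount());
        } else {
            System.out.println("Unix-specific MXBean not available; check ulimit -n instead.");
        }
    }
}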
08-11-2017
04:05 AM
I checked the GC logs, and it seems the young generation size is the problem.
2017-08-11T09:27:53.681+0530: 5383.060: [GC (Allocation Failure) 2017-08-11T09:27:53.682+0530: 5383.061: [ParNew: 71464K->3524K(77440K), 0.0072992 secs] 85111K->17172K(249472K), 0.0076683 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
These were my memory settings:
HBase Master Maximum Memory: 1G
HBase RegionServer Maximum Memory: 2G
RegionServers maximum value for -Xmn: 4000MB
RegionServers -Xmn in -Xmx ratio: 0.2
I updated HBase Master Maximum Memory to 4G and HBase RegionServer Maximum Memory to 6G, but the problem still exists. Should I increase the RegionServers maximum value for -Xmn and the RegionServers -Xmn in -Xmx ratio as well? These are 16GB servers, by the way.
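To see what the young generation actually resolves to under a given set of flags, I print each memory pool's maximum from a JVM started with the same -Xmx/-Xmn options. This is only a local sketch; the running RegionServer itself would have to be inspected over JMX.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapPoolSizes {
    public static void main(String[] args) {
        // With ParNew/CMS the young generation shows up as "Par Eden Space"
        // plus "Par Survivor Space"; getMax() is -1 when a pool has no fixed maximum.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            long maxMb = usage.getMax() < 0 ? -1 : usage.getMax() / (1024 * 1024);
            System.out.println(pool.getName() + ": max = " + maxMb + " MB");
        }
    }
}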
08-11-2017
03:07 AM
The cluster has 16GB servers, and these are the current memory settings:
HBase Master Maximum Memory: 1G
HBase RegionServer Maximum Memory: 2G
RegionServers maximum value for -Xmn: 4000MB
RegionServers -Xmn in -Xmx ratio: 0.2
Do you think the RegionServers maximum value for -Xmn and the RegionServers -Xmn in -Xmx ratio should also be changed?
08-10-2017
06:27 PM
I've set up an HBase cluster using Ambari. However, my HBase client throws the error below.
java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.MetaScanner.listTableRegionLocations(MetaScanner.java:343)
at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:142)
at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:118)
at org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.computeRegionSplits(HBaseAnalyticsRecordStore.java:371)
at org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.get(HBaseAnalyticsRecordStore.java:304)
at org.wso2.carbon.analytics.dataservice.core.indexing.StagingIndexDataStore.loadEntries(StagingIndexDataStore.java:113)
at org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator$StagingDataIndexWorker.run(IndexNodeCoordinator.java:994)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:146)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:283)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:170)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
Ambari shows the configs below:
hdfs_user_nofile_limit 128000
hdfs_user_nproc_limit 65536
hbase_user_nofile_limit 32000
hbase_user_nproc_limit 16000
mapred_user_nofile_limit 32768
mapred_user_nproc_limit 65536
And in /etc/security/limits.conf I set the values below:
* soft nofile 4096
* hard nofile 128000
* soft nproc 20000
* hard nproc 65536
What could be the reason for this error? Should I increase the hbase_user_nofile_limit value further?
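For what it's worth, when the error appears I also check how many threads the client JVM already has, since 'unable to create new native thread' usually means the process hit the user's nproc limit or ran out of native memory for thread stacks. A quick sketch of that check:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        // Live and peak thread counts for this JVM, to compare against the nproc limit.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount());
        System.out.println("peak threads: " + threads.getPeakThreadCount());
    }
}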
Labels: Apache Hadoop, Apache HBase