Created 08-10-2017 06:27 PM
I've set up an HBase cluster using Ambari. However, my HBase client throws the error below.
java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
    at org.apache.hadoop.hbase.client.MetaScanner.listTableRegionLocations(MetaScanner.java:343)
    at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:142)
    at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:118)
    at org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.computeRegionSplits(HBaseAnalyticsRecordStore.java:371)
    at org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.get(HBaseAnalyticsRecordStore.java:304)
    at org.wso2.carbon.analytics.dataservice.core.indexing.StagingIndexDataStore.loadEntries(StagingIndexDataStore.java:113)
    at org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordinator$StagingDataIndexWorker.run(IndexNodeCoordinator.java:994)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:717)
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:146)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:283)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:170)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
Ambari shows the below configs:
hdfs_user_nofile_limit 128000
hdfs_user_nproc_limit 65536
hbase_user_nofile_limit 32000
hbase_user_nproc_limit 16000
mapred_user_nofile_limit 32768
mapred_user_nproc_limit 65536
And in /etc/security/limits.conf I set the values below:
* soft nofile 4096
* hard nofile 128000
* soft nproc 20000
* hard nproc 65536
What could be the reason for this error? Should I increase the hbase_user_nofile_limit value further?
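For what it's worth, a quick way to see the limits a session actually inherits is a sketch like this (assuming a Linux host; run it as the user that launches the HBase client):

```shell
# Show the limits in effect for the current shell session. Note that
# "unable to create new native thread" usually points at the nproc
# (max user processes) limit rather than nofile or heap: every Java
# thread counts against nproc.
ulimit -Sn   # soft limit on open files (nofile)
ulimit -Su   # soft limit on user processes/threads (nproc)
ulimit -Hn   # hard limit on open files
ulimit -Hu   # hard limit on user processes/threads
```

A process keeps the limits it started with, so values edited in limits.conf only show up in sessions and services started after the change.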
Created 08-10-2017 06:40 PM
What are your memory settings for HBase? They seem too low.
Can you give the settings from Ambari UI ---> HBase ---> Configs ---> Settings:
HBase RegionServer Maximum Memory, HBase Master Maximum Memory
Cheers
Created 08-11-2017 03:07 AM
The cluster has 16 GB servers, and these are the current memory settings.
HBase Master Maximum Memory 1G
HBase RegionServer Maximum Memory 2G
RegionServers maximum value for -Xmn 4000MB
RegionServers -Xmn in -Xmx ratio 0.2
Do you think the RegionServers maximum value for -Xmn and RegionServers -Xmn in -Xmx ratio should also be changed?
Created 08-10-2017 06:40 PM
You likely need to increase the number of open files past 4K. This is a client application -- the limits on the server processes are not relevant in this case.
You need to ensure that the user running your client application has sufficient resources to run.
Created 08-10-2017 06:51 PM
The error you have given shows:
java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
It looks like the RegionServer does not have enough memory to start the service. You can check the heap space for both the HBase Master and the RegionServers from Ambari --> HBase --> Configs --> Settings; you can start with as low as 2 GB. You can also check the GC (garbage collector) logs for memory allocation failures.
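If GC logging isn't already enabled, it can be switched on in hbase-env.sh. A minimal sketch for a JDK 8 JVM (the log file path is just an example, adjust to your layout):

```shell
# hbase-env.sh -- append GC logging flags (JDK 8 syntax).
# The log path below is an assumption; point it wherever you keep logs.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"
```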
Created 08-11-2017 04:05 AM
I checked the GC logs, and it seems the young generation size is the problem.
2017-08-11T09:27:53.681+0530: 5383.060: [GC (Allocation Failure) 2017-08-11T09:27:53.682+0530: 5383.061: [ParNew: 71464K->3524K(77440K), 0.0072992 secs] 85111K->17172K(249472K), 0.0076683 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
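That line itself is worth decoding: the capacities in parentheses are the young-generation and total-heap sizes of the JVM that wrote the log. A rough sketch of pulling them out (assuming this exact ParNew log format):

```shell
# Extract the capacities in parentheses from a ParNew GC log line
# and print them in MB.
line='[ParNew: 71464K->3524K(77440K), 0.0072992 secs] 85111K->17172K(249472K), 0.0076683 secs'
echo "$line" | grep -o '([0-9]*K)' | tr -d '(K)' |
  awk 'NR==1 {printf "young gen: %d MB\n", $1/1024}
       NR==2 {printf "total heap: %d MB\n", $1/1024}'
# Prints: young gen: 75 MB / total heap: 243 MB
```

A ~76 MB young generation inside a ~244 MB heap is far below the 2 GB configured for the RegionServer, so this log line may come from a different, smaller JVM (e.g. the client) than the one being tuned.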
These were my memory settings:
HBase Master Maximum Memory 1G
HBase RegionServer Maximum Memory 2G
RegionServers maximum value for -Xmn 4000MB
RegionServers -Xmn in -Xmx ratio 0.2
I updated HBase Master Maximum Memory to 4G and HBase RegionServer Maximum Memory to 6G, but the problem still exists. Should I increase the RegionServers maximum value for -Xmn and RegionServers -Xmn in -Xmx ratio as well?
These are 16GB servers BTW.
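On the -Xmn question: as far as I understand Ambari's hbase-env template, the effective -Xmn is derived from the ratio and capped by the maximum, so with the new settings it works out as follows (a sketch of the arithmetic, not Ambari's actual code):

```shell
# Derive the effective -Xmn, assuming Ambari applies
# ratio * -Xmx, clamped to the configured -Xmn maximum.
xmx_mb=6144      # HBase RegionServer Maximum Memory = 6G
ratio_pct=20     # "RegionServers -Xmn in -Xmx ratio" = 0.2
xmn_max_mb=4000  # "RegionServers maximum value for -Xmn"

xmn_mb=$(( xmx_mb * ratio_pct / 100 ))
if [ "$xmn_mb" -gt "$xmn_max_mb" ]; then xmn_mb=$xmn_max_mb; fi
echo "-Xmn${xmn_mb}m"   # prints: -Xmn1228m
```

Under that assumption, neither knob is the bottleneck here: the 4000 MB cap would only kick in with an -Xmx of 20 GB or more.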
Created 08-15-2017 11:18 AM
The issue was fixed after I set higher values in the /etc/security/limits.conf file. The new values are below.
* soft nofile 4096
* hard nofile 128000
hdfs hard nofile 128000
hbase hard nofile 128000
mapred hard nofile 32768
* soft nproc 20000
* hard nproc 65536
hdfs - nproc 65536
hbase - nproc 65536
mapred - nproc 65536
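Worth adding for anyone hitting this later: limits.conf changes only apply to sessions started afterwards, so the daemons need to be restarted. The values a running process actually inherited can be confirmed from /proc, roughly like this (the pgrep pattern is just an example, adjust to your process name):

```shell
# Read the live limits of a running RegionServer.
# The lookup pattern "HRegionServer" is an assumption about the
# process command line -- adjust it for your deployment.
pid=$(pgrep -f HRegionServer | head -n1)
grep -E 'Max (open files|processes)' "/proc/$pid/limits"
```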
Created 08-15-2017 11:34 AM
Good to know. Yes, the problem was the OS-level allocation limit on native threads.
So you can now close the thread by accepting the best answer.