Support Questions


HBase master crashing after startup

New Contributor

I wonder if anyone has a solution, or can at least point me in the right direction.

Recently I did an upgrade of the entire stack: Ambari from v2.2.2.0 to 2.4.2.0 and HDP from 2.4.2.0 to 2.5.3.0.

The upgrade went more or less OK, but now my HBase master falls over after about 30-35 seconds.

This is the message I get in the log:

2017-01-05 11:20:54,000 INFO [main] util.ByteBufferArray: Allocating buffers total=10 GB, sizePerBuffer=4 MB, count=2560, direct=true
2017-01-05 11:20:54,695 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2515)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2529)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:658)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
    at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
    at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
    at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:217)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:564)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:411)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:25
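The first INFO line already shows the size of the request: BucketCache is trying to reserve 2560 direct buffers of 4 MB each, i.e. 10 GB of off-heap memory, which exceeds whatever -XX:MaxDirectMemorySize the master JVM was started with. A quick sanity check of that arithmetic, using only the values from the log line (class name is illustrative):

```java
public class BucketCacheMath {
    public static void main(String[] args) {
        // Values from the INFO log line: sizePerBuffer=4 MB, count=2560
        long sizePerBuffer = 4L * 1024 * 1024;
        long count = 2560;
        long totalDirect = sizePerBuffer * count;
        // BucketCache reserves this much direct (off-heap) memory up front;
        // if -XX:MaxDirectMemorySize is lower, ByteBuffer.allocateDirect throws
        // java.lang.OutOfMemoryError: Direct buffer memory during startup.
        System.out.println(totalDirect / (1024L * 1024 * 1024) + " GB"); // prints "10 GB"
    }
}
```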

This is exactly the same as https://issues.apache.org/jira/browse/AMBARI-18238?jql=project%20%3D%20AMBARI%20AND%20fixVersion%20%...

Having said that, the bug report states it was addressed in 2.4.1, and I'm on 2.4.2.

Thanks

1 ACCEPTED SOLUTION

Contributor

Hello. If you're not using off-heap memory (BucketCache), you can try disabling the three configuration properties and the one environment variable setting that get added during the Ambari upgrade.

Using Ambari, modify your HBase configuration and blank out the following properties:
hbase.bucketcache.size
hbase.bucketcache.ioengine
hbase.bucketcache.percentage.in.combinedcache

Modify the hbase-env template:
comment out the line: export HBASE_REGIONSERVER_OPTS = ... -XX:MaxDirectMemorySize

Restart all affected services.
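For reference, after the edit the relevant part of the hbase-env template would look roughly like this. This is a sketch only: the actual HBASE_REGIONSERVER_OPTS line, its other flags, and the elided MaxDirectMemorySize value vary per cluster, so the "..." placeholders stand for whatever Ambari put there.

```shell
# hbase-env template excerpt (sketch; your existing line will differ)
# Commented out so the JVM no longer gets a MaxDirectMemorySize cap
# that BucketCache's up-front direct-buffer allocation would exceed:
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS ... -XX:MaxDirectMemorySize=..."
```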


3 REPLIES


New Contributor

That did the trick... thanks

Contributor

I'm happy to hear that worked out for you. Feel free to accept the answer if you're happy with it.

Thanks!