Created 05-06-2016 01:27 PM
While trying to start HBase with
[hbase@hdpclient1 ~]$ /usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25
I am getting the error below:
Error: Could not find or load main class org.apache.hadoop.hbase.master.HMaster.
Please help me. Thanks in advance.
Created 05-09-2016 06:00 AM
When you start HBase manually, the classpath needs to be set correctly, and in this case it looks like it is not. Review hbase-env in the Ambari HBase configs, or /etc/hbase/conf/hbase-env.sh on the node where HBase is installed, to see what needs to be set before running the command manually.
We should also focus on why HBase goes down after you start it from Ambari. Please check /var/log/hbase/hbase*master*log to see why it failed after the service startup came up fine. That should help resolve your issue.
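As a sketch of what a manual start with a correctly built classpath might look like (HDP-style paths assumed; adjust for your layout):

```shell
#!/bin/sh
# Hypothetical manual start of the HBase master with an explicit classpath.
# HDP-style paths are assumed here; adjust HBASE_HOME/HBASE_CONF_DIR as needed.
HBASE_HOME=/usr/hdp/current/hbase-master
export HBASE_CONF_DIR=/etc/hbase/conf

# Let the hbase wrapper compute the full classpath, then reuse it.
CP=$("$HBASE_HOME/bin/hbase" classpath)
export CLASSPATH="$CLASSPATH:$CP"

"$HBASE_HOME/bin/hbase-daemon.sh" start master
```

This only works around the symptom, though; if Ambari-started HBase also dies, the master log is still the place to look.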
Created 05-06-2016 02:37 PM
Can you try the command below in the same shell and then run the script again?
export CLASSPATH=$CLASSPATH:`hbase classpath`
Also, services should always be started/stopped from Ambari.
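After running the export, you can sanity-check that HBase jars actually made it onto the classpath (a sketch; jar names vary by install):

```shell
# Append the classpath computed by the hbase wrapper, then list its
# entries one per line and look for HBase jars.
export CLASSPATH="$CLASSPATH:$(hbase classpath)"
echo "$CLASSPATH" | tr ':' '\n' | grep -i hbase | head
```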
Created 05-06-2016 03:34 PM
What happens if you start HBase from the Ambari UI? I believe Ranger is not mandatory for HBase.
Created 05-09-2016 05:03 AM
Hi Jitendra,
It starts in the Ambari UI but stops immediately. I am unable to start it from the terminal either.
The same happens with the HBase RegionServer.
Thank you
Created 05-09-2016 10:30 AM
Can you please share the log messages which you are seeing?
Created 05-06-2016 03:08 PM
Hi Jitendra,
Thanks for your quick help on this.
I was able to start the HBase Master and RegionServers separately.
But they stop as soon as both are started.
The error log says "Ranger admin not installed". Is Ranger admin required for this?
Created 05-08-2016 06:33 PM
Is this a new installation?
You should start HBase from Ambari. HBase requires ZooKeeper to be available.
Please paste the error log for the HBase Master.
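Since HBase depends on ZooKeeper, a quick quorum health check might look like this (a sketch; `zk1`/`zk2`/`zk3` are placeholders for your quorum hosts, and `nc` must be installed):

```shell
# Send the ZooKeeper "ruok" four-letter command to each quorum member.
# A healthy server answers "imok"; no reply within 2 seconds is flagged.
for host in zk1 zk2 zk3; do
  printf '%s: ' "$host"
  echo ruok | nc -w 2 "$host" 2181 || echo "no response"
  echo
done
```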
Created 05-09-2016 05:10 AM
Hi Pranay,
This is a new installation. ZooKeeper is already running on all 3 client nodes.
Does the ZooKeeper server need to run on all client nodes?
Created 10-24-2018 11:34 AM
I am also facing a similar issue. I am getting this error:
[root@ip-172-31-47-215 hbase]# cat hbase-root-master-ip-172-31-47-215.us-west-2.compute.internal.out
Error: Could not find or load main class exists
If I look into the log file, I just see the following message; there is no error reported there:
Wed Oct 24 08:51:53 UTC 2018 Starting master on ip-172-31-47-215.us-west-2.compute.internal
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority     (-e) 0
file size               (blocks, -f) unlimited
pending signals         (-i) 63362
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files              (-n) 1024
pipe size               (512 bytes, -p) 8
POSIX message queues    (bytes, -q) 819200
real-time priority      (-r) 0
stack size              (kbytes, -s) 8192
cpu time                (seconds, -t) unlimited
max user processes      (-u) 63362
virtual memory          (kbytes, -v) unlimited
file locks              (-x) unlimited
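As an aside, the `open files (-n) 1024` value in the output above is the OS default and is generally considered too low for HBase, which can hold many region and WAL files open at once. A sketch of checking and raising it (the limit values are illustrative, not from this thread):

```shell
# Show the current per-process open-files limit for this shell.
ulimit -n

# To raise it persistently, add lines like these (illustrative values)
# to /etc/security/limits.conf, then log in again as the hbase user:
#   hbase  -  nofile  32768
#   hbase  -  nproc   16000
```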
Below is the output of the gc log
[root@ip-172-31-47-215 hbase]# cat gc.log-201810240851
Java HotSpot(TM) 64-Bit Server VM (25.181-b13) for linux-amd64 JRE (1.8.0_181-b13), built on Jul 7 2018 00:56:38 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16265880k(388408k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -XX:InitialHeapSize=260254080 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=348966912 -XX:MaxTenuringThreshold=6 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError=kill -9 %p -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 76800K, used 10973K [0x00000000c0000000, 0x00000000c5350000, 0x00000000d4cc0000)
eden space 68288K, 16% used [0x00000000c0000000, 0x00000000c0ab7608, 0x00000000c42b0000)
from space 8512K, 0% used [0x00000000c42b0000, 0x00000000c42b0000, 0x00000000c4b00000)
to space 8512K, 0% used [0x00000000c4b00000, 0x00000000c4b00000, 0x00000000c5350000)
concurrent mark-sweep generation total 170688K, used 0K [0x00000000d4cc0000, 0x00000000df370000, 0x0000000100000000)
Metaspace used 2984K, capacity 4480K, committed 4480K, reserved 1056768K
class space used 313K, capacity 384K, committed 384K, reserved 1048576K
Any idea how to fix this?
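For what it's worth, an error of the form `Could not find or load main class <word>` usually means the JVM found a stray token among its arguments and treated it as the class name; here the word `exists` has most likely leaked from a malformed line or unbalanced quoting in hbase-env.sh. A sketch of how one might hunt for it (paths assumed, not confirmed by this thread):

```shell
# Look for the stray token ("exists" per the error above) in hbase-env.sh.
grep -n 'exists' /etc/hbase/conf/hbase-env.sh

# Then expand the JVM options HBase would use and print any token that
# does not start with "-", i.e. anything java could mistake for a class name.
. /etc/hbase/conf/hbase-env.sh 2>/dev/null
echo "$HBASE_OPTS" | tr ' ' '\n' | grep -v '^-' | grep -v '^$'
```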