HBASE issue during cluster setup


We are trying to install our first cluster. All components seemed to install OK; however, during the start phase, one node reported that the Check HBase step had failed, although this did not indicate that the entire installation had failed.

We are therefore unsure whether we can proceed with the installation, as we also saw issues with Flume; none of the checks or starts for Flume returned any output on stdout or stderr. A number of components after Flume didn't return any output either.

The error message is as follows:

stderr: 
Python script has been killed due to timeout after waiting 300 secs
 stdout:
2016-05-23 09:11:41,600 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-23 09:11:41,607 - File['/var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh'] {'content': StaticFile('hbaseSmokeVerify.sh'), 'mode': 0755}
2016-05-23 09:11:41,609 - Writing File['/var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh'] because it doesn't exist
2016-05-23 09:11:41,609 - Changing permission for /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh from 644 to 755
2016-05-23 09:11:41,612 - File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] {'content': Template('hbase-smoke.sh.j2'), 'mode': 0755}
2016-05-23 09:11:41,612 - Writing File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] because it doesn't exist
2016-05-23 09:11:41,612 - Changing permission for /var/lib/ambari-agent/tmp/hbase-smoke.sh from 644 to 755
2016-05-23 09:11:41,613 - Execute[' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id170a6e31_date112316 /usr/hdp/current/hbase-client/bin/hbase'] {'logoutput': True, 'tries': 6, 'user': 'ambari-qa', 'try_sleep': 5}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR: Can't get the locations
Here is some help for this command:
Start disable of named table:
  hbase> disable 't1'
  hbase> disable 'ns1:t1'
ERROR: Can't get the locations
Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'
ERROR: Can't get master address from ZooKeeper; znode data == null
Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary (dictionaries are described below in main help output), necessarily including NAME attribute.
Examples:
Create a table with namespace=ns1 and table qualifier=t1
  hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}
Create a table with namespace=default and table qualifier=t1
  hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
  hbase> # The above in shorthand would be the following:
  hbase> create 't1', 'f1', 'f2', 'f3'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
  hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Table configuration options can be put at the end.
Examples:
  hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
  hbase> # Optionally pre-split the table into NUMREGIONS, using
  hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', REGION_REPLICATION => 2, CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
You can also keep around a reference to the created table:
  hbase> t1 = create 't1', 'f1'
Which gives you a reference to the table named 't1', on which you can then
call methods.
1 ACCEPTED SOLUTION


Thanks for the replies, everyone; the issue is sorted. During the Falcon installation, the root filesystem filled up. Falcon appeared to be installed OK according to RPM, but the installation wasn't clean.

We increased the space and restarted the relevant components, and all is now OK. Thanks, all.
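For anyone who hits the same thing later, a quick way to confirm this failure mode (standard tools; the du targets below are just hypothetical starting points for finding what filled the disk):

df -h /                                            # how full is the root filesystem?
sudo du -xsh /var/* 2>/dev/null | sort -h | tail   # largest directories under /var

du -x stays on one filesystem, so it won't wander into mounted volumes while you hunt for the culprit.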


7 REPLIES


Can you check whether your HBase Master is running or not?

ERROR: Can't get master address from ZooKeeper; znode data == null 
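If you want to check by hand, here is a minimal sketch, assuming a standard HDP layout and a non-Kerberized cluster (the znode name and the ZooKeeper host/port are assumptions; adjust for your install):

# On the node that should host the Master: is the HMaster JVM up at all?
ps -ef | grep [H]Master

# Ask ZooKeeper directly for the master znode; on HDP non-Kerberos
# clusters the parent znode is usually /hbase-unsecure (plain /hbase elsewhere)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 get /hbase-unsecure/master

If the second command returns no data, the Master never registered with ZooKeeper, which is exactly what "znode data == null" means.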


This is during the initial setup. I can check the ZooKeeper instances; however, the issue seems to start earlier than the ZooKeeper entry:

ERROR: Can't get the locations
Here is some help for this command:
Start disable of named table:
  hbase> disable 't1'
  hbase> disable 'ns1:t1'

Super Guru

It sounds like the initial start of HBase didn't actually happen successfully, but Ambari didn't notice right away. It thought the processes had started, but when it ran a smoke test to verify that HBase was up and running, the test failed (this error is indicative of the Master not actually running, as Ted hints at).
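If it helps, you can re-run essentially the same smoke test by hand using the paths from the log above (a sketch; the id170a6e31_date112316 table suffix was generated per run, so it is simpler to just probe the shell directly):

# Open the HBase shell as the smoke-test user, with the same config dir as the log
su - ambari-qa -c "/usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell"

# Inside the shell, two quick liveness probes:
hbase> status 'simple'
hbase> list

If the Master is down, both will fail with the same "Can't get master address from ZooKeeper" error.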

Master Collaborator

Can you check the Master log to see if there are more clues there?

Thanks
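For reference, with HDP defaults the Master log lands under /var/log/hbase (an assumption; the log directory and the hbase service user can be changed in hbase-env, and the hostname suffix in the filename varies):

# Usual default filename pattern: hbase-<user>-master-<host>.log
tail -n 200 /var/log/hbase/hbase-hbase-master-*.log

# Or scan the whole directory for startup failures
grep -iE 'ERROR|FATAL' /var/log/hbase/*.log | tail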

Master Collaborator

You can view the status of HBase using Ambari:

x.y.z:8080/#/main/services/HBASE/summary

where x.y.z is the Ambari gateway.
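The same status is also available from the Ambari REST API if you prefer the command line (a sketch; admin:admin and CLUSTER_NAME are placeholders for your own credentials and cluster name):

curl -u admin:admin http://x.y.z:8080/api/v1/clusters/CLUSTER_NAME/services/HBASE

The response is JSON with the overall service state plus links to the per-component state of the Master and RegionServers.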


Contributor

I know this is an older thread, but I had this issue recently with HDP 2.6.0.

This error can also happen because region servers failed to start.

A common cause is that the system clock on one or more region servers does not match the system clock on the HBase Master, so the RegionServer service will not start.
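A quick way to check for skew across nodes (a sketch; the host names are placeholders, and whether you run ntpd or chrony depends on your install):

# Compare wall-clock time across the Master and region servers
for h in master.example.com rs1.example.com rs2.example.com; do
  ssh "$h" date +%s
done

# If NTP is in use, query the local daemon's sync status
ntpstat || chronyc tracking

HBase enforces this via the hbase.master.maxclockskew setting (30 seconds by default), so even modest drift is enough to keep a RegionServer from joining the cluster.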