org.apache.hadoop.hbase.PleaseHoldException: Master is initializing error

Explorer
Hello,
I am installing HDP 2.4 on 3 servers on AWS. Everything goes well until the last step of the deployment, where the HBase service check fails with the following log:


2016-09-01 09:20:07,334 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-01 09:20:07,345 - File['/var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh'] {'content': StaticFile('hbaseSmokeVerify.sh'), 'mode': 0755}
2016-09-01 09:20:07,357 - File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] {'content': Template('hbase-smoke.sh.j2'), 'mode': 0755}
2016-09-01 09:20:07,358 - Writing File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] because contents don't match
2016-09-01 09:20:07,359 - Execute[' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id1faca91a_date200116 /usr/hdp/current/hbase-client/bin/hbase'] {'logoutput': True, 'tries': 6, 'user': 'ambari-qa', 'try_sleep': 5}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

ERROR: Can't get the locations

Here is some help for this command:
Start disable of named table:
  hbase> disable 't1'
  hbase> disable 'ns1:t1'
ERROR: Can't get the locations

Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
	at org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:2329)
	at org.apache.hadoop.hbase.master.HMaster.ensureNamespaceExists(HMaster.java:2522)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1527)
	at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:454)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55401)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
	at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily 
including NAME attribute. 
Examples:

Create a table with namespace=ns1 and table qualifier=t1
  hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}

Create a table with namespace=default and table qualifier=t1
  hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
  hbase> # The above in shorthand would be the following:
  hbase> create 't1', 'f1', 'f2', 'f3'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
  hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
  
Table configuration options can be put at the end.
Examples:

  hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
  hbase> # Optionally pre-split the table into NUMREGIONS, using
  hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', REGION_REPLICATION => 2, CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}

You can also keep around a reference to the created table:

  hbase> t1 = create 't1', 'f1'

Which gives you a reference to the table named 't1', on which you can then
call methods.

Despite this problem, I can still finish the deployment, go into the Ambari web dashboard, and see that HBase is reported as started.

But when I go into the HBase shell (entering the shell succeeds) and then try to create a table (or run status), I get:

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

And when I check the HBase log on my server, it says:

ERROR [Thread-68] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.

I've already tried some methods mentioned on the internet, like:

1.

Stop HBase on the cluster first, then restart the daemons in a certain sequence: first the RegionServer on all nodes, then the HMaster (a sketch of this sequence follows the list).

2.

Stop HBase and ZooKeeper, wipe out the version-2 directory under the ZooKeeper dataDir, then restart ZooKeeper and HBase.
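
For reference, a minimal sketch of the restart sequence from method 1, assuming the stock HDP layout under /usr/hdp/current and the hbase service user (the paths, conf dirs and user are assumptions; adjust for your cluster):

# on every RegionServer node: restart the regionserver first
$ sudo -u hbase /usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf stop regionserver
$ sudo -u hbase /usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf start regionserver

# then on the master node: restart the HMaster last
$ sudo -u hbase /usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-master/conf stop master
$ sudo -u hbase /usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-master/conf start master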

With these two solutions I still have the same problem. Can someone help me?

19 Replies

Master Collaborator

@dengke li

Can you attach the master log?

You should find it under /grid/0/log/hbase/

Consider increasing hbase.master.namespace.init.timeout

Default is 300000 ms
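
If you go that route, the property takes a value in milliseconds in hbase-site.xml; a minimal sketch (the 900000 value here is only an illustrative bump over the 300000 default):

<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>900000</value>
</property>

The HBase master has to be restarted for the change to take effect.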

Explorer

Hello,

I use HDP 2.4, and the HBase master log is under /var/log/hbase, not under /grid/0/log/hbase/. In the HBase master log there is:

2016-09-01 14:19:49,843 ERROR [Thread-68] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.

How do I increase hbase.master.namespace.init.timeout, and in which configuration file?

Explorer

In the HBase log there is also:

2016-09-01 14:38:37,876 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics

but I think it is not a problem of HBase itself.

Master Collaborator

Can you attach the master log so that we can better help you?

You can add hbase.master.namespace.init.timeout to hbase-site.xml by using Ambari.

Still, finding the root cause is desirable.
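
In Ambari this typically means Services > HBase > Configs, adding the property under the Custom hbase-site section, saving, and then restarting HBase; the exact menu labels may differ slightly between Ambari versions.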

Explorer

Here is the log of the HBase master:

/var/log/hbase$ tail -f hbase-hbase-master-ec2-13-26-256-150.eu-central-1.compute.amazonaws.com.log
2016-09-01 16:17:34,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:17:34,703 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:14,705 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:14,706 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,703 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,705 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,709 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:44,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:54,707 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics

Explorer

In fact, I first used the command:

sudo -u hdfs hdfs dfsadmin -safemode forceExit

to force the NameNode to leave safe mode. I don't know whether this affects HBase or not.
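
For reference, the NameNode's safe mode state can be checked non-destructively before (or instead of) forcing an exit:

$ sudo -u hdfs hdfs dfsadmin -safemode get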

Master Collaborator

We need more of the master log beyond what tail showed us.

Please attach the master log.

Explorer

Hello,

when I install HDP, there is an error in the HBase service check. Can someone help me? The log is:

2016-09-10 12:34:21,313 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-10 12:34:21,320 - File['/var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh'] {'content': StaticFile('hbaseSmokeVerify.sh'), 'mode': 0755}
2016-09-10 12:34:21,326 - File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] {'content': Template('hbase-smoke.sh.j2'), 'mode': 0755}
2016-09-10 12:34:21,327 - Writing File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] because contents don't match
2016-09-10 12:34:21,327 - Execute[' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id1fac351e_date341016 /usr/hdp/current/hbase-client/bin/hbase'] {'logoutput': True, 'tries': 6, 'user': 'ambari-qa', 'try_sleep': 5}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

ERROR: Table ambarismoketest does not exist.

Here is some help for this command:
Start disable of named table:
  hbase> disable 't1'
  hbase> disable 'ns1:t1'
ERROR: Table ambarismoketest does not exist.

Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'
0 row(s) in 28.2970 seconds

2016-09-10 12:34:58,865 ERROR [main] client.AsyncProcess: Failed to get region location 
org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1299)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1449)
	at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1040)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
	at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
	at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
	at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallManyArgsNode.interpret(CallManyArgsNode.java:59)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
	at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
	at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
	at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
	at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
	at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
	at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
	at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:153)
	at org.jruby.ast.FCallNoArgBlockNode.interpret(FCallNoArgBlockNode.java:32)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
	at org.jruby.RubyClass.finvoke(RubyClass.java:573)
	at org.jruby.RubyBasicObject.send(RubyBasicObject.java:2801)
	at org.jruby.RubyKernel.send(RubyKernel.java:2117)
	at org.jruby.RubyKernel$s$send.call(RubyKernel$s$send.gen:65535)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:181)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
	at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
	at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
	at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
	at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
	at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
	at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
	at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:272)
	at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:80)
	at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:89)
	at org.jruby.ast.FCallSpecialArgBlockNode.interpret(FCallSpecialArgBlockNode.java:42)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
	at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
	at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69)
	at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
	at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.ast.RootNode.interpret(RootNode.java:129)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_ROOT(ASTInterpreter.java:119)
	at org.jruby.Ruby.runInterpreter(Ruby.java:724)
	at org.jruby.Ruby.loadFile(Ruby.java:2489)
	at org.jruby.runtime.load.ExternalScript.load(ExternalScript.java:66)
	at org.jruby.runtime.load.LoadService.load(LoadService.java:270)
	at org.jruby.RubyKernel.loadCommon(RubyKernel.java:1105)
	at org.jruby.RubyKernel.load(RubyKernel.java:1087)
	at org.jruby.RubyKernel$s$0$1$load.call(RubyKernel$s$0$1$load.gen:65535)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:211)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:207)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
	at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.__file__(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb:177)
	at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.load(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb)
	at org.jruby.Ruby.runScript(Ruby.java:697)
	at org.jruby.Ruby.runScript(Ruby.java:690)
	at org.jruby.Ruby.runNormally(Ruby.java:597)
	at org.jruby.Ruby.runFromMain(Ruby.java:446)
	at org.jruby.Main.doRunFromMain(Main.java:369)
	at org.jruby.Main.internalRun(Main.java:258)
	at org.jruby.Main.run(Main.java:224)
	at org.jruby.Main.run(Main.java:208)
	at org.jruby.Main.main(Main.java:188)

ERROR: Failed 1 action: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01: 1 time, 

Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates.  To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:

  hbase> put 'ns1:t1', 'r1', 'c1', 'value'
  hbase> put 't1', 'r1', 'c1', 'value'
  hbase> put 't1', 'r1', 'c1', 'value', ts1
  hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:

  hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
ROW  COLUMN+CELL

ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row 

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Some examples:

  hbase> scan 'hbase:meta'
  hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
  hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
  hbase> scan 't1', {REVERSED => true}
  hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
    (QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
  hbase> scan 't1', {FILTER =>
    org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
  hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes 
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false).  By
default it is enabled.  Examples:

  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.  Example:

  hbase> scan 't1', {RAW => true, VERSIONS => 10}

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column.  A user can define a FORMATTER by adding it to the column name in
the scan specification.  The FORMATTER can be stipulated: 

 1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
 2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers: 
  hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
    'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] } 

Note that you can specify a FORMATTER by column only (cf:qualifier).  You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:

  hbase> t = get_table 't'
  hbase> t.scan

Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.4.2.0-258, rUnknown, Mon Apr 25 06:07:29 UTC 2016

scan 'ambarismoketest'
ROW  COLUMN+CELL

ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row 

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Some examples:

  hbase> scan 'hbase:meta'
  hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
  hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
  hbase> scan 't1', {REVERSED => true}
  hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
    (QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
  hbase> scan 't1', {FILTER =>
    org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
  hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes 
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false).  By
default it is enabled.  Examples:

  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.  Example:

  hbase> scan 't1', {RAW => true, VERSIONS => 10}

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column.  A user can define a FORMATTER by adding it to the column name in
the scan specification.  The FORMATTER can be stipulated: 

 1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
 2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers: 
  hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
    'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] } 

Note that you can specify a FORMATTER by column only (cf:qualifier).  You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:

  hbase> t = get_table 't'
  hbase> t.scan

Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
Looking for id1fac351e_date341016
2016-09-10 12:35:12,865 - Retrying after 5 seconds. Reason: Execution of ' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id1fac351e_date341016 /usr/hdp/current/hbase-client/bin/hbase' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

ERROR: Table ambarismoketest does not exist.

Here is some help for this command:
Start disable of named table:
  hbase> disable 't1'
  hbase> disable 'ns1:t1'
ERROR: Table ambarismoketest does not exist.

Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'
0 row(s) in 28.2970 seconds

2016-09-10 12:34:58,865 ERROR [main] client.AsyncProcess: Failed to get region location 
org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1299)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1449)
	at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1040)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
	at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
	at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
	at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallManyArgsNode.interpret(CallManyArgsNode.java:59)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
	at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
	at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
	at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
	at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
	at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
	at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
	at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:153)
	at org.jruby.ast.FCallNoArgBlockNode.interpret(FCallNoArgBlockNode.java:32)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
	at org.jruby.RubyClass.finvoke(RubyClass.java:573)
	at org.jruby.RubyBasicObject.send(RubyBasicObject.java:2801)
	at org.jruby.RubyKernel.send(RubyKernel.java:2117)
	at org.jruby.RubyKernel$s$send.call(RubyKernel$s$send.gen:65535)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:181)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
	at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
	at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
	at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
	at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
	at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
	at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
	at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:272)
	at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:80)
	at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:89)
	at org.jruby.ast.FCallSpecialArgBlockNode.interpret(FCallSpecialArgBlockNode.java:42)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
	at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
	at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69)
	at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
	at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
	at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
	at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
	at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
	at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
	at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
	at org.jruby.ast.RootNode.interpret(RootNode.java:129)
	at org.jruby.evaluator.ASTInterpreter.INTERPRET_ROOT(ASTInterpreter.java:119)
	at org.jruby.Ruby.runInterpreter(Ruby.java:724)
	at org.jruby.Ruby.loadFile(Ruby.java:2489)
	at org.jruby.runtime.load.ExternalScript.load(ExternalScript.java:66)
	at org.jruby.runtime.load.LoadService.load(LoadService.java:270)
	at org.jruby.RubyKernel.loadCommon(RubyKernel.java:1105)
	at org.jruby.RubyKernel.load(RubyKernel.java:1087)
	at org.jruby.RubyKernel$s$0$1$load.call(RubyKernel$s$0$1$load.gen:65535)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:211)
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:207)
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
	at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.__file__(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb:177)
	at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.load(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb)
	at org.jruby.Ruby.runScript(Ruby.java:697)
	at org.jruby.Ruby.runScript(Ruby.java:690)
	at org.jruby.Ruby.runNormally(Ruby.java:597)
	at org.jruby.Ruby.runFromMain(Ruby.java:446)
	at org.jruby.Main.doRunFromMain(Main.java:369)
	at org.jruby.Main.internalRun(Main.java:258)
	at org.jruby.Main.run(Main.java:224)
	at org.jruby.Main.run(Main.java:208)
	at org.jruby.Main.main(Main.java:188)

ERROR: Failed 1 action: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01: 1 time, 

Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates.  To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:

  hbase> put 'ns1:t1', 'r1', 'c1', 'value'
  hbase> put 't1', 'r1', 'c1', 'value'
  hbase> put 't1', 'r1', 'c1', 'value', ts1
  hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
  hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}

The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:

  hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
ROW  COLUMN+CELL

ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row 

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Some examples:

  hbase> scan 'hbase:meta'
  hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
  hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
  hbase> scan 't1', {REVERSED => true}
  hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
    (QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
  hbase> scan 't1', {FILTER =>
    org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
  hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes 
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false).  By
default it is enabled.  Examples:

  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.  Example:

  hbase> scan 't1', {RAW => true, VERSIONS => 10}

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column.  A user can define a FORMATTER by adding it to the column name in
the scan specification.  The FORMATTER can be stipulated: 

 1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
 2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers: 
  hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
    'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] } 

Note that you can specify a FORMATTER by column only (cf:qualifier).  You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:

  hbase> t = get_table 't'
  hbase> t.scan

Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.4.2.0-258, rUnknown, Mon Apr 25 06:07:29 UTC 2016

scan 'ambarismoketest'
ROW  COLUMN+CELL

ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row 

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS

If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.

The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.

Some examples:

  hbase> scan 'hbase:meta'
  hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
  hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
  hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
  hbase> scan 't1', {REVERSED => true}
  hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
    (QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
  hbase> scan 't1', {FILTER =>
    org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
  hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes 
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
  hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false).  By
default it is enabled.  Examples:

  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}

Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default.  Example:

  hbase> scan 't1', {RAW => true, VERSIONS => 10}

Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column.  A user can define a FORMATTER by adding it to the column name in
the scan specification.  The FORMATTER can be stipulated: 

 1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
 2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers: 
  hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
    'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] } 

Note that you can specify a FORMATTER by column only (cf:qualifier).  You cannot
specify a FORMATTER for all columns of a column family.

Scan can also be used directly from a table, by first getting a reference to a
table, like such:

  hbase> t = get_table 't'
  hbase> t.scan

Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
Looking for id1fac351e_date341016
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 row(s) in 1.5310 seconds


Contributor

This worked for me. (Do this at your own risk.)

Restart ZooKeeper and HBase.

Clean the WALs and the ZooKeeper /hbase-unsecure znode:

$ sudo -u hdfs hdfs dfs -rm -r /apps/hbase/data/WALs/

$ zookeeper-client rmr /hbase-unsecure/rs

Restart HBase
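
If you try this, note that the znode parent is cluster-specific (it is set by zookeeper.znode.parent in hbase-site.xml and is commonly /hbase, /hbase-unsecure or /hbase-secure), so it may be worth listing it first to confirm the path before removing anything:

$ zookeeper-client ls /
$ zookeeper-client ls /hbase-unsecure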

Contributor

Hi @linuslukia or anyone else... can you PLEASE tell us what the solution to this problem is?

I also tried the last advice, deleting the WALs and removing /hbase-unsecure/rs, then restarted ZooKeeper and HBase, and it didn't work.