Member since 08-17-2016 · 11 Posts · 0 Kudos Received · 0 Solutions
09-10-2016 12:42 PM
Hello, when I install HDP, the HBase service check fails with an error. The log is below; can someone help me?
2016-09-10 12:34:21,313 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-10 12:34:21,320 - File['/var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh'] {'content': StaticFile('hbaseSmokeVerify.sh'), 'mode': 0755}
2016-09-10 12:34:21,326 - File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] {'content': Template('hbase-smoke.sh.j2'), 'mode': 0755}
2016-09-10 12:34:21,327 - Writing File['/var/lib/ambari-agent/tmp/hbase-smoke.sh'] because contents don't match
2016-09-10 12:34:21,327 - Execute[' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id1fac351e_date341016 /usr/hdp/current/hbase-client/bin/hbase'] {'logoutput': True, 'tries': 6, 'user': 'ambari-qa', 'try_sleep': 5}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR: Table ambarismoketest does not exist.
Here is some help for this command:
Start disable of named table:
hbase> disable 't1'
hbase> disable 'ns1:t1'
ERROR: Table ambarismoketest does not exist.
Here is some help for this command:
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'
0 row(s) in 28.2970 seconds
2016-09-10 12:34:58,865 ERROR [main] client.AsyncProcess: Failed to get region location
org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1299)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:370)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:321)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1449)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1040)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:311)
at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:59)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.CallManyArgsNode.interpret(CallManyArgsNode.java:59)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:153)
at org.jruby.ast.FCallNoArgBlockNode.interpret(FCallNoArgBlockNode.java:32)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
at org.jruby.RubyClass.finvoke(RubyClass.java:573)
at org.jruby.RubyBasicObject.send(RubyBasicObject.java:2801)
at org.jruby.RubyKernel.send(RubyKernel.java:2117)
at org.jruby.RubyKernel$s$send.call(RubyKernel$s$send.gen:65535)
at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:181)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:111)
at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:374)
at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:295)
at org.jruby.runtime.InterpretedBlock.yieldSpecific(InterpretedBlock.java:229)
at org.jruby.runtime.Block.yieldSpecific(Block.java:99)
at org.jruby.ast.ZYieldNode.interpret(ZYieldNode.java:25)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:165)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:272)
at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:80)
at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:89)
at org.jruby.ast.FCallSpecialArgBlockNode.interpret(FCallSpecialArgBlockNode.java:42)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.RescueNode.executeBody(RescueNode.java:216)
at org.jruby.ast.RescueNode.interpretWithJavaExceptions(RescueNode.java:120)
at org.jruby.ast.RescueNode.interpret(RescueNode.java:110)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69)
at org.jruby.ast.FCallSpecialArgNode.interpret(FCallSpecialArgNode.java:45)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.CallSpecialArgNode.interpret(CallSpecialArgNode.java:73)
at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:120)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:134)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:174)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:282)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:71)
at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.ast.RootNode.interpret(RootNode.java:129)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_ROOT(ASTInterpreter.java:119)
at org.jruby.Ruby.runInterpreter(Ruby.java:724)
at org.jruby.Ruby.loadFile(Ruby.java:2489)
at org.jruby.runtime.load.ExternalScript.load(ExternalScript.java:66)
at org.jruby.runtime.load.LoadService.load(LoadService.java:270)
at org.jruby.RubyKernel.loadCommon(RubyKernel.java:1105)
at org.jruby.RubyKernel.load(RubyKernel.java:1087)
at org.jruby.RubyKernel$s$0$1$load.call(RubyKernel$s$0$1$load.gen:65535)
at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:211)
at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:207)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.__file__(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb:177)
at usr.hdp.$2_dot_4_dot_2_dot_0_minus_258.hbase.bin.hirb.load(/usr/hdp/2.4.2.0-258/hbase/bin/hirb.rb)
at org.jruby.Ruby.runScript(Ruby.java:697)
at org.jruby.Ruby.runScript(Ruby.java:690)
at org.jruby.Ruby.runNormally(Ruby.java:597)
at org.jruby.Ruby.runFromMain(Ruby.java:446)
at org.jruby.Main.doRunFromMain(Main.java:369)
at org.jruby.Main.internalRun(Main.java:258)
at org.jruby.Main.run(Main.java:224)
at org.jruby.Main.run(Main.java:208)
at org.jruby.Main.main(Main.java:188)
ERROR: Failed 1 action: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row row01: 1 time,
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
ROW COLUMN+CELL
ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row
Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS
If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.
The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.
Some examples:
hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default. Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.
Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot
specify a FORMATTER for all columns of a column family.
Scan can also be used directly from a table, by first getting a reference to a
table, like such:
hbase> t = get_table 't'
hbase> t.scan
Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.4.2.0-258, rUnknown, Mon Apr 25 06:07:29 UTC 2016
scan 'ambarismoketest'
ROW COLUMN+CELL
ERROR: No server address listed in hbase:meta for region ambarismoketest,,1473510866427.7b288aad9960fe5563740fc3a901985d. containing row
Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS
If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.
The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.
Some examples:
hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default. Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.
Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot
specify a FORMATTER for all columns of a column family.
Scan can also be used directly from a table, by first getting a reference to a
table, like such:
hbase> t = get_table 't'
hbase> t.scan
Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
Looking for id1fac351e_date341016
2016-09-10 12:35:12,865 - Retrying after 5 seconds. Reason: Execution of ' /usr/hdp/current/hbase-client/bin/hbase --config /usr/hdp/current/hbase-client/conf shell /var/lib/ambari-agent/tmp/hbase-smoke.sh && /var/lib/ambari-agent/tmp/hbaseSmokeVerify.sh /usr/hdp/current/hbase-client/conf id1fac351e_date341016 /usr/hdp/current/hbase-client/bin/hbase' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 row(s) in 1.5310 seconds
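I guess I can check whether the smoke-test region ever got assigned by looking at the info:server column in hbase:meta, since the error above says no server address is listed there. Something like this from the client host, if that is the right way to check:
# Scan hbase:meta near the ambarismoketest entries; an assigned region should show
# a host:port value under info:server, an unassigned one shows nothing.
echo "scan 'hbase:meta', {STARTROW => 'ambarismoketest', LIMIT => 5}" | hbase shell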
09-01-2016 04:23 PM
Here is the log of the HBase Master:
/var/log/hbase$ tail -f hbase-hbase-master-ec2-13-26-256-150.eu-central-1.compute.amazonaws.com.log
2016-09-01 16:17:34,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:17:34,703 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:14,705 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:14,706 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,703 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,705 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:34,709 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:44,702 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
2016-09-01 16:18:54,707 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics
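By the way, the doubled http://http:// in those warnings makes me think the collector address in the metrics sink configuration already contains the protocol. I suppose I can look at what is configured like this (assuming the usual HDP config location; please correct me if the file lives elsewhere):
# Show the timeline metrics collector address configured for the HBase sink.
grep -i collector /etc/hbase/conf/hadoop-metrics2-hbase.properties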
09-01-2016 04:18 PM
In fact, I first used the command sudo -u hdfs hdfs dfsadmin -safemode forceExit to make the NameNode leave safe mode; I don't know whether that affects HBase or not.
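To double-check, I suppose I can query the current safe mode state like this (standard dfsadmin, nothing HBase-specific):
# Should print "Safe mode is OFF" if the NameNode has really left safe mode.
sudo -u hdfs hdfs dfsadmin -safemode get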
09-01-2016 03:10 PM
And in the HBase log there is also: 2016-09-01 14:38:37,876 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://http://ec2-52-29-246-252.eu-central-1.compute.amazonaws.com:6188/ws/v1/timeline/metrics, but I think that is not an HBase problem.
09-01-2016 02:40 PM
Hello, I use HDP 2.4, and the HBase Master log is under /var/log/hbase, not under /grid/0/log/hbase/. In the HBase Master log there is: 2016-09-01 14:19:49,843 ERROR [Thread-68] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process. How do I increase hbase.master.namespace.init.timeout, and in which configuration file?
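If I understand correctly, on an Ambari-managed cluster the property would go into Custom hbase-site (Ambari > HBase > Configs), which Ambari then writes into /etc/hbase/conf/hbase-site.xml, followed by a restart of the HBase Master. The value below is only a guess for illustration, not a recommendation:
# Check whether the property is already present in the deployed config:
grep -A1 hbase.master.namespace.init.timeout /etc/hbase/conf/hbase-site.xml
# The entry in hbase-site.xml would look like:
#   <property>
#     <name>hbase.master.namespace.init.timeout</name>
#     <value>2400000</value>
#   </property>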
09-01-2016 12:46 PM
Hello, I use the Java in /usr/jdk64/jdk1.8.0_60/bin and then run the command sudo -u hbase jstack 13711 > ~/1.txt. The content of 1.txt is attached: 1.txt
09-01-2016 12:07 PM
So if I use sudo jstack -F 13711 > ~/1.txt, the result is:
Attaching to process ID 13711, please wait...
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.tools.jstack.JStack.runJStackTool(JStack.java:136)
at sun.tools.jstack.JStack.main(JStack.java:102)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 24.95-b01. Target VM is 25.60-b23
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:234)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:297)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:368)
at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:598)
at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:493)
at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:331)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:163)
at sun.jvm.hotspot.tools.JStack.main(JStack.java:86)
... 6 more
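From the VMVersionMismatchException it looks like this jstack comes from a different JDK (HotSpot 24.x is JDK 7) than the one running the HMaster (25.60 is JDK 8). I assume the right thing is to run the jstack bundled with the same JDK 8 that launched the master, as the hbase user, for example:
# Use the jstack from the JDK that runs the HMaster, so -F is not needed.
sudo -u hbase /usr/jdk64/jdk1.8.0_60/bin/jstack 13711 > ~/1.txt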
09-01-2016 12:02 PM
I use ps aux | grep HMaster to show the process ID of the HMaster, and I can see:
hbase 13711 37.9 3.3 2844668 274072 ? Sl 09:19 59:23 /usr/jdk64/jdk1.8.0_60/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Dhdp.version=2.4.2.0-258 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.io.tmpdir=/tmp -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201609010919 -Xmx1024m -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-master-ec2-13-26-256-150.eu-central-1.compute.amazonaws.com.log -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start
ubuntu 20760 0.0 0.0 10436 888 pts/2 S+ 11:56 0:00 grep --color=auto HMaster
Then I tried both 13711 and 20760, e.g. sudo jstack 13711 > ~/1.txt, and the result is:
13711: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
Can you help me?
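By the way, I think the second PID (20760) is just the grep command matching itself, so 13711 is the only real HMaster process. A way to avoid the self-match:
# Print only the HMaster PID; [H] stops grep from matching its own command line.
ps aux | grep '[H]Master' | awk '{print $2}'
# or simply:
pgrep -f HMaster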
09-01-2016 10:05 AM
When I create a table or type status in the HBase shell, I see: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
at org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:2329)
at org.apache.hadoop.hbase.master.HMaster.ensureNamespaceExists(HMaster.java:2522)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1527)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:454)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55401)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:
Create a table with namespace=ns1 and table qualifier=t1
hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}
Create a table with namespace=default and table qualifier=t1
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Table configuration options can be put at the end.
Examples:
hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', REGION_REPLICATION => 2, CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
You can also keep around a reference to the created table:
hbase> t1 = create 't1', 'f1'
Which gives you a reference to the table named 't1', on which you can then
call methods.
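Not sure if it helps, but I suppose I can also run a read-only hbck to see whether the regions and the hbase:meta entries are consistent while the master is in this state:
# hbck without repair flags only reports inconsistencies, it does not change anything.
sudo -u hbase hbase hbck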
09-01-2016 10:02 AM
Hello, as I already indicated in the question, my HBase Master log contains the following:
ERROR [Thread-68] master.HMaster: Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
As for taking "multiple jstacks during initialization of the master", I don't know where to get them.
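If "multiple jstacks" just means taking several thread dumps a few seconds apart while the master is initializing, I suppose something like this would do it (using the JDK 8 jstack path from my earlier posts; please correct me if there is a better way):
# Take 5 thread dumps of the HMaster, 10 seconds apart, into separate files.
for i in 1 2 3 4 5; do
  sudo -u hbase /usr/jdk64/jdk1.8.0_60/bin/jstack 13711 > ~/hmaster-jstack-$i.txt
  sleep 10
done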