Member since: 01-06-2016
Posts: 54
Kudos Received: 15
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 8050 | 06-24-2016 06:18 AM
 | 807 | 03-18-2016 12:40 PM
 | 6393 | 03-18-2016 06:28 AM
 | 4528 | 03-08-2016 10:02 AM
08-08-2018
06:03 AM
This error occurs because the symbolic link for the HBase client jar is broken, meaning the jar is not present on that node. Verify with: ll /usr/hdp/current/hbase-client/lib/hbase-client.jar. Copy the HBase client jar from another node to the non-working node at /usr/hdp/current/hbase-client/lib/hbase-client-*.jar. Note that if other jars are missing as well, copy all the necessary missing jars from another node under /usr/hdp/current/.
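A minimal sketch of the check and copy (my addition, not from the original post; good-node is a hypothetical hostname, and the paths follow the HDP layout described above):

```
# On the broken node: confirm whether the client jar / symlink target exists
ls -l /usr/hdp/current/hbase-client/lib/hbase-client.jar

# Pull the jar(s) over from a healthy node (good-node is a placeholder)
scp good-node:/usr/hdp/current/hbase-client/lib/hbase-client-*.jar \
    /usr/hdp/current/hbase-client/lib/
```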
08-24-2016
01:43 PM
Thanks a lot @Victor Xu. All points are clear.
08-23-2016
09:05 AM
Hi @Victor Xu, thanks. I understand your point. I have a couple of questions to understand the scenario more clearly:
1. If I put data in the temporary HBase cluster during the main cluster's downtime, how will I merge the data from the temporary cluster back into the main cluster once it is up and running?
2. When I restore data from the HDFS HFile location to a new location, how will I recover the memstore data?
3. If I shut down and restart the HBase service, is the memstore data flushed to HDFS HFiles at that time?
Thanks, Raja
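For background on question 3 (my addition, not from the thread): a memstore flush can also be forced manually from the HBase shell before a planned shutdown, so the in-memory data lands in HFiles on HDFS. A minimal sketch, using the CUTOFF2 table from this thread:

```
hbase shell
flush 'CUTOFF2'   # force the memstore for CUTOFF2 to be written out as HFiles
```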
08-22-2016
03:15 PM
Hi @Victor Xu, I followed your steps and it is working fine, but I needed to restart HBase. Can you please suggest another way where I don't need to restart the HBase service? Thanks, Raja
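One possible way to avoid a full restart (my suggestion, not confirmed in this thread) is to close and reopen only the affected region from the HBase shell, so it re-reads its store files. A sketch using the encoded region name from the original post below; note that some shell versions expect the full region name here instead:

```
hbase shell
unassign '8f1aff44991e1a08c6a6bbf9c2546cf6'   # close the region
assign '8f1aff44991e1a08c6a6bbf9c2546cf6'     # reopen it, reloading store files
```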
08-22-2016
06:36 AM
Thanks Victor. I will follow your steps and let you know.
08-22-2016
04:01 AM
1 Kudo
My old HDFS data directory: /apps/hbase/data
My new HDFS data directory: /apps/hbase/data2
HBase table name: CUTOFF2, created with: create 'CUTOFF2', {NAME => '1'}

I am doing the following steps to recover the data, but it is not working. Please tell me where I am wrong:

hadoop fs -ls /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2
hadoop fs -ls /apps/hbase/data2/data/default/CUTOFF2/8f1aff44991e1a08c6a6bbf9c2546cf6/1
put 'CUTOFF2', 'samplerow', '1:1', 'sampledata'
count 'CUTOFF2'
su - hbase
hadoop fs -cp /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2 /apps/hbase/data2/data/default/CUTOFF2/8f1aff44991e1a08c6a6bbf9c2546cf6/1
major_compact 'CUTOFF2'

Please correct my steps so the recovery works.
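An alternative that avoids copying files directly into a live region directory (my suggestion, not from this thread): HBase's bulk-load tool registers existing HFiles with the running table, so no restart or manual major compaction is needed. A sketch, assuming the old HFile is first staged under a scratch directory laid out as column-family subdirectories (/tmp/cutoff2-restore/1 is a hypothetical path; '1' matches the column family above):

```
# Stage the old HFile under a directory named after the column family ('1')
hadoop fs -mkdir -p /tmp/cutoff2-restore/1
hadoop fs -cp /apps/hbase/data/data/default/CUTOFF2/4c8d68c329cdb6d73d4094fd64e5e37d/1/d321dfcd3b1245d2b5cc2ec1aab3a9f2 \
    /tmp/cutoff2-restore/1/

# Bulk-load the staged HFiles into the live CUTOFF2 table
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/cutoff2-restore CUTOFF2
```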
Labels: Apache HBase
08-08-2016
02:22 PM
I am unable to scan an HBase table and am getting the following error. How can I recover the table?

scan 'CUTOFF8'

ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region CUTOFF8,,1465897349742.2077c5dfbfb97d67f09120e4b9cdc15a. is not online on data1.corp.mirrorplus.com,16020,1470665536454
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2235)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

HBase master log:

2016-08-08 09:04:45,112 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 12866.800703353128 msec.
2016-08-08 09:04:57,979 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
2016-08-08 09:04:57,979 WARN [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS Read
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:945)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:604)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
at java.io.DataInputStream.read(DataInputStream.java:100)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:737)
at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.<init>(HBaseProtos.java:10616)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.<init>(HBaseProtos.java:10580)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10694)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10689)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.parseFrom(HBaseProtos.java:11177)
at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:307)
at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.getHFileNames(SnapshotReferenceUtil.java:328)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner$1.filesUnderSnapshot(SnapshotHFileCleaner.java:85)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:281)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.getUnreferencedFiles(SnapshotFileCache.java:187)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner.getDeletableFiles(SnapshotHFileCleaner.java:62)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:233)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
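A common first diagnostic for a NotServingRegionException (my addition, not from this thread) is HBase's hbck tool, which reports table/region inconsistencies and can reattempt region assignment. A minimal sketch for HBase 1.x:

```
# Report inconsistencies (read-only check)
hbase hbck

# If regions are reported as not deployed, try re-assigning them
hbase hbck -fixAssignments
```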
Labels: Apache HBase
06-24-2016
06:18 AM
Hi, I found out the problem. One of the data nodes got rebooted; that is why this kind of log was written. Thanks.
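To verify that all blocks are healthy again after such a reboot (my addition, not from the thread), HDFS's fsck can be run against the HBase root directory. A sketch, using the /apps/hbase/data1 path from the log above:

```
# Check block health of the HBase data directory; should report "Status: HEALTHY"
hdfs fsck /apps/hbase/data1 -files -blocks -locations
```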