<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Unable to scan hbase table in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168941#M37200</link>
    <description>&lt;P&gt;
	A few things:&lt;/P&gt;&lt;P&gt;
	It looks like your HDFS is not healthy. Can you run `hdfs fsck` on the file "/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo"?&lt;/P&gt;&lt;P&gt;
	Regarding the offline region "CUTOFF8,,1465897349742.2077c5dfbfb97d67f09120e4b9cdc15a.", you can check the RegionServer logs on data1.corp.mirrorplus.com. Also, you can try to grep the HBase master log for "2077c5dfbfb97d67f09120e4b9cdc15a", looking for the last "OPEN" location. The HBase master UI might also tell you if this region is stuck in transition (RIT). If so, `hbase hbck` should be able to help you.&lt;/P&gt;</description>
    <pubDate>Mon, 08 Aug 2016 21:23:16 GMT</pubDate>
    <dc:creator>elserj</dc:creator>
    <dc:date>2016-08-08T21:23:16Z</dc:date>
    <item>
      <title>Unable to scan hbase table</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168940#M37199</link>
      <description>&lt;P&gt;Unable to scan an HBase table; I am getting the following error. How can I recover the table?&lt;/P&gt;&lt;P&gt;scan 'CUTOFF8'&lt;/P&gt;&lt;P&gt;ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region CUTOFF8,,1465897349742.2077c5dfbfb97d67f09120e4b9cdc15a. is not online on data1.corp.mirrorplus.com,16020,1470665536454
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2235)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745) &lt;/P&gt;&lt;P&gt;hbase master log- &lt;/P&gt;&lt;P&gt;2016-08-08 09:04:45,112 WARN  [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 12866.800703353128 msec.
2016-08-08 09:04:57,979 WARN  [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
2016-08-08 09:04:57,979 WARN  [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
2016-08-08 09:04:57,979 WARN  [master1.corp.mirrorplus.com,16000,1470404494921_ChoreService_1] hdfs.DFSClient: DFS Read
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-838165258-10.1.1.94-1459246457024:blk_1073781564_40751 file=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:945)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:604)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
at java.io.DataInputStream.read(DataInputStream.java:100)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:737)
at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.&amp;lt;init&amp;gt;(HBaseProtos.java:10616)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.&amp;lt;init&amp;gt;(HBaseProtos.java:10580)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10694)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription$1.parsePartialFrom(HBaseProtos.java:10689)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.apache.hadoop.hbase.protobuf.generated.HBaseProtos$SnapshotDescription.parseFrom(HBaseProtos.java:11177)
at org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:307)
at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.getHFileNames(SnapshotReferenceUtil.java:328)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner$1.filesUnderSnapshot(SnapshotHFileCleaner.java:85)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:281)
at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.getUnreferencedFiles(SnapshotFileCache.java:187)
at org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner.getDeletableFiles(SnapshotHFileCleaner.java:62)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:233)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteDirectory(CleanerChore.java:180)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:149)
at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)&lt;/P&gt;</description>
      <pubDate>Mon, 08 Aug 2016 21:22:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168940#M37199</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-08-08T21:22:14Z</dc:date>
    </item>
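The BlockMissingException above points at a specific snapshot file on HDFS. A minimal sketch of how one might check that file's block health, assuming the `hdfs` client is on the PATH of a cluster node (the path is taken from the log lines above; the guard for a missing client is an illustration, not from the thread):

```shell
# Path reported in the master log's BlockMissingException.
SNAPSHOT_INFO=/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo

if command -v hdfs >/dev/null 2>&1; then
  # -files -blocks -locations reports, per block, which DataNodes hold a
  # replica, so a missing or corrupt block shows up explicitly.
  hdfs fsck "$SNAPSHOT_INFO" -files -blocks -locations
else
  echo "hdfs client not found; run this on a cluster node"
fi
```

If `fsck` reports the file as CORRUPT with no live replicas, the block is genuinely gone from all DataNodes, which matches the "No live nodes contain current block" warning in the log.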
    <item>
      <title>Re: Unable to scan hbase table</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168941#M37200</link>
      <description>&lt;P&gt;
	A few things:&lt;/P&gt;&lt;P&gt;
	It looks like your HDFS is not healthy. Can you run `hdfs fsck` on the file "/apps/hbase/data1/.hbase-snapshot/S3LINKS-AIM-SNAPSHOT-NEW/.snapshotinfo"?&lt;/P&gt;&lt;P&gt;
	Regarding the offline region "CUTOFF8,,1465897349742.2077c5dfbfb97d67f09120e4b9cdc15a.", you can check the RegionServer logs on data1.corp.mirrorplus.com. Also, you can try to grep the HBase master log for "2077c5dfbfb97d67f09120e4b9cdc15a", looking for the last "OPEN" location. The HBase master UI might also tell you if this region is stuck in transition (RIT). If so, `hbase hbck` should be able to help you.&lt;/P&gt;</description>
      <pubDate>Mon, 08 Aug 2016 21:23:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168941#M37200</guid>
      <dc:creator>elserj</dc:creator>
      <dc:date>2016-08-08T21:23:16Z</dc:date>
    </item>
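The grep suggested in the reply above can be sketched as follows. The sample log lines and the AssignmentManager message format are fabricated for illustration (a real master log lives under the HBase log directory, e.g. /var/log/hbase/); only the region hash and server name come from the thread:

```shell
# Encoded region name from the NotServingRegionException.
REGION=2077c5dfbfb97d67f09120e4b9cdc15a

# Stand-in for the real master log file (assumption for this sketch).
LOG=$(mktemp)
cat > "$LOG" <<EOF
2016-08-08 09:00:01 INFO AssignmentManager: Transitioned {$REGION state=OFFLINE} to {$REGION state=OPENING}
2016-08-08 09:00:02 INFO AssignmentManager: Transitioned {$REGION state=OPENING} to {$REGION state=OPEN, server=data1.corp.mirrorplus.com,16020,1470665536454}
EOF

# The last OPEN transition tells you where the region was last assigned,
# i.e. which RegionServer's logs to inspect next.
grep "$REGION" "$LOG" | grep "state=OPEN," | tail -n 1
```

On a real cluster you would point `LOG` at the actual master log and run the same two-stage grep; if nothing matches, check the master UI's regions-in-transition page instead.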
    <item>
      <title>Re: Unable to scan hbase table</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168942#M37201</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/1947/rajaray.html" nodeid="1947"&gt;@Raja Ray&lt;/A&gt;&lt;P&gt;How many nodes are in your cluster and how many are up? If nodes are up, then what about all HBase region server processes? Also follow Josh's suggestion and check the region server logs.&lt;/P&gt;</description>
      <pubDate>Mon, 08 Aug 2016 22:23:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168942#M37201</guid>
      <dc:creator>mqureshi</dc:creator>
      <dc:date>2016-08-08T22:23:31Z</dc:date>
    </item>
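A quick way to perform the process check suggested above on a single node is to look for the HRegionServer JVM in the process table (a sketch; on a healthy cluster node this finds exactly one such process, while on other machines it finds none):

```shell
# Count HRegionServer JVMs on this host. The [H] bracket trick keeps the
# grep from matching its own command line; "|| true" keeps a zero-match
# result (grep exit code 1) from aborting a script run under "set -e".
RS_COUNT=$(ps -ef | grep -c '[H]RegionServer' || true)

if [ "$RS_COUNT" -gt 0 ]; then
  echo "RegionServer is running on this host"
else
  echo "No RegionServer process found on this host"
fi
```

Repeating this on each node (or checking the master UI's list of live RegionServers) answers the question of whether all RegionServer processes are up.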
    <item>
      <title>Re: Unable to scan hbase table</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168943#M37202</link>
      <description>&lt;P&gt;Thanks Josh.&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2016 16:09:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-scan-hbase-table/m-p/168943#M37202</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-08-23T16:09:41Z</dc:date>
    </item>
  </channel>
</rss>

