
Seeing these errors in HBase RegionServer logs: "hdfs.BlockReaderFactory: I/O error constructing remote block reader" and "InvalidToken exception"

New Contributor

Hi,

I'm seeing the following errors in our RegionServer logs. Can someone help me resolve the issue?

2018-12-12 18:28:54,407 INFO  [B.defaultRpcServer.handler=12,queue=0,port=16020] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x3d630f8a): could not load 1093156401_BP-853897652-10.84.192.246-1489729943941 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/cyclops-edges/865c0549943a300f09f5dfcd63fbaa67/s/1979e37ca9294d24bd16cd580fb663ab



2018-12-12 18:09:57,326 INFO  [sync.2] wal.FSHLog: Slow sync cost: 105 ms, current pipeline: [DatanodeInfoWithStorage[10.84.197.254:50010,DS-159866b7-1b97-469f-9a22-4b03b3dbbe56,DISK], DatanodeInfoWithStorage[10.84.192.255:50010,DS-26463076-ed2d-4883-9013-2870ce87f281,DISK], DatanodeInfoWithStorage[10.84.192.76:50010,DS-b1b894fc-d1e2-4ccf-8cc7-7ccd649cd507,DISK]]
2018-12-12 18:09:57,391 WARN  [B.defaultRpcServer.handler=21,queue=0,port=16020] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error, status message opReadBlock BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530, for OP_READ_BLOCK, self=/10.84.197.254:13978, remote=/10.84.192.246:50010, for file /apps/hbase/data/data/default/cyclops-audits-dedup/e44738e4889089bcecf58b66878a7501/l/457245b1ea724f7f80bab245ea6c0604, for pool BP-853897652-10.84.192.246-1489729943941 block 1093248459_19508530
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:456)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:424)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:816)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
        at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1181)
        at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1118)
        at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1478)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1441)
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:722)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1420)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:441)
        at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642)
        at java.lang.Thread.run(Thread.java:745)
2018-12-12 18:09:57,391 WARN  [B.defaultRpcServer.handler=21,queue=0,port=16020] hdfs.DFSClient: Connection failure: Failed to connect to /10.84.192.246:50010 for file /apps/hbase/data/data/default/cyclops-audits-dedup/e44738e4889089bcecf58b66878a7501/l/457245b1ea724f7f80bab245ea6c0604 for block BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530:java.io.IOException: Got error, status message opReadBlock BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530, for OP_READ_BLOCK, self=/10.84.197.254:13978, remote=/10.84.192.246:50010, for file /apps/hbase/data/data/default/cyclops-audits-dedup/e44738e4889089bcecf58b66878a7501/l/457245b1ea724f7f80bab245ea6c0604, for pool BP-853897652-10.84.192.246-1489729943941 block 1093248459_19508530
java.io.IOException: Got error, status message opReadBlock BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-853897652-10.84.192.246-1489729943941:blk_1093248459_19508530, for OP_READ_BLOCK, self=/10.84.197.254:13978, remote=/10.84.192.246:50010, for file /apps/hbase/data/data/default/cyclops-audits-dedup/e44738e4889089bcecf58b66878a7501/l/457245b1ea724f7f80bab245ea6c0604, for pool BP-853897652-10.84.192.246-1489729943941 block 1093248459_19508530
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)

The underlying issue we are facing is very slow HBase compaction.
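For reference, here is a sketch of the diagnostic checks that seem relevant to the two errors above, using standard HDFS CLI commands. The file path is taken from the ReplicaNotFoundException message in the log; the configuration keys relate to the short-circuit-read path mentioned in the ShortCircuitCache message:

```shell
# Check whether the HFile's blocks and replicas reported in the log still exist
# (path copied from the ReplicaNotFoundException message above)
hdfs fsck /apps/hbase/data/data/default/cyclops-audits-dedup/e44738e4889089bcecf58b66878a7501/l/457245b1ea724f7f80bab245ea6c0604 \
    -files -blocks -locations

# Confirm whether short-circuit reads are enabled, which is the code path
# behind the ShortCircuitCache / InvalidToken message
hdfs getconf -confKey dfs.client.read.shortcircuit
hdfs getconf -confKey dfs.domain.socket.path
```

These commands require access to the cluster, so treat them as a starting point rather than a definitive diagnosis.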

HDP Version: HDP-2.4.3.0-227

HBase: 1.1.2.2.4

Thanks,

Shesh