Created 08-28-2014 09:29 AM
I am getting a lot of "I/O error constructing remote block reader" messages when performing batch file uploads to HBase:
java.io.IOException: Got error for OP_READ_BLOCK, self=/10.2.4.24:43598, remote=/10.2.4.21:50010, for file /user/hbase/.staging/job_1407795783485_1084/libjars/hbase-server-0.98.1-cdh5.1.0.jar, for pool BP-504567843-10.1.1.148-1389898314433 block 1075088397_1099513328611
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:397)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:786)
On the DataNode side, the corresponding log entry is:

c1d001.in.wellcentive.com:50010:DataXceiver error processing READ_BLOCK operation src: /10.2.4.24:43598 dest: /10.2.4.21:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-504567843-10.1.1.148-1389898314433:blk_1075088397_1099513328611
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:419)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:228)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:466)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:110)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:745)
I don't seem to be getting errors in the application processing itself, so I'm not sure whether this is a problem worth worrying about, but I would like to know what is causing it so I can keep my eyes peeled.
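In case it's useful, here is a sketch of how I could check whether the replicas for that jar are actually missing, using the standard hdfs fsck CLI (the path is the one from the exception above; fsck reports missing or corrupt block replicas):

```shell
# Check replica health and block locations for the jar named in the error
# (path copied from the exception above)
hdfs fsck /user/hbase/.staging/job_1407795783485_1084/libjars/hbase-server-0.98.1-cdh5.1.0.jar \
    -files -blocks -locations
```

One caveat: the .staging directory is cleaned up when the job completes, so if the job has already finished, the file will no longer exist by the time fsck runs.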