
I/O error constructing remote block reader.

Explorer

I am getting a lot of "I/O error constructing remote block reader" messages when performing batch file uploads to HBase:

java.io.IOException: Got error for OP_READ_BLOCK, self=/10.2.4.24:43598, remote=/10.2.4.21:50010, for file /user/hbase/.staging/job_1407795783485_1084/libjars/hbase-server-0.98.1-cdh5.1.0.jar, for pool BP-504567843-10.1.1.148-1389898314433 block 1075088397_1099513328611
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:397)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:786)


c1d001.in.wellcentive.com:50010:DataXceiver error processing READ_BLOCK operation src: /10.2.4.24:43598 dest: /10.2.4.21:50010
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-504567843-10.1.1.148-1389898314433:blk_1075088397_1099513328611
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:419)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:228)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:466)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:110)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
    at java.lang.Thread.run(Thread.java:745)

I don't seem to be getting errors during the app processing itself, so I am not sure whether this is a problem to worry about, but I would like to know what is causing it so I can keep an eye on it.

2 Replies

Explorer

I had a similar problem using Sqoop. Can someone respond to this, please? It's been a while.


Here is my error message:


2016-11-03 10:07:50,534 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.31:58178, remote=/192.168.1.34:50010, for file /user/(user profile name)/.staging/job_1478124814973_0001/libjars/commons-math-2.1.jar, for pool BP-15528599-192.168.1.31-1472851278753 block 1074078887_338652
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-11-03 10:07:50,541 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.1.34:50010 for block, add to deadNodes and continue. java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.31:58178, remote=/192.168.1.34:50010, for file /user/(user profile name)/.staging/job_1478124814973_0001/libjars/commons-math-2.1.jar, for pool BP-15528599-192.168.1.31-1472851278753 block 1074078887_338652
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.31:58178, remote=/192.168.1.34:50010, for file /user/(user profile name)/.staging/job_1478124814973_0001/libjars/commons-math-2.1.jar, for pool BP-15528599-192.168.1.31-1472851278753 block 1074078887_338652
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-11-03 10:07:50,543 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /192.168.1.33:50010 for BP-15528599-192.168.1.31-1472851278753:blk_1074078887_338652


My Sqoop job (running Sqoop 1) completes successfully, but this happens every time I run a Sqoop job. iptables is off, and port 50010 is listening on the server indicated. The node was added using the Cloudera wizard. Why is this happening, and how does one fix it?
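
As a basic sanity check that the DataNode port really is reachable from the client machine, something like the following minimal Java sketch can be run from the host that logged the error. The host and port defaults are taken from the log above and are otherwise assumptions; adjust them for your own nodes.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal TCP reachability check for a DataNode data-transfer port.
// Defaults come from the log above; pass host and port as arguments to override.
public class DataNodePortCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "192.168.1.34";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 50010;
        try (Socket socket = new Socket()) {
            // Connect with a 5-second timeout; fails fast if nothing is listening
            // or a firewall silently drops the connection attempt.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Reached " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Could not reach " + host + ":" + port + ": " + e.getMessage());
        }
    }
}

If this connects but the job still logs the warning, the issue is more likely stale block metadata than network reachability.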

Champion

We had this exception for a while, and it went away by itself.

As far as I can tell, this exception occurs when the NameNode's block locations are stale.

Check whether you have an HDFS block skew condition. If you see this often, it is a problem, because it clearly indicates that a block is missing; otherwise you can ignore it.
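
If you want to see which DataNodes the NameNode currently reports for a given file (to compare against the node named in the READ_BLOCK error), a minimal sketch using the standard Hadoop FileSystem API looks like this. The path below is just a placeholder; substitute the file from your own error message.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Prints the block locations the NameNode currently reports for a file,
// so they can be compared with the DataNode named in the READ_BLOCK error.
public class PrintBlockLocations {
    public static void main(String[] args) throws Exception {
        // Placeholder path -- replace with the file from your error message.
        Path path = new Path(args.length > 0 ? args[0] : "/user/hbase/some-file.jar");
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus status = fs.getFileStatus(path);
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                        block.getOffset(), block.getLength(),
                        String.join(",", block.getHosts()));
            }
        }
    }
}

Running hdfs fsck on the file (for example with the -files -blocks -locations options) gives similar information from the command line and will also report any missing or corrupt blocks.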