New Contributor
Posts: 1
Registered: ‎05-10-2016

Datanode socket timeout setting

Hi.  I'm new to Hadoop and Cloudera.  I have a 5-node cluster with 3 datanodes.  I have a third-party client program that opens HDFS files and writes data to them as it arrives in a stream.  On a timer, every 10 minutes, the client closes the files and opens new ones for writing.  Before the close can happen, the datanode socket connection times out with this error:

2016-05-10 14:17:20,165 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1298278955-172.31.1.79-1461125109305:blk_1073807048_66356
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.31.15.196:50010 remote=/172.31.1.81:57017]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:500)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:894)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:794)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
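For reference, here is a minimal sketch of the write pattern I described, using the plain HDFS Java API.  The path, roll interval, and nextRecord() helper are made up for illustration, and the actual third-party client may well do this differently:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RollingHdfsWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();          // picks up hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        long rollIntervalMs = 10 * 60 * 1000L;             // roll files every 10 minutes
        int fileIndex = 0;

        while (true) {
            Path path = new Path("/data/stream/part-" + fileIndex++);  // hypothetical path
            FSDataOutputStream out = fs.create(path);
            long rollAt = System.currentTimeMillis() + rollIntervalMs;

            while (System.currentTimeMillis() < rollAt) {
                byte[] record = nextRecord();               // stand-in for the incoming stream
                if (record != null) {
                    out.write(record);
                    out.hflush();                           // push the packet down the datanode pipeline
                } else {
                    Thread.sleep(1000);                     // idle gap: no packets reach the datanodes here
                }
            }
            out.close();                                    // this is the close that fails after the timeout
        }
    }

    private static byte[] nextRecord() {
        return null;                                        // placeholder; the real client reads its own source
    }
}

If the incoming data pauses for longer than the datanode's 60-second socket read timeout and nothing keeps the pipeline busy, the datanode logs the exception shown above while waiting for the next packet.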

 

Question: How do I change the 60000 millis timeout to a larger value?

 

I've tried dfs.datanode.socket.write.timeout and dfs.socket.timeout in the HDFS configuration through Cloudera Manager, with a configuration redeploy and cluster restart.  I've also tried adding these, plus dfs.client.socket-timeout, to hdfs-client.xml on the client side.  Nothing seems to affect the value that is actually used.
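To be clear about what I mean by "on the client side", this is the kind of override I'm after, sketched with the Hadoop Java API (the 300000 ms value is just an example, and I don't know whether the third-party client honours a Configuration like this):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class TimeoutOverrideSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Read timeout used by the HDFS client, in milliseconds.
        // ("dfs.socket.timeout" is the older, deprecated alias for this key.)
        conf.set("dfs.client.socket-timeout", "300000");

        // Write timeout the client uses towards the datanode pipeline, also in milliseconds.
        conf.set("dfs.datanode.socket.write.timeout", "300000");

        // Only the FileSystem built from this Configuration sees these overrides;
        // the datanodes keep whatever is in their own hdfs-site.xml.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
    }
}

My understanding is that the 60000 in the datanode log comes from the DataNode's own setting, so presumably it also has to change on the server side (in each DataNode's hdfs-site.xml), not just in the client.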

 

Thanks in advance.

-Bruce

New Contributor
Posts: 5
Registered: ‎05-09-2016

Re: Datanode socket timeout setting

Hi,

 

Did you get any resolution for this?  I am facing the same problem now.

 

Thanks

New Contributor
Posts: 10
Registered: ‎07-18-2018

Re: Datanode socket timeout setting

Please post your solution; I am facing the same issue.

New Contributor
Posts: 10
Registered: ‎07-18-2018

Re: Datanode socket timeout setting

I am facing the same issue.
