<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question WRITE_BLOCK Error in HDFS logs in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154708#M20663</link>
    <description>&lt;P&gt;We are using HDP 2.0. Recently we have been unable to write any new tables to it. All components look healthy in the Ambari web UI. In the master node HDFS logs we found the following error messages:&lt;/P&gt;&lt;PRE&gt;2016-02-23 17:25:09,985 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(698)) - Exception for BP-1706820793-10.86.36.8-1381941559687:blk_1080366074_6646021
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.86.36.8:50010 remote=/10.80.27.210:54210]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:429)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:102)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:662)


2016-02-23 17:25:09,985 ERROR datanode.DataNode (DataXceiver.java:run(225)) - dn01.nor1solutions.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.80.27.210:54210 dest: /10.86.36.8:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.86.36.8:50010 remote=/10.80.27.210:54210]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:429)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:102)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:662)

Can anyone help fix this?
Thanks!&lt;/PRE&gt;</description>
    <pubDate>Wed, 24 Feb 2016 04:39:30 GMT</pubDate>
    <dc:creator>jade_liu</dc:creator>
    <dc:date>2016-02-24T04:39:30Z</dc:date>
    <item>
      <title>WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154708#M20663</link>
      <description>&lt;P&gt;We are using HDP 2.0. Recently we have been unable to write any new tables to it. All components look healthy in the Ambari web UI. In the master node HDFS logs we found the following error messages:&lt;/P&gt;&lt;PRE&gt;2016-02-23 17:25:09,985 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(698)) - Exception for BP-1706820793-10.86.36.8-1381941559687:blk_1080366074_6646021
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.86.36.8:50010 remote=/10.80.27.210:54210]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:429)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:102)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:662)


2016-02-23 17:25:09,985 ERROR datanode.DataNode (DataXceiver.java:run(225)) - dn01.nor1solutions.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.80.27.210:54210 dest: /10.86.36.8:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.86.36.8:50010 remote=/10.80.27.210:54210]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:429)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:102)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
	at java.lang.Thread.run(Thread.java:662)

Can anyone help fix this?
Thanks!&lt;/PRE&gt;</description>
      <pubDate>Wed, 24 Feb 2016 04:39:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154708#M20663</guid>
      <dc:creator>jade_liu</dc:creator>
      <dc:date>2016-02-24T04:39:30Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154709#M20664</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/2065/jadeliu.html" nodeid="2065"&gt;@Jade Liu&lt;/A&gt;&lt;P&gt; There is issue in writing ... See &lt;A target="_blank" href="http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/47708"&gt;this&lt;/A&gt; thread &lt;/P&gt;&lt;P&gt;ERROR datanode.DataNode(DataXceiver.java:run(225))- dn01.nor1solutions.com:50010:DataXceiver error processing &lt;STRONG&gt;WRITE_BLOCK operation  s&lt;/STRONG&gt;rc:/10.80.27.210:54210 dest: /10.86.36.8:50010&lt;/P&gt;</description>
      <pubDate>Wed, 24 Feb 2016 07:51:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154709#M20664</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2016-02-24T07:51:31Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154710#M20665</link>
      <description>&lt;P&gt;Thanks @&lt;A href="https://community.hortonworks.com/users/140/nsabharwal.html"&gt;Neeraj Sabharwal&lt;/A&gt;! I've checked all the nodes in the RM web UI and all are healthy. I tried restarting the whole cluster, but the same problem happened again. I did not see anything in the ResourceManager logs. Should I change any configuration as shown in this &lt;A href="https://issues.apache.org/jira/browse/HDFS-693"&gt;thread&lt;/A&gt;?&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2016 04:21:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154710#M20665</guid>
      <dc:creator>jade_liu</dc:creator>
      <dc:date>2016-02-25T04:21:12Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154711#M20666</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2065/jadeliu.html" nodeid="2065"&gt;@Jade Liu&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Can you check the settings of the following parameters?&lt;/P&gt;&lt;P&gt;In my case they are:&lt;/P&gt;&lt;P&gt;dfs.datanode.max.transfer.threads = 4096&lt;/P&gt;&lt;P&gt;dfs.datanode.handler.count = 10&lt;/P&gt;&lt;P&gt;dfs.client.file-block-storage-locations.num-threads = 10&lt;/P&gt;</description>
      <pubDate>Thu, 25 Feb 2016 10:12:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154711#M20666</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2016-02-25T10:12:34Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154712#M20667</link>
      <description>&lt;P&gt;Thanks @&lt;A href="https://community.hortonworks.com/users/140/nsabharwal.html"&gt;Neeraj Sabharwal ♦&lt;/A&gt;&lt;/P&gt;&lt;P&gt;dfs.datanode.max.transfer.threads = 1024&lt;/P&gt;&lt;P&gt;dfs.datanode.handler.count = 100&lt;/P&gt;&lt;P&gt;&lt;EM&gt;I did not set the property dfs.client.file-block-storage-locations.num-threads.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;dfs.blocksize = 134217728&lt;/P&gt;&lt;P&gt;Block replication = 3&lt;/P&gt;&lt;P&gt;Reserved space for HDFS = 1 GB&lt;/P&gt;&lt;P&gt;io.file.buffer.size = 131072&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Fri, 26 Feb 2016 01:27:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154712#M20667</guid>
      <dc:creator>jade_liu</dc:creator>
      <dc:date>2016-02-26T01:27:08Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154713#M20668</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2065/jadeliu.html" nodeid="2065"&gt;@Jade Liu&lt;/A&gt; Can you set the following values for those properties?&lt;/P&gt;&lt;P&gt;dfs.datanode.max.transfer.threads = 4096&lt;/P&gt;&lt;P&gt;dfs.datanode.handler.count = 10&lt;/P&gt;&lt;P&gt;dfs.client.file-block-storage-locations.num-threads = 10    --&amp;gt; you can add this&lt;/P&gt;</description>
      <pubDate>Fri, 26 Feb 2016 01:50:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154713#M20668</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2016-02-26T01:50:58Z</dc:date>
    </item>
    <item>
      <title>Re: WRITE_BLOCK Error in HDFS logs</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154714#M20669</link>
      <description>&lt;P&gt;Problem fixed. It turns out we had a Sqoop job that kept writing to the cluster; once we killed it, the problem went away. Thanks @&lt;A href="https://community.hortonworks.com/users/140/nsabharwal.html"&gt;Neeraj Sabharwal ♦&lt;/A&gt;!&lt;/P&gt;</description>
      <pubDate>Fri, 26 Feb 2016 02:17:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WRITE-BLOCK-Error-in-HDFS-logs/m-p/154714#M20669</guid>
      <dc:creator>jade_liu</dc:creator>
      <dc:date>2016-02-26T02:17:12Z</dc:date>
    </item>
  </channel>
</rss>