<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Reading zero bytes issue in apache hadoop - Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/392744#M248234</link>
    <description>&lt;P&gt;24/08/29 10:21:15 WARN DataStreamer: Exception for BP-942923949-172.20.0.202-1722967626672:blk_1074290600_560526&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: java.io.EOFException: Unexpected EOF while trying to read response from server&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:521)&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1137)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: 24/08/29 10:27:30 WARN DataStreamer: DataStreamer Exception&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: java.net.SocketTimeoutException: 495000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/172.20.0.194:40396 remote=/172.20.0.194:9866]&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:163)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:158)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:116)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at java.io.DataOutputStream.write(DataOutputStream.java:107)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DFSPacket.writeTo(DFSPacket.java:193)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at 
org.apache.hadoop.hdfs.DataStreamer.sendPacket(DataStreamer.java:857)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:762)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: 24/08/29 10:27:30 WARN DataStreamer: Error Recovery for &lt;STRONG&gt;BP-942923949-172.20.0.202-1722967626672:blk_1074290600_560526 in pipeline [DatanodeInfoWithStorage[172.20.0.194:9866,DS-d8020c9c-24d1-462a-b707-0d219ce5848a,DISK], DatanodeInfoWithStorage[172.20.0.204:9866,DS-5a287a01-fda4-4ad0-a3e6-9d24cc6d8568,DISK], DatanodeInfoWithStorage[172.20.0.196:9866,DS-90f354a6-9531-49dd-97c4-af388f697c34,DISK]]: datanode 0(DatanodeInfoWithStorage[172.20.0.194:9866,DS-d8020c9c-24d1-462a-b707-0d219ce5848a,DISK]) is bad.&lt;/STRONG&gt;&lt;BR /&gt;We have been seeing this issue for the last two months. Because of it, our script gets stuck and the lag has increased.&lt;BR /&gt;Please help.&lt;/P&gt;</description>
    <pubDate>Thu, 29 Aug 2024 05:17:12 GMT</pubDate>
    <dc:creator>HadoopCommunity</dc:creator>
    <dc:date>2024-08-29T05:17:12Z</dc:date>
    <item>
      <title>Reading zero bytes issue in apache hadoop</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/392744#M248234</link>
      <description>&lt;P&gt;24/08/29 10:21:15 WARN DataStreamer: Exception for BP-942923949-172.20.0.202-1722967626672:blk_1074290600_560526&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: java.io.EOFException: Unexpected EOF while trying to read response from server&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:521)&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)&lt;BR /&gt;Aug 29 10:21:15 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1137)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: 24/08/29 10:27:30 WARN DataStreamer: DataStreamer Exception&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: java.net.SocketTimeoutException: 495000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/172.20.0.194:40396 remote=/172.20.0.194:9866]&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:163)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:158)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:116)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at java.io.DataOutputStream.write(DataOutputStream.java:107)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DFSPacket.writeTo(DFSPacket.java:193)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at 
org.apache.hadoop.hdfs.DataStreamer.sendPacket(DataStreamer.java:857)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:762)&lt;BR /&gt;Aug 29 10:27:30 gws-siem-dnt-3 bash[2218954]: 24/08/29 10:27:30 WARN DataStreamer: Error Recovery for &lt;STRONG&gt;BP-942923949-172.20.0.202-1722967626672:blk_1074290600_560526 in pipeline [DatanodeInfoWithStorage[172.20.0.194:9866,DS-d8020c9c-24d1-462a-b707-0d219ce5848a,DISK], DatanodeInfoWithStorage[172.20.0.204:9866,DS-5a287a01-fda4-4ad0-a3e6-9d24cc6d8568,DISK], DatanodeInfoWithStorage[172.20.0.196:9866,DS-90f354a6-9531-49dd-97c4-af388f697c34,DISK]]: datanode 0(DatanodeInfoWithStorage[172.20.0.194:9866,DS-d8020c9c-24d1-462a-b707-0d219ce5848a,DISK]) is bad.&lt;/STRONG&gt;&lt;BR /&gt;We have been seeing this issue for the last two months. Because of it, our script gets stuck and the lag has increased.&lt;BR /&gt;Please help.&lt;/P&gt;</description>
      <pubDate>Thu, 29 Aug 2024 05:17:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/392744#M248234</guid>
      <dc:creator>HadoopCommunity</dc:creator>
      <dc:date>2024-08-29T05:17:12Z</dc:date>
    </item>
    <item>
      <title>Re: Reading zero bytes issue in apache hadoop</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/392918#M248282</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/107837"&gt;@HadoopCommunity&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This job is failing because of issues retrieving the block from the datanodes. Please check the datanode logs around the issue timestamp to identify the actual cause.&lt;/P&gt;</description>
      <pubDate>Sat, 31 Aug 2024 15:25:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/392918#M248282</guid>
      <dc:creator>shubham_sharma</dc:creator>
      <dc:date>2024-08-31T15:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: Reading zero bytes issue in apache hadoop</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/393127#M248345</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/107837"&gt;@HadoopCommunity&lt;/a&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Was your question answered? Please take a moment to click "Accept as Solution". If you find a reply useful, say thanks by clicking the thumbs-up button below this post.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2024 22:04:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Reading-zero-bytes-issue-in-apache-hadoop/m-p/393127#M248345</guid>
      <dc:creator>shubham_sharma</dc:creator>
      <dc:date>2024-09-05T22:04:19Z</dc:date>
    </item>
  </channel>
</rss>

