<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: data nodes evicted randomly and cluster marks node for decomm in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/79957#M28091</link>
    <description>&lt;P&gt;I see the following error in the log:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#ff0000"&gt;java.lang.OutOfMemoryError: Java heap space&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;How much heap memory have you allocated to the DataNode right now?&lt;/P&gt;&lt;P&gt;Can you try increasing the DataNode heap size?&lt;/P&gt;</description>
    <pubDate>Tue, 18 Sep 2018 07:04:19 GMT</pubDate>
    <dc:creator>sid2707</dc:creator>
    <dc:date>2018-09-18T07:04:19Z</dc:date>
    <item>
      <title>data nodes evicted randomly and cluster marks node for decomm</title>
      <link>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/21809#M28089</link>
      <description>&lt;P&gt;Hello all,&lt;/P&gt;&lt;P&gt;I am working on a cluster where data nodes are evicted randomly and the cluster marks the node for decommissioning.&amp;nbsp; The DataNode processes have to be killed and restarted.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is a random event and difficult to replicate.&amp;nbsp; I am attaching the error log from the hadoop-hdfs DataNode.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2014-11-19 07:35:13,847 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: mdata07:50010:DataXceiver error processing WRITE_BLOCK operation&amp;nbsp; src: /10.10.10.103:46686 dest: /10.10.10.107:50010&lt;BR /&gt;java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.EPollArrayWrapper.&amp;lt;init&amp;gt;(EPollArrayWrapper.java:120)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.EPollSelectorImpl.&amp;lt;init&amp;gt;(EPollSelectorImpl.java:68)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:409)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:203)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:623)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)&lt;BR /&gt;&amp;nbsp;at java.lang.Thread.run(Thread.java:744)&lt;BR /&gt;2014-11-19 07:36:16,565 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode{data=FSDataset{dirpath='[/data-mount/hadoop/dfs/dn/current]'}, localName='mdata07:50010', datanodeUuid='7181ecc9-ab8e-491a-b37b-b5be724701af', xmitsInProgress=0}:Exception transfering block BP-2015128538-10.10.10.10-1403613223603:blk_1088831462_15096140 to mirror 10.10.10.101:50010: java.io.EOFException: Premature EOF: no length prefix available&lt;BR /&gt;2014-11-19 07:36:15,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.10.10.107, datanodeUuid=7181ecc9-ab8e-491a-b37b-b5be724701af, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-5ee917ca-3875-4db6-bfef-2ebdc160b420;nsid=185559117;c=0):Exception writing BP-2015128538-10.10.10.10-1403613223603:blk_1088831431_15096109 to mirror 10.10.10.105:50010&lt;BR /&gt;java.io.IOException: Broken pipe&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.FileDispatcherImpl.write0(Native Method)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.IOUtil.write(IOUtil.java:65)&lt;BR /&gt;&amp;nbsp;at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)&lt;BR /&gt;&amp;nbsp;at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)&lt;BR /&gt;&amp;nbsp;at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)&lt;BR /&gt;&amp;nbsp;at java.io.DataOutputStream.write(DataOutputStream.java:107)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.mirrorPacketTo(PacketReceiver.java:200)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:494)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)&lt;BR /&gt;&amp;nbsp;at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)&lt;BR /&gt;&amp;nbsp;at java.lang.Thread.run(Thread.java:744)&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:13:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/21809#M28089</guid>
      <dc:creator>akhan_enki</dc:creator>
      <dc:date>2022-09-16T09:13:45Z</dc:date>
    </item>
    <item>
      <title>Re: data nodes evicted randomly and cluster marks node for decomm</title>
      <link>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/21817#M28090</link>
      <description>&lt;P&gt;I also noticed this error:&lt;/P&gt;&lt;P&gt;HAS_DOWNSTREAM_IN_PIPELINE&lt;BR /&gt;java.io.EOFException: Premature EOF: no length prefix available&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1988)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1083)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.lang.Thread.run(Thread.java:744)&lt;BR /&gt;2014-11-19 11:39:06,712 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-2015128538-10.10.10.10-1403613223603:blk_1088878247_15143005 src: /10.10.10.100:52326 dest: /10.10.10.100:50010&lt;BR /&gt;2014-11-19 11:39:44,941 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-2015128538-10.10.10.10-1403613223603:blk_1088878255_15143013 src: /10.10.10.104:57300 dest: /10.10.10.100:50010&lt;BR /&gt;2014-11-19 11:39:43,972 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-2015128538-10.10.10.10-1403613223603:blk_1088878236_15142994&lt;BR /&gt;java.io.IOException: Premature EOF from inputStream&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.lang.Thread.run(Thread.java:744)&lt;BR /&gt;2014-11-19 11:39:47,664 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():&lt;BR /&gt;java.io.IOException: Broken pipe&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.FileDispatcherImpl.write0(Native Method)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.IOUtil.write(IOUtil.java:65)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.DataOutputStream.flush(DataOutputStream.java:123)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1306)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1246)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1167)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.lang.Thread.run(Thread.java:744)&lt;BR /&gt;2014-11-19 11:40:44,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-2015128538-10.10.10.10-1403613223603:blk_1088878236_15142994, type=HAS_DOWNSTREAM_IN_PIPELINE&lt;BR /&gt;java.io.IOException: Broken pipe&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.FileDispatcherImpl.write0(Native Method)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.IOUtil.write(IOUtil.java:65)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.io.DataOutputStream.flush(DataOutputStream.java:123)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1306)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1246)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1167)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.lang.Thread.run(Thread.java:744)&lt;BR /&gt;2014-11-19 11:40:59,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-2015128538-10.10.10.10-1403613223603:blk_1088878236_15142994, type=HAS_DOWNSTREAM_IN_PIPELINE terminating&lt;BR /&gt;2014-11-19 11:39:39,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-2015128538-10.10.10.10-1403613223603:blk_1088878251_15143009 src: /10.10.10.100:52327 dest: /10.10.10.100:50010&lt;BR /&gt;2014-11-19 11:39:27,357 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-2015128538-10.10.10.10-1403613223603:blk_1088878111_15142869&lt;BR /&gt;java.io.IOException: Premature EOF from inputStream&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at java.lang.Thread.run(Thread.java:744)&lt;/P&gt;</description>
      <pubDate>Wed, 19 Nov 2014 16:54:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/21817#M28090</guid>
      <dc:creator>akhan_enki</dc:creator>
      <dc:date>2014-11-19T16:54:48Z</dc:date>
    </item>
    <item>
      <title>Re: data nodes evicted randomly and cluster marks node for decomm</title>
      <link>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/79957#M28091</link>
      <description>&lt;P&gt;I see the following error in the log:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#ff0000"&gt;java.lang.OutOfMemoryError: Java heap space&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;How much heap memory have you allocated to the DataNode right now?&lt;/P&gt;&lt;P&gt;Can you try increasing the DataNode heap size?&lt;/P&gt;
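&lt;P&gt;For reference, a minimal sketch of how you might check and raise the heap, assuming a plain Apache Hadoop install where the DataNode heap is set through HADOOP_DATANODE_OPTS in hadoop-env.sh (on a Cloudera Manager cluster, use the DataNode Java heap size setting in the HDFS service configuration instead); the 4 GB value below is only an example:&lt;/P&gt;&lt;PRE&gt;# Show the -Xmx flag currently passed to the running DataNode JVM
ps -ef | grep '[D]ataNode' | grep -o '\-Xmx[^ ]*'

# In hadoop-env.sh: raise the DataNode heap (example value, tune to your cluster)
export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"

# Restart the DataNode so the new setting takes effect
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode&lt;/PRE&gt;</description>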
      <pubDate>Tue, 18 Sep 2018 07:04:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/data-nodes-evicted-randomly-and-cluster-marks-node-for/m-p/79957#M28091</guid>
      <dc:creator>sid2707</dc:creator>
      <dc:date>2018-09-18T07:04:19Z</dc:date>
    </item>
  </channel>
</rss>