<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Data node down in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154494#M116950</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2528/jyadav.html" nodeid="2528"&gt;@Jitendra Yadav&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;This is the ulimit output:&lt;/P&gt;&lt;P&gt;core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1029927
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited&lt;/P&gt;</description>
    <pubDate>Thu, 23 Jun 2016 21:22:55 GMT</pubDate>
    <dc:creator>arunpoy</dc:creator>
    <dc:date>2016-06-23T21:22:55Z</dc:date>
    <item>
      <title>Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154489#M116945</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I see the exception below, and it brings the DataNode down. From the errors, can anyone suggest which HDFS configuration parameter I should look at and tune? I guess it is due to a lack of resources to write into HDFS.&lt;/P&gt;&lt;P&gt;2016-06-23 08:25:39,553 INFO  datanode.DataNode (DataNode.java:transferBlock(1959)) - DatanodeRegistration(10.107.107.150:50010, datanodeUuid=c0f91520
-d7ca-4fa3-b618-0832721376ad, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-9561e6ec-bc63-4bb6-934c-e89019a53c39;nsid=198
4339524;c=0) Starting thread to transfer BP-1415030235-10.107.107.100-1452778704087:blk_1077927121_4186297 to 10.107.107.152:50010 
2016-06-23 08:25:39,554 WARN  datanode.DataNode (BPServiceActor.java:run(851)) - Unexpected exception in block pool Block pool BP-1415030235-10.107.10
7.100-1452778704087 (Datanode Uuid c0f91520-d7ca-4fa3-b618-0832721376ad) service to /10.107.107.100:8020
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlock(DataNode.java:1962)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlocks(DataNode.java:1971)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:657)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:615)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:877)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:684)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:843)
        at java.lang.Thread.run(Thread.java:745)
2016-06-23 08:25:39,554 WARN  datanode.DataNode (BPServiceActor.java:run(854)) - Ending block pool service for: Block pool BP-1415030235-10.107.107.10
0-1452778704087 (Datanode Uuid c0f91520-d7ca-4fa3-b618-0832721376ad) service to 10.107.107.100:8020
2016-06-23 08:25:39,657 INFO  datanode.DataNode (BlockPoolManager.java:remove(103)) - Removed Block pool BP-1415030235-10.107.107.100-1452778704087 (D
atanode Uuid c0f91520-d7ca-4fa3-b618-0832721376ad)
2016-06-23 08:25:39,658 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2511)) - Removing block pool BP-1415030235-10.107.107.100-14527
78704087
2016-06-23 08:25:39,800 INFO  datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-1415030235-10.107.107.100-1452778704087:blk_10779
27235_4186411, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-06-23 08:25:40,337 INFO  datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-1415030235-10.107.107.100-1452778704087:blk_10779
27234_4186410, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-06-23 08:25:41,078 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(934)) - Exception for BP-1415030235-10.107.107.100-1452778704087:blk_
1077927238_4186414
2016-06-23 08:25:41,089 INFO  datanode.DataNode (BlockReceiver.java:run(1369)) - PacketResponder: BP-1415030235-10.107.107.100-1452778704087:blk_1077927237_4186413, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2016-06-23 08:25:41,089 INFO  datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-1415030235-10.107.107.100-1452778704087:blk_1077927237_4186413, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-06-23 08:25:41,089 INFO  datanode.DataNode (DataXceiver.java:writeBlock(840)) - opWriteBlock BP-1415030235-10.107.107.100-1452778704087:blk_1077927237_4186413 received exception java.io.IOException: Premature EOF from inputStream
2016-06-23 08:25:41,089 ERROR datanode.DataNode (DataXceiver.java:run(278)) - :50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.107.107.150:62004 dst: /10.107.107.150:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:807)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2016-06-23 08:25:41,671 WARN  datanode.DataNode (DataNode.java:secureMain(2540)) - Exiting Datanode
2016-06-23 08:25:41,673 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 0
2016-06-23 08:25:41,677 INFO  datanode.DataNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at 10.107.107.150
************************************************************/&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:01:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154489#M116945</guid>
      <dc:creator>arunpoy</dc:creator>
      <dc:date>2016-06-23T21:01:54Z</dc:date>
    </item>
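A note on the stack trace above: the OutOfMemoryError is thrown at Thread.start0, which means the OS refused to create a native thread; the Java heap is not necessarily the only culprit, and the per-user process cap (ulimit -u) and thread counts matter too. A minimal, Linux-specific sketch for gauging how close a host is to its thread limits (run it on the affected DataNode host):

```shell
# Rough check of host-wide thread pressure versus the per-user process cap.
# "unable to create new native thread" fires when the OS cannot allocate
# another native thread, commonly because ulimit -u (or kernel pid_max)
# has been reached, or because thread stacks exhaust virtual memory.
threads_now=$(ps -eLf | wc -l)   # approximate total threads on the host
user_cap=$(ulimit -u)            # max user processes; threads count here
echo "threads on host: ${threads_now}, per-user cap: ${user_cap}"
```

If the two numbers are in the same ballpark, raising the heap alone will not help; the process cap for the service user needs attention as well.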
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154490#M116946</link>
      <description>&lt;P&gt;This looks like an OOM issue with the DataNode; please increase the heap size of the DataNode process and see if that resolves the issue.&lt;/P&gt;&lt;PRE&gt;(BPServiceActor.java:run(851)) - Unexpected exception in block pool Block pool BP-1415030235-10.107.10 7.100-1452778704087 (Datanode Uuid c0f91520-d7ca-4fa3-b618-0832721376ad) service to /10.107.107.100:8020 java.lang.OutOfMemoryError: unable to create &lt;/PRE&gt;&lt;P&gt;Also check that the ulimit settings are sufficient on the DataNode machines. &lt;/P&gt;&lt;PRE&gt;bash-4.1$ ulimit -a&lt;/PRE&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:09:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154490#M116946</guid>
      <dc:creator>jyadav</dc:creator>
      <dc:date>2016-06-23T21:09:07Z</dc:date>
    </item>
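A small sketch of the ulimit check suggested above. It assumes the DataNode runs under a service account named "hdfs" (shown in comments; adjust to your cluster), since limits for your login shell can differ from those of the service user:

```shell
# Inspect the limits relevant to the OOM above for the current shell.
max_procs=$(ulimit -u)   # caps native threads; this is the limit behind
                         # "unable to create new native thread"
open_files=$(ulimit -n)  # 32768 in the output above, usually enough
echo "max user processes: ${max_procs}, open files: ${open_files}"
# To check as the service user instead (run as root; "hdfs" is an assumption):
#   su - hdfs -s /bin/bash -c 'ulimit -a'
# For an already-running DataNode, /proc shows the effective limits:
#   cat /proc/$(pgrep -f DataNode | head -n 1)/limits
```

The /proc check is the authoritative one, because a running daemon keeps the limits it was started with even if the system defaults change later.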
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154491#M116947</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2528/jyadav.html" nodeid="2528"&gt;@Jitendra Yadav&lt;/A&gt;, thanks for your response. What is the property, and what is the recommended size? We have 256 GB of RAM per machine.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:11:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154491#M116947</guid>
      <dc:creator>arunpoy</dc:creator>
      <dc:date>2016-06-23T21:11:14Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154492#M116948</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/2302/arunpoy.html" nodeid="2302"&gt;@ARUNKUMAR RAMASAMY&lt;/A&gt;&lt;P&gt;Check the heap size and the ulimits (for the hdfs user).&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:11:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154492#M116948</guid>
      <dc:creator>yjagadeesan</dc:creator>
      <dc:date>2016-06-23T21:11:30Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154493#M116949</link>
      <description>&lt;P&gt;Since your machines have 256 GB of RAM, I would suggest keeping the DataNode heap size between 6 and 8 GB.&lt;/P&gt;&lt;P&gt;You can change the heap size from the Ambari UI (HDFS -&amp;gt; Configs).&lt;/P&gt;&lt;P&gt;See the screenshot.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/5203-screen-shot-2016-06-23-at-31802-pm.png"&gt;screen-shot-2016-06-23-at-31802-pm.png&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:18:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154493#M116949</guid>
      <dc:creator>jyadav</dc:creator>
      <dc:date>2016-06-23T21:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154494#M116950</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2528/jyadav.html" nodeid="2528"&gt;@Jitendra Yadav&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;This is the ulimit output:&lt;/P&gt;&lt;P&gt;core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1029927
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:22:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154494#M116950</guid>
      <dc:creator>arunpoy</dc:creator>
      <dc:date>2016-06-23T21:22:55Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154495#M116951</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2528/jyadav.html" nodeid="2528"&gt;@Jitendra Yadav&lt;/A&gt;, &lt;A rel="user" href="https://community.cloudera.com/users/1929/yjagadeesan.html" nodeid="1929"&gt;@Yogeshprabhu&lt;/A&gt;, the DataNode heap size is just 1 GB; it is the default set during installation. Maybe I need to change that.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:26:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154495#M116951</guid>
      <dc:creator>arunpoy</dc:creator>
      <dc:date>2016-06-23T21:26:33Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154496#M116952</link>
      <description>&lt;P&gt;Yes, please increase the DataNode heap size to 6 GB and restart the DataNode service on all the hosts.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:30:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154496#M116952</guid>
      <dc:creator>jyadav</dc:creator>
      <dc:date>2016-06-23T21:30:49Z</dc:date>
    </item>
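For reference, a sketch of the equivalent manual change for clusters not managed through the Ambari UI. The variable name follows stock Apache Hadoop 2.x and the file path is an assumption for your distribution; on an Ambari-managed cluster, make the change in the UI as described above, or it will be overwritten on the next restart:

```shell
# Hypothetical line to add to hadoop-env.sh (path varies by distribution,
# e.g. /etc/hadoop/conf/hadoop-env.sh) to give the DataNode a 6 GB heap.
# Setting -Xms equal to -Xmx avoids heap resizing pauses on a long-running
# daemon; existing options are preserved by appending the old value.
export HADOOP_DATANODE_OPTS="-Xms6g -Xmx6g ${HADOOP_DATANODE_OPTS}"
echo "DataNode JVM opts: ${HADOOP_DATANODE_OPTS}"
```

After editing, restart the DataNode on each host and confirm the new heap with jps/ps on the running process.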
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154497#M116953</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2302/arunpoy.html" nodeid="2302"&gt;@ARUNKUMAR RAMASAMY&lt;/A&gt; Yes, change it. You might need to restart HDFS and other services as Ambari suggests. &lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:33:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154497#M116953</guid>
      <dc:creator>yjagadeesan</dc:creator>
      <dc:date>2016-06-23T21:33:18Z</dc:date>
    </item>
    <item>
      <title>Re: Data node down</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154498#M116954</link>
      <description>&lt;P&gt;Go with the recommendations above from &lt;A rel="user" href="https://community.cloudera.com/users/2528/jyadav.html" nodeid="2528"&gt;@Jitendra Yadav&lt;/A&gt;. &lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 21:42:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Data-node-down/m-p/154498#M116954</guid>
      <dc:creator>yjagadeesan</dc:creator>
      <dc:date>2016-06-23T21:42:03Z</dc:date>
    </item>
  </channel>
</rss>

