<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Multiple HPROF files are getting generated with hdfs user in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54409#M60403</link>
    <description>Thanks &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11415"&gt;@mathieu.d&lt;/a&gt;.&lt;BR /&gt;&lt;BR /&gt;As a workaround, I have changed the heap dump path to /dev/null for the DataNode.&lt;BR /&gt;&lt;BR /&gt;I will check what you have suggested and get back, as my NN also has a DataNode role.</description>
    <pubDate>Thu, 04 May 2017 16:51:10 GMT</pubDate>
    <dc:creator>ShilpaSinha</dc:creator>
    <dc:date>2017-05-04T16:51:10Z</dc:date>
    <item>
      <title>Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54360#M60401</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a 3-node cluster running on CentOS 6.7.&lt;/P&gt;&lt;P&gt;The NameNode has been facing an issue of .hprof files in the /tmp directory, leading to 100% disk usage on the / mount. The owner of these files is hdfs:hadoop.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hprof.PNG" style="width: 600px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/2956iB18BECE70D710038/image-size/large?v=v2&amp;amp;px=999" role="button" title="hprof.PNG" alt="hprof.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I know an .hprof file is a heap dump of the process taken at the time of a failure, typically seen with "java.lang.OutOfMemoryError" (the JVM flags behind this are sketched at the end of this post).&lt;/P&gt;&lt;P&gt;Hence I increased the RAM of my NN from 56GB to 112GB.&lt;/P&gt;&lt;P&gt;My configs are:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;yarn.nodemanager.resource.memory-mb - 12GB&lt;/P&gt;&lt;P&gt;yarn.scheduler.maximum-allocation-mb - 16GB&lt;/P&gt;&lt;P&gt;mapreduce.map.memory.mb - 4GB&lt;/P&gt;&lt;P&gt;mapreduce.reduce.memory.mb - 4GB&lt;/P&gt;&lt;P&gt;mapreduce.map.java.opts.max.heap - 3GB&lt;/P&gt;&lt;P&gt;mapreduce.reduce.java.opts.max.heap - 3GB&lt;/P&gt;&lt;P&gt;namenode_java_heapsize - 6GB&lt;/P&gt;&lt;P&gt;secondarynamenode_java_heapsize - 6GB&lt;/P&gt;&lt;P&gt;dfs_datanode_max_locked_memory - 3GB&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The DataNode log on the NN shows the errors below, but they are also present on the other DNs (on all 3 nodes, basically):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;2017-05-03 10:03:17,914 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode{data=FSDataset{dirpath='[/bigdata/dfs/dn/current]'}, localName='XXXX.azure.com:50010', datanodeUuid='4ea75665-b223-4456-9308-1defcad54c89', xmitsInProgress=0}:Exception transfering block BP-939287337-X.X.X.4-1484085163925:blk_1077604623_3864267 to mirror X.X.X.5:50010: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/X.X.X.4:43801 remote=X.X.X.5:50010]
2017-05-03 10:03:17,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: XXXX.azure.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /X.X.X.4:53902 dst: /X.X.X.4:50010
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/X.X.X.4:43801 remote=/X.X.X.5:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2241)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:743)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:745)
2017-05-03 10:04:52,371 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: XXXX.azure.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /X.X.X.4:54258 dst: /X.X.X.4:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:500)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:896)&lt;/PRE&gt;&lt;P&gt;The log shows these errors even at night or in the early morning, when nothing is running.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My cluster is used to fetch webpage info with wget and then process the data with SparkR.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Apart from this, I am also getting "block count more than threshold" alerts, for which I have another thread:&amp;nbsp;&lt;A href="http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Datanodes-report-block-count-more-than-threshold-on-datanode-and/m-p/54170#M2851" target="_blank"&gt;http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Datanodes-report-block-count-more-than-threshold-on-datanode-and/m-p/54170#M2851&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help! I am worried about my cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Cluster configs (after recent upgrades):&lt;/P&gt;&lt;P&gt;NN: RAM- 112GB, Cores: 16, Disk: 500GB&lt;/P&gt;&lt;P&gt;DN1: RAM- 56GB, Cores: 8, Disk: 400GB&lt;/P&gt;&lt;P&gt;DN2: RAM- 28GB, Cores: 4, Disk: 400GB&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shilpa&lt;/P&gt;
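&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For reference, .hprof generation is driven by the standard HotSpot options below. This is a minimal sketch; that /tmp is the configured dump path here is an assumption based on where the files appear.&lt;/P&gt;&lt;PRE&gt;# Standard HotSpot flags behind .hprof creation (sketch; the exact values
# configured for this cluster's DataNode are assumptions)
-XX:+HeapDumpOnOutOfMemoryError   # write a heap dump whenever an OutOfMemoryError is thrown
-XX:HeapDumpPath=/tmp             # directory (or file) the .hprof dump is written to&lt;/PRE&gt;</description>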
      <pubDate>Fri, 16 Sep 2022 11:33:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54360#M60401</guid>
      <dc:creator>ShilpaSinha</dc:creator>
      <dc:date>2022-09-16T11:33:26Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54394#M60402</link>
      <description>&lt;P&gt;The dump refers to the DataNode role. Is there a DataNode role on the host you call the NameNode?&lt;/P&gt;&lt;P&gt;If yes, it is the memory of that role you need to increase.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I suspect the memory allocated to that role is too low. In Cloudera Manager, see HDFS &amp;gt; Configuration &amp;gt; DataNode Default Group &amp;gt; Resource Management &amp;gt; Java Heap Size of DataNode in Bytes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, if you don't investigate the content of the dumps, you can deactivate dump generation on OOM so it does not fill up your disk, as sketched below.&lt;/P&gt;
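&lt;P&gt;In JVM terms the two knobs look roughly like this (a sketch; the heap value is an example, and these options should be set through Cloudera Manager rather than by hand):&lt;/P&gt;&lt;PRE&gt;# Illustrative DataNode JVM options (not the exact flags Cloudera Manager emits)
-Xmx4294967296                    # Java Heap Size of DataNode in Bytes (4 GB, example value)
-XX:-HeapDumpOnOutOfMemoryError   # leading minus disables heap dump generation on OOM&lt;/PRE&gt;</description>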
      <pubDate>Thu, 04 May 2017 09:04:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54394#M60402</guid>
      <dc:creator>mathieu.d</dc:creator>
      <dc:date>2017-05-04T09:04:38Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54409#M60403</link>
      <description>Thanks &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11415"&gt;@mathieu.d&lt;/a&gt;.&lt;BR /&gt;&lt;BR /&gt;As a workaround, I have changed the heap dump path to /dev/null for the DataNode.&lt;BR /&gt;&lt;BR /&gt;I will check what you have suggested and get back, as my NN also has a DataNode role.
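&lt;BR /&gt;&lt;BR /&gt;The effective option is now roughly the following (a sketch of the single resulting flag):&lt;BR /&gt;&lt;PRE&gt;# dumps are "written" to /dev/null and discarded, so / no longer fills up
-XX:HeapDumpPath=/dev/null&lt;/PRE&gt;</description>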
      <pubDate>Thu, 04 May 2017 16:51:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54409#M60403</guid>
      <dc:creator>ShilpaSinha</dc:creator>
      <dc:date>2017-05-04T16:51:10Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54411#M60404</link>
      <description>&lt;P&gt;The Java heap for the DataNode was 1GB on all 3 DNs, so I changed the heap for the NN's DataNode role to 3GB. I also changed the OOM heap dump path back to /tmp, just to see whether .hprof files are still generated (a quick check is sketched below).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Java heap DN.PNG" style="width: 537px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/2958iA159512E314EFBAC/image-size/large?v=v2&amp;amp;px=999" role="button" title="Java heap DN.PNG" alt="Java heap DN.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shilpa&lt;/P&gt;
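&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A simple way to watch for new dumps (a sketch; the /tmp path follows this post, and the listing options are a matter of taste):&lt;/P&gt;&lt;PRE&gt;# list heap dumps under /tmp, newest first; re-run periodically to see
# whether fresh .hprof files still appear after the heap increase
ls -lht /tmp/*.hprof 2&amp;gt;/dev/null&lt;/PRE&gt;</description>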
      <pubDate>Thu, 04 May 2017 17:12:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54411#M60404</guid>
      <dc:creator>ShilpaSinha</dc:creator>
      <dc:date>2017-05-04T17:12:18Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54452#M60405</link>
      <description>&lt;P&gt;After increasing the heap size of the DataNode role on my NN, I have not seen .hprof files being created. The issue is resolved.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11415"&gt;@mathieu.d&lt;/a&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 May 2017 17:08:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/54452#M60405</guid>
      <dc:creator>ShilpaSinha</dc:creator>
      <dc:date>2017-05-05T17:08:01Z</dc:date>
    </item>
  </channel>
</rss>

