<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Multiple HPROF files are getting generated with hdfs user in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219282#M60421</link>
    <description>Multiple HPROF files are getting generated with hdfs user</description>
    <pubDate>Thu, 04 May 2017 00:35:47 GMT</pubDate>
    <dc:creator>textshilpa</dc:creator>
    <dc:date>2017-05-04T00:35:47Z</dc:date>
    <item>
      <title>Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219282#M60421</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;I have a 3-node cluster running CentOS 6.7 with Cloudera 5.9.&lt;/P&gt;&lt;P&gt;The NameNode has been facing an issue with .hprof files in the /tmp directory, leading to 100% disk usage on the / mount. The owner of these files is hdfs:hadoop.&lt;/P&gt;&lt;P&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/2956iB18BECE70D710038/image-size/large?v=1.0&amp;amp;px=600" alt="hprof.PNG" title="hprof.PNG" /&gt;&lt;/P&gt;&lt;P&gt;I know an hprof file is created when a heap dump of the process is taken at the time of the failure. This is typically seen in scenarios with "java.lang.OutOfMemoryError".&lt;/P&gt;&lt;P&gt;Hence I increased the RAM of my NN from 56GB to 112GB.&lt;/P&gt;&lt;P&gt;My configs are:&lt;/P&gt;&lt;PRE&gt;yarn.nodemanager.resource.memory-mb - 12GB
yarn.scheduler.maximum-allocation-mb - 16GB
mapreduce.map.memory.mb - 4GB
mapreduce.reduce.memory.mb - 4GB
mapreduce.map.java.opts.max.heap - 3GB
mapreduce.reduce.java.opts.max.heap - 3GB
namenode_java_heapsize - 6GB
secondarynamenode_java_heapsize - 6GB
dfs_datanode_max_locked_memory - 3GB
dfs blocksize - 128 MB&lt;/PRE&gt;&lt;P&gt;The DataNode log on the NN has the errors below, but they are also present on the other DNs (on all 3 nodes, basically):&lt;/P&gt;&lt;PRE&gt;2017-05-03 10:03:17,914 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode{data=FSDataset{dirpath='[/bigdata/dfs/dn/current]'}, localName='XXXX.azure.com:50010', datanodeUuid='4ea75665-b223-4456-9308-1defcad54c89', xmitsInProgress=0}:Exception transfering block BP-939287337-X.X.X.4-1484085163925:blk_1077604623_3864267 to mirror X.X.X.5:50010: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/X.X.X.4:43801 remote=X.X.X.5:50010]
2017-05-03 10:03:17,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: XXXX.azure.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /X.X.X.4:53902 dst: /X.X.X.4:50010
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/X.X.X.4:43801 remote=/X.X.X.5:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2241)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:743)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:745)
2017-05-03 10:04:52,371 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: XXXX.azure.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /X.X.X.4:54258 dst: /X.X.X.4:50010
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:500)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:896)&lt;/PRE&gt;&lt;P&gt;The log shows such errors even at night or in the early morning when nothing is running.&lt;/P&gt;&lt;P&gt;My cluster is used to fetch webpage info using wget and then process the data using SparkR.&lt;/P&gt;&lt;P&gt;Apart from this, I am also getting "Block count more than threshold", for which I have another thread: &lt;A href="http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Datanodes-report-block-count-more-than-threshold-on-datanode-and/m-p/54170#M2851"&gt;http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Datanodes-report-block-count-more-than-t...&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please help!&lt;/P&gt;&lt;P&gt;Cluster configs (after recent upgrades):&lt;/P&gt;&lt;P&gt;NN: RAM 112GB, 16 cores, Disk 500GB&lt;/P&gt;&lt;P&gt;DN1: RAM 56GB, 8 cores, Disk 400GB&lt;/P&gt;&lt;P&gt;DN2: RAM 28GB, 4 cores, Disk 400GB&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shilpa&lt;/P&gt;</description>
      <pubDate>Thu, 04 May 2017 00:35:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219282#M60421</guid>
      <dc:creator>textshilpa</dc:creator>
      <dc:date>2017-05-04T00:35:47Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219283#M60422</link>
      <description>&lt;P&gt;Hi, as a workaround you could prevent the generation of the hprof files by setting the JVM option HeapDumpPath to /dev/null instead of /tmp. Obviously, this will not resolve the root cause.&lt;/P&gt;
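&lt;P&gt;For example (just a sketch; it assumes Cloudera Manager already adds the -XX:+HeapDumpOnOutOfMemoryError flag to the role), the relevant part of the DataNode Java options would look roughly like this:&lt;/P&gt;&lt;PRE&gt;# illustration only; both are standard HotSpot flags, and CM normally sets the first one already
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dev/null&lt;/PRE&gt;</description>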
      <pubDate>Thu, 04 May 2017 03:12:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219283#M60422</guid>
      <dc:creator>wbekker</dc:creator>
      <dc:date>2017-05-04T03:12:45Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219284#M60423</link>
      <description>&lt;P&gt;Hmm, actually I thought about it once but didn't do it. But until I find a resolution I need a workaround, so I have changed the HeapDumpPath to /dev/null, but only for the DataNodes.&lt;/P&gt;&lt;P&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/15993i05D021DB32AD11D1/image-size/medium?v=v2&amp;amp;px=400" title="15006-tmp-hprof.png" alt="15006-tmp-hprof.png" /&gt;&lt;/P&gt;&lt;P&gt;Thanks &lt;A rel="user" href="https://community.cloudera.com/users/12921/wbekker.html" nodeid="12921" target="_blank"&gt;@Ward Bekker&lt;/A&gt; &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
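&lt;P&gt;One rough way to confirm that a restarted DataNode picked up the new path (plain ps/grep, nothing Cloudera-specific):&lt;/P&gt;&lt;PRE&gt;ps -ef | grep [D]ataNode | tr ' ' '\n' | grep HeapDump
# expected output, roughly:
# -XX:+HeapDumpOnOutOfMemoryError
# -XX:HeapDumpPath=/dev/null&lt;/PRE&gt;</description>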
      <pubDate>Sun, 18 Aug 2019 02:40:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219284#M60423</guid>
      <dc:creator>textshilpa</dc:creator>
      <dc:date>2019-08-18T02:40:37Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple HPROF files are getting generated with hdfs user</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219285#M60424</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;I found the heap size of the DataNode role on my NameNode host was low (1 GB), so I increased it to 3 GB, and hprof files are no longer getting generated. I changed the heap dump path back to /tmp 24 hours ago to verify.&lt;/P&gt;&lt;P&gt;The setting is under HDFS &amp;gt; Configuration &amp;gt; DataNode Default Group &amp;gt; Resource Management &amp;gt; Java Heap Size of DataNode in Bytes in Cloudera Manager.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Shilpa&lt;/P&gt;
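&lt;P&gt;For reference, 3 GB in that field corresponds to 3221225472 bytes, and the effective heap of a running DataNode can be double-checked from its process arguments (a rough check with plain ps/grep):&lt;/P&gt;&lt;PRE&gt;# 3 GB = 3 * 1024 * 1024 * 1024 = 3221225472 bytes
ps -ef | grep [D]ataNode | grep -o -- '-Xmx[^ ]*'
# expected: -Xmx3221225472, or an equivalent -Xmx form&lt;/PRE&gt;</description>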
      <pubDate>Sat, 06 May 2017 00:19:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Multiple-HPROF-files-are-getting-generated-with-hdfs-user/m-p/219285#M60424</guid>
      <dc:creator>textshilpa</dc:creator>
      <dc:date>2017-05-06T00:19:04Z</dc:date>
    </item>
  </channel>
</rss>

