
Datanode heap exhaustion

New Contributor

I recently upgraded our cluster from HDP2.2 to HDP2.3.4.7, and I'm now getting datanode "heap alerts" in Ambari after the processes have been running for a few days. I've increased the heap memory from 1G -> 2G -> 3G, but it seems like it's not making much (if any) difference.

The datanodes have just over 0.5M blocks on them. (Is this too high? I just noticed CDH generates an alarm if a datanode goes above 0.5M blocks...)

Any tips on how to troubleshoot this? I'm not entirely clear if this is expected behavior for the datanode in HDP2.3 (does it really need a lot more memory?) or if there's something else going on.

(At this point I've been doing a rolling restart of my datanodes to get the heap back)

Also, in case it's useful:

jmap -heap output for the datanode process:

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 188743680 (180.0MB)
   used     = 57840904 (55.16138458251953MB)
   free     = 130902776 (124.83861541748047MB)
   30.645213656955296% used
Eden Space:
   capacity = 167772160 (160.0MB)
   used     = 36869384 (35.16138458251953MB)
   free     = 130902776 (124.83861541748047MB)
   21.975865364074707% used
From Space:
   capacity = 20971520 (20.0MB)
   used     = 20971520 (20.0MB)
   free     = 0 (0.0MB)
   100.0% used
To Space:
   capacity = 20971520 (20.0MB)
   used     = 0 (0.0MB)
   free     = 20971520 (20.0MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 2877292544 (2744.0MB)
   used     = 2615359104 (2494.2008056640625MB)
   free     = 261933440 (249.7991943359375MB)
   90.89653081866118% used

ps output for the datanode process:

hdfs 20635 4.7 4.7 5005732 3144808 ? Sl Jun12 152:17 jsvc.exec -Dproc_datanode -outfile /var/log/hadoop/hdfs/jsvc.out -errfile /var/log/hadoop/hdfs/jsvc.err -pidfile /var/run/hadoop/hdfs/hadoop_secure_dn.pid -nodetach -user hdfs -cp /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.3.4.7-4/hadoop/lib/*:/usr/hdp/2.3.4.7-4/hadoop/.//*:/usr/hdp/2.3.4.7-4/hadoop-hdfs/./:/usr/hdp/2.3.4.7-4/hadoop-hdfs/lib/*:/usr/hdp/2.3.4.7-4/hadoop-hdfs/.//*:/usr/hdp/2.3.4.7-4/hadoop-yarn/lib/*:/usr/hdp/2.3.4.7-4/hadoop-yarn/.//*:/usr/hdp/2.3.4.7-4/hadoop-mapreduce/lib/*:/usr/hdp/2.3.4.7-4/hadoop-mapreduce/.//* -Xmx3072m -Dhdp.version=2.3.4.7-4 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.4.7-4/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.4.7-4/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.4.7-4/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.4.7-4 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-datanode-kibo-b2.int.mathereconomics.com.log -Dhadoop.home.dir=/usr/hdp/2.3.4.7-4/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.4.7-4/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.4.7-4/hadoop/lib/native:/usr/hdp/2.3.4.7-4/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.4.7-4/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.id.str=hdfs -jvm server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop//hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop//gc.log-201606122237 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms2944m -Xmx2944m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop//hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop//gc.log-201606122237 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms2944m -Xmx2944m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop//hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop//gc.log-201606122237 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms2944m -Xmx2944m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter

Re: Datanode heap exhaustion

Super Guru

@Steve Steyic, if the alert states that the percentage of heap used is too high, then it's possible that this is just a natural consequence of the way the JVM manages the heap and reports its usage. It might not be a real operational problem. 0.5 M blocks on a DataNode is not necessarily too large.

As objects are allocated, they'll fill the heap. Many objects turn out to be short-lived and get garbage collected relatively quickly. Longer-lived objects might not be cleared until the heap is very highly consumed and the JVM decides to trigger a full garbage collection. If you plot heap usage over time, this shows up as a sawtooth pattern.

Sometimes a JVM will just hover at a high reported heap usage. This can happen if a large number of long-lived objects couldn't be collected quickly, but there isn't quite enough new allocation activity right now to trigger the full garbage collection that would bring the reported usage back down. One way to test this theory is to trigger a full garbage collection manually on one of the DataNodes, such as by running this command against the DataNode's PID:

jcmd <PID> GC.run

If the reported heap usage drops significantly after running this command, then that validates the above theory.
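
For example, one way to watch the effect is to sample GC utilization with jstat while you trigger the collection (like jcmd, it has to run as the user that owns the DataNode process); a rough sketch, using the same <PID> placeholder as above:

# sample heap occupancy every 5 seconds; the O column is the old (CMS) generation, in percent
jstat -gcutil <PID> 5000

# in a second terminal, trigger the full collection and watch the O column drop
jcmd <PID> GC.run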

A possible way to "stay ahead" of this problem and do full garbage collections more frequently is to add the following garbage collection options to HADOOP_DATANODE_OPTS:

-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=<percent>

The typical value for <percent> is 70. In some cases, this may need tuning to find the optimal value, but in practice, I have seen 70 work well almost always.
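
For example, on an HDP cluster these flags would typically be appended to the existing HADOOP_DATANODE_OPTS in hadoop-env (managed through Ambari under HDFS > Configs > Advanced hadoop-env); a minimal sketch, assuming 70 as the starting fraction:

# hadoop-env.sh: append to the existing DataNode options rather than replacing them
export HADOOP_DATANODE_OPTS="${HADOOP_DATANODE_OPTS} -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"

A DataNode restart is needed for the new options to take effect.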

If you're interested in more background on garbage collection, then you might want to read NameNode Garbage Collection Configuration: Best Practices and Rationale. This article is admittedly focused on the NameNode instead of the DataNode, but much of the background information on garbage collection and configuration is applicable to any JVM process.

Re: Datanode heap exhaustion

Explorer

I'm using HDP 2.3 and in the last few days I've run into the same problem as @Steve Steyic.

It is strange because we haven't written much more data, so the block count is more or less the same, but Ambari keeps nagging about the datanode heap usage being too high. I tried the jcmd approach suggested by @Chris Nauroth, but when I run it I get:

[root@hds04 ~]# /usr/jdk64/jdk1.8.0_40/bin/jcmd $(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode) GC.run
27781:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
    at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
    at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
    at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
    at sun.tools.jcmd.JCmd.executeCommandForPid(JCmd.java:147)
    at sun.tools.jcmd.JCmd.main(JCmd.java:131)

where 27781 is indeed the datanode PID. I'll try to force the full GC through HADOOP_DATANODE_OPTS, but I'm curious about why jcmd cannot attach to the datanode's JVM.

EDIT: fixed, I just had to run jcmd as the hdfs user (which is the owner of the datanode process).
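
For reference, the working invocation then looks something like this (27781 being the datanode PID from above; sudo -u hdfs is just one way to run it as the hdfs user):

# attach as the process owner (hdfs), not as root
sudo -u hdfs /usr/jdk64/jdk1.8.0_40/bin/jcmd 27781 GC.run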

Re: Datanode heap exhaustion

@Davide Ferrari, correct, the command must be run as the same user that ran the target process. Sorry for my delayed response. I'm glad to see you worked through this anyway.

Re: Datanode heap exhaustion

Contributor
# on a datanode:
sudo su - hdfs
jcmd $(pgrep -U hdfs -f ".*DataNode") GC.run

I can confirm this works; the alert also disappears (for a while) in Ambari (95% -> 43% heap use).

Re: Datanode heap exhaustion

New Contributor

One thing I found was that the datanode heap utilization alert in Ambari was added only in a very recent release (within the past one or two, I don't recall exactly which).

Watching how this behaves, the datanode heap seems to bump up against the max on a regular basis, but that doesn't seem to cause any actual problems.

I didn't have much luck trying to keep the heap below the alert threshold (I tried the fixes proposed above and they didn't help), and the heap utilization didn't seem to be causing any actual problems, so I just disabled that alert in Ambari.
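
For anyone who wants to do the same: the alert can be toggled off from the Ambari Web UI (Alerts page), or, if you prefer to script it, through the Ambari REST API, which exposes alert definitions with an "enabled" flag. The sketch below is from memory, so treat the endpoint, payload, cluster name, host, and credentials as assumptions to verify against your Ambari version:

# find the id of the DataNode heap usage alert definition (check the returned labels)
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster>/alert_definitions?fields=AlertDefinition/label'

# disable it, substituting the id found above
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"AlertDefinition":{"enabled":false}}' \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster>/alert_definitions/<id>'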

Re: Datanode heap exhaustion

Contributor

I'm facing the same issues. I also bumped the datanode heap up from 1 to 2 gigs and still get Ambari alerts, with no noticeable HDFS effects.

Good to know it's a new alert, which might need tuning. It could also depend on the JVM version; we are on Oracle Java 1.8.

Some garbage collection options from a running datanode process (-XX: prefix; one way to extract these is sketched after the list):

  • +UseConcMarkSweepGC
  • ParallelGCThreads=4
  • NewSize=200M
  • MaxNewSize=200M
  • PermSize=128M
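
As noted above, one way to pull these out of the running process is simply to filter its command line (running jcmd <PID> VM.flags as the hdfs user would also show them); a rough sketch:

# list the -XX options of the running datanode process, one per line
ps -ef | grep '[p]roc_datanode' | tr ' ' '\n' | grep '^-XX' | sort -u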