
Introduction

Apache Hadoop's efficiency and reliability depend significantly on the performance of the Java Virtual Machine (JVM). To help monitor and analyze JVM pauses, especially those induced by Garbage Collection (GC), Hadoop includes a utility named JvmPauseMonitor, a Hadoop Common class introduced by https://issues.apache.org/jira/browse/HADOOP-9618 and widely used across Hadoop projects such as HDFS, HBase, YARN, Oozie, and MapReduce.

Overview of JvmPauseMonitor

The reference code is the org.apache.hadoop.util.JvmPauseMonitor class in Hadoop Common.

Key Features and Implementation

private String formatMessage(long extraSleepTime,
    Map<String, GcTimes> gcTimesAfterSleep,
    Map<String, GcTimes> gcTimesBeforeSleep) {

  Set<String> gcBeanNames = Sets.intersection(
      gcTimesAfterSleep.keySet(),
      gcTimesBeforeSleep.keySet());
  List<String> gcDiffs = Lists.newArrayList();
  for (String name : gcBeanNames) {
    GcTimes diff = gcTimesAfterSleep.get(name).subtract(
        gcTimesBeforeSleep.get(name));
    if (diff.gcCount != 0) {
      gcDiffs.add("GC pool '" + name + "' had collection(s): " +
          diff.toString());
    }
  }

  String ret = "Detected pause in JVM or host machine (eg GC): " +
      "pause of approximately " + extraSleepTime + "ms\n";
  if (gcDiffs.isEmpty()) {
    ret += "No GCs detected";
  } else {
    ret += Joiner.on("\n").join(gcDiffs);
  }
  return ret;
}

Monitoring Cycle: 

It runs a sleep-wake cycle: it sleeps via Thread.sleep(SLEEP_INTERVAL_MS) and compares the intended sleep duration against the actual elapsed time to identify JVM pauses.

SLEEP_INTERVAL_MS is defined as:

private static final long SLEEP_INTERVAL_MS = 500;
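To make the cycle concrete, here is a minimal, self-contained sketch of the same idea (not the actual Hadoop source; the class name, method names, and the System.out reporting are illustrative): snapshot GC counts, sleep for SLEEP_INTERVAL_MS, measure how much extra time actually elapsed, and report that extra time as a detected pause.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;

public class PauseMonitorSketch {
  private static final long SLEEP_INTERVAL_MS = 500;

  public static void main(String[] args) throws InterruptedException {
    while (true) {
      // Snapshot GC activity before sleeping.
      Map<String, Long> gcCountsBefore = gcCounts();
      long start = System.nanoTime();
      Thread.sleep(SLEEP_INTERVAL_MS);
      long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
      // Anything beyond the intended sleep is a pause (GC, OS, hardware, ...).
      long extraSleepTime = elapsedMs - SLEEP_INTERVAL_MS;
      Map<String, Long> gcCountsAfter = gcCounts();
      if (extraSleepTime > 1000) {  // info-level threshold, see "Logging Thresholds" below
        System.out.println("Detected pause of approximately " + extraSleepTime
            + "ms; GC counts before=" + gcCountsBefore + ", after=" + gcCountsAfter);
      }
    }
  }

  // Collection count per GC bean, a minimal stand-in for the GcTimes snapshots.
  private static Map<String, Long> gcCounts() {
    Map<String, Long> counts = new HashMap<>();
    for (GarbageCollectorMXBean bean : ManagementFactory.getGarbageCollectorMXBeans()) {
      counts.put(bean.getName(), bean.getCollectionCount());
    }
    return counts;
  }
}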

Evaluation of GC Metrics: 

It assesses garbage collection metrics before and after the sleep interval to determine whether GC activities contributed to the detected pauses.

Compute Diff:

diff is computed by subtracting the GC metrics captured before the sleep (gcTimesBeforeSleep) from those after the sleep (gcTimesAfterSleep) for each garbage collector identified by name.
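For illustration, a GcTimes-like helper could look like the sketch below (GcTimesSketch is a hypothetical name; the real GcTimes is an inner class of JvmPauseMonitor). It snapshots the collection count and accumulated collection time of one GC bean, and subtract() yields the delta across the sleep interval:

import java.lang.management.GarbageCollectorMXBean;

class GcTimesSketch {
  final long gcCount;       // total number of collections so far
  final long gcTimeMillis;  // total time spent in collections so far

  GcTimesSketch(GarbageCollectorMXBean bean) {
    this.gcCount = bean.getCollectionCount();
    this.gcTimeMillis = bean.getCollectionTime();
  }

  private GcTimesSketch(long count, long timeMillis) {
    this.gcCount = count;
    this.gcTimeMillis = timeMillis;
  }

  // GC activity that happened between the "before" and "after" snapshots.
  GcTimesSketch subtract(GcTimesSketch other) {
    return new GcTimesSketch(this.gcCount - other.gcCount,
        this.gcTimeMillis - other.gcTimeMillis);
  }

  @Override
  public String toString() {
    return "count=" + gcCount + " time=" + gcTimeMillis + "ms";
  }
}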

Determine GC type:

If GC events occurred during the sleep interval, diff.gcCount is positive and the pause is logged together with the GC pool(s) responsible (a "GC" pause).

If no GC events occurred during the sleep interval, diff.gcCount is zero and the pause is reported as "No GCs detected" (a "No GC" pause).

Logging Thresholds:

JvmPauseMonitor uses configurable thresholds (INFO_THRESHOLD_DEFAULT = 1000 ms for info-level logs and WARN_THRESHOLD_DEFAULT = 10000 ms for warn-level logs) to categorize the severity of detected pauses.
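A minimal sketch of how the two thresholds are typically applied to a detected pause (the real code reads both thresholds from configuration; the class name and logger setup below are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PauseLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(PauseLoggingSketch.class);
  private static final long INFO_THRESHOLD_MS = 1000;   // default info threshold
  private static final long WARN_THRESHOLD_MS = 10000;  // default warn threshold

  void logPause(long extraSleepTime, String message) {
    if (extraSleepTime > WARN_THRESHOLD_MS) {
      LOG.warn(message);        // severe pause: WARN level
    } else if (extraSleepTime > INFO_THRESHOLD_MS) {
      LOG.info(message);        // noticeable pause: INFO level
    }
    // Pauses below the info threshold are not logged.
  }
}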

Understanding "GC" and "No GC"

"GC" Detected: If the difference (diff) in GC metrics (count and time) before and after the pause indicates an increase, it signifies that GC events occurred during the pause. These events are detailed in the log message, specifying which GC pools were active and the extent of their activity.

if (diff.gcCount != 0) {
  gcDiffs.add("GC pool '" + name + "' had collection(s): " + diff.toString());
}

"No GC" Detected: Conversely, if no significant differences in GC metrics are found (i.e., gcDiffs is empty), the pause is logged without attributing it to GC activities. This scenario suggests the pause was caused by factors other than garbage collection, such as OS-level scheduling delays. 

if (gcDiffs.isEmpty()) {
  ret += "No GCs detected";
}

GC logging/debug:

Add the parameters below to the JVM options; in Cloudera Manager (CM) they can usually be added to the Java options (java_opts) in the role's configuration.

Basic logging parameters for performance tuning:

-verbose:gc
-XX:-PrintGCCause
-XX:+PrintGCDetails
-XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCDateStamps
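For example, the basic flags can be combined into a single option string for java_opts; adding -Xloggc to write the GC log to a file is optional, and the path below is only illustrative:

-verbose:gc -XX:-PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -Xloggc:/var/log/hadoop-hdfs/namenode-gc.log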

Additional parameters that may be used for deeper evaluation:

-XX:+PrintClassHistogramBeforeFullGC
-XX:+PrintClassHistogramAfterFullGC
-XX:+PrintReferenceGC
-XX:+PrintTenuringDistribution

You can either analyze the collected GC logs directly or use a tool such as GCEasy (https://gceasy.io/).

Possible Reasons

Possible reasons for "GC" pauses inside the JVM are:

  • Heap size and utilization problems
  • Improper GC parameters
  • Overloaded JVM threads
  • Known bugs or memory leaks

Many articles already explain JVM GC issues, so we do not discuss them in depth here.

Possible reasons for a "No GC" pause:

A "No GC" pause implies that garbage collection did not cause the pause. Several factors outside of GC can lead to such pauses:

  • Operating System Pauses

The application or the JVM process might be paused due to the operating system's actions, such as swapping memory to disk, other processes consuming excessive CPU resources, or OS-level maintenance tasks. 

  1. Use uptime, iostat, and sar to find the OS utilization pattern.
  2. Check /var/log/messages and `dmesg -T` for kernel warnings.
  3. Review the Cloudera Manager host charts, e.g. load average, disk, and network utilization.
  4. Review the Diagnostic Bundle alerts and health checks from CSI.
  • Hardware Issues

Underlying hardware problems, such as failing disks, network issues, or memory errors, can lead to pauses in the JVM as it waits for I/O operations or encounters hardware errors.

  • External System Calls

Calls to external systems or services, especially those that involve network communication or disk access, can introduce delays. If these calls block the main application threads, they can result in pauses not attributed to GC.

Example case of a "No GC" pause that caused the NameNode to crash

Log snippets:

A "No GC" pause happened:

2024-02-25 21:31:27,854 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7861ms
No GCs detected
2024-02-25 21:31:35,704 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7350ms
No GCs detected

The NameNode is not responsive; sendEdits calls through the QuorumJournalManager are waiting for responses:

2024-02-25 21:31:36,703 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 8849 ms (timeout=20000 ms) for a response for sendEdits. No responses yet.
2024-02-25 21:31:37,703 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9850 ms (timeout=20000 ms) for a response for sendEdits. No responses yet.
2024-02-25 21:31:38,704 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10851 ms (timeout=20000 ms) for a response for sendEdits. No responses yet.
2024-02-25 21:31:39,706 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11852 ms (timeout=20000 ms) for a response for sendEdits. No responses yet.

The NameNode is fenced and shuts down:

2024-02-25 21:31:43,517 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: flush failed for required journal
2024-02-25 21:32:13,947 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Charts

The "No GC" is due to a sudden write I/O on /root directory (sys disk) observed from Grafana:

(Chart 1) Disk utilization of directories

(Chart 2) Disk write I/O capacity per second

(Chart 3) Disk write IOPS

(Chart 4) Disk I/O operation time in 1 second

We found a cron job that runs the "hdfs oiv" command to parse the fsimage into fsimage.csv. It was writing the large CSV file to the /root directory, which is on the system disk, causing the sudden I/O peak and, in turn, the "No GC" pauses.
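As a remediation sketch, the offline image viewer output can be directed to a dedicated data disk instead of the system disk; the paths below are placeholders:

hdfs oiv -p Delimited -i /path/to/fsimage -o /data1/reports/fsimage.csv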

Reference:

Garbage Collection Pauses in Namenode and Datanode
