
I was recently involved with quite possibly the worst HBase performance debugging issue of my life so far. The issue first arose with a generic problem statement: after X hours of processing, tasks accessing HBase begin to take over 10 times longer than before. Upon restarting HBase, performance returned to expected levels. There were no obvious errors in the HBase logs, the HDFS logs, or the hosts' syslogs.

This problem would manifest itself with a near-constant period: every X hours after a restart. It affected different types of client tasks (both reads and writes) and was not limited to a specific node or set of nodes. Strangely, despite all of our inspection of HBase logs and profiling information, HBase appeared to be functioning perfectly fine; just slower. This led us to investigate numerous operating system configuration changes and additional monitoring, none of which completely explained the circumstances and symptoms of the problem.

After many long days of investigation and experimentation with JVM options, we stumbled onto the first explanation that satisfied (or at least did not invalidate) those circumstances: a known, unfixed bug in Java 7 in which JIT compilation is disabled after the JIT's code cache performs a flush to reclaim space.

https://bugs.openjdk.java.net/browse/JDK-8051955

The JIT (just-in-time) compiler runs behind the scenes in Java, compiling Java bytecode into native machine code. JIT compilation is designed to help long-lived Java applications run fast without negatively affecting the start-up time of short-lived applications. After a method has been invoked enough times, it is compiled from Java bytecode into machine code and cached by the JVM. Subsequent invocations of a cached method execute the machine code directly instead of interpreting the Java bytecode.
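As a quick, standalone illustration of the JIT at work (a throwaway example, not specific to HBase), the -XX:+PrintCompilation flag used later in this article can be added to any java invocation; each line of output corresponds to one method being compiled to machine code:

# Example: print one line per method the JIT compiles while the JVM starts and exits
java -XX:+PrintCompilation -version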

Analysis:

On a 64-bit JVM with Java 7, this cache has a size of 50MB, which is sufficient for most applications. Methods which are not invoked frequently are evicted from the cache; this helps keep the JVM from quickly reaching the limit. With sufficient time, however, the cache can still become full, which triggers a temporary halt of JIT compilation and caching while the cache is flushed. In Java 7 there is an unresolved issue in which JIT compilation is not re-enabled after the code cache is flushed. The process continues to run, but no new machine code is cached, so methods whose compiled code was flushed fall back to the interpreter and their bytecode is re-interpreted on every invocation instead of executing cached machine code.
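To see what limit a particular JDK would use, the value of ReservedCodeCacheSize can be dumped with -XX:+PrintFlagsFinal (a simple check; the exact default varies with the JVM version and other flags):

# Example: show the configured code cache size for the local JDK
java -XX:+PrintFlagsFinal -version | grep -i ReservedCodeCacheSize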

We were able to confirm that this is what was happening by enabling two JVM options for the HBase services in hbase-env.sh:

  1. -XX:+PrintCompilation
  2. -XX:+PrintSafepointStatistics

The first option prints a log message for every compilation, every method marked as "not entrant" (the method is a candidate to be removed from the cache), and every method marked as "zombie" (removed from the cache). This is helpful in determining when JIT compilation is happening.

The second option prints debugging information about the JVM safepoints which are taken. A JVM safepoint can be thought of as a low-level "lock" -- the safepoint is taken to provide mutual exclusion at the JVM level. A common use for enabling this option is to analyze the frequency and duration of garbage collection operations; for example, the concurrent-mark-and-sweep (CMS) collector takes safepoints at various points in its execution. When the code cache becomes full and a flushing event occurs, a safepoint named "HandleFullCodeCache" is taken.
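For reference, here is a sketch of how these two diagnostic flags could be enabled in hbase-env.sh, mirroring the HBASE_SERVER_OPTS pattern used for the fix below (the same HDP <=2.6 caveat about HBASE_REGIONSERVER_OPTS applies):

# Enable JIT compilation and safepoint logging for the HBase services
export HBASE_SERVER_OPTS="$HBASE_SERVER_OPTS -XX:+PrintCompilation -XX:+PrintSafepointStatistics"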

The combination of these two options can show that a Java process performs JIT compilation up until the point that the "HandleFullCodeCache" safepoint is executed, and that no further JIT compilation happens after that point. In our case, the point at which JIT compilation stopped was within roughly one hour of when the tasks reportedly began to see performance issues.

In our case, we did not observe the following log message, which was meant to make this obscure issue more obvious. We missed it because we were working remotely on a decent-sized installation, which made it infeasible to collect and analyze all of the logs:

Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
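If you suspect this issue, a simple search of the HBase log directory (including the .out files, where HotSpot warnings typically land) can save a lot of digging. The path below is an assumption; adjust it for your installation:

# Hypothetical example: search HBase logs for the code cache warning
grep -R "CodeCache is full" /var/log/hbase/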

Solution:

There are two solutions to this problem: one short-term and one long-term. The short-term solution is to increase the size of the JVM code cache from the default of 50MB on 64-bit JVMs. This can be accomplished via the -XX:ReservedCodeCacheSize JVM option. Increasing this to a sufficiently large value can prevent the code cache from ever becoming completely full.

export HBASE_SERVER_OPTS="$HBASE_SERVER_OPTS -XX:ReservedCodeCacheSize=256m"

On HDP releases <=2.6, it is necessary to set the HBASE_REGIONSERVER_OPTS variable explicitly instead:

export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:ReservedCodeCacheSize=256m"
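To confirm that the new value took effect on a running process, jinfo can print the flag for a live JVM (the <pid> below is a placeholder for the RegionServer's process id):

# Example: verify the code cache size on a running RegionServer (replace <pid>)
jinfo -flag ReservedCodeCacheSize <pid>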

The implication of this configuration is that it reserves more memory for the code cache (which lives outside of the Java heap), leaving slightly less for everything else; the cost is typically quite minor (hundreds of MB when we typically consider multiple GB). The long-term solution is to upgrade to Java 8. Java 7 has long been end-of-life'd by Oracle, and this is a prime example of a known issue that was never patched in Java 7. It is strongly recommended that any user still on Java 7 have a plan to move to Java 8 as soon as possible. No other changes are required on Java 8, as it is not subject to this bug.
