Finding out the hit rate

It is possible to determine the cache hit rate per node or per query. Per node, you can see the hit rate in the LLAP metrics view (<llap node>:15002; see General debugging):

[Screenshot: LLAP daemon metrics view showing the cache hit rate]

Per query, you can see the LLAP IO counters (including the hit rate) by setting hive.tez.exec.print.summary=true before running the query; the counters are then printed at the end of the query output. For example, with an empty cache:

[Screenshot: LLAP IO counters for a query run with an empty cache]

With some data in the cache:

[Screenshot: LLAP IO counters for a query run with some data already in the cache]
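
To see these counters for your own query, the following is a minimal sketch; the table, column, and filter are placeholders, and only hive.tez.exec.print.summary comes from this article (counter names such as CACHE_HIT_BYTES may vary slightly between versions):

    -- Print the Tez summary, including LLAP IO counters, when the query finishes.
    SET hive.tez.exec.print.summary=true;

    -- Any query that reads ORC data through LLAP IO will do; the table is a placeholder.
    SELECT COUNT(*) FROM my_orc_table WHERE some_col = 'some_value';

    -- In the printed summary, compare the data counters (e.g. CACHE_HIT_BYTES vs
    -- CACHE_MISS_BYTES) and the metadata cache counters to estimate the hit rate.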

Why is the cache hit rate low?

  1. Consider the data size relative to the total cache size across all nodes. For example, with a 10 GB cache on each of 4 LLAP nodes (40 GB total), reading 1 TB of data cannot achieve more than ~4% cache hit rate even in the perfect case. In practice the rate will be lower due to the effects of compression (the cache is not snappy/gzip compressed, only encoded), interference from other queries, locality misses, etc. (A sketch for checking the configured cache size follows this list.)
  2. In HDP 2.x/Hive 2.x, the cache uses coarser granularity to avoid some fragmentation issues that are resolved in HDP 3.0/Hive 3.0. This can waste considerable cache memory on some workloads, especially if the table has many strings with a small range of values, and/or is written with compression buffer sizes smaller than 256 KB. When writing data, consider ensuring that the ORC compression buffer size is set to 256 KB, and set hive.exec.orc.buffer.size.enforce=true (on HDP 2.6, this requires a backport) to prevent writing smaller compression buffers; a sketch follows this list. This issue does not result in errors, but it can make the cache less efficient.
  3. If the cache size seems sufficient, check the relative data and metadata hit rates (see the screenshots above). If there are both data and metadata misses, it can be because other queries are caching different data in place of this query's data, or it could be a locality issue. Check hive.llap.task.scheduler.locality.delay; if IO is a bottleneck, it can be increased (or set to -1 for infinite delay) to get better locality at the cost of waiting longer to launch tasks; a sketch follows this list.
  4. If the metadata hit rate is very high but the data hit rate is lower, it is likely that the cache does not fit all the data; some data gets evicted, while metadata, which is cached with high priority, stays in the cache.
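
For item 1, a minimal sketch for sanity-checking the configured cache size; hive.llap.io.memory.size is the LLAP IO cache size setting, but note that on Ambari-managed clusters the authoritative value is the one in the LLAP daemon configuration, which may differ from what the client session reports:

    -- Per-daemon LLAP IO cache size as seen by this session;
    -- multiply by the number of LLAP nodes for the total cache size.
    SET hive.llap.io.memory.size;

    -- Rough upper bound on the hit rate of a cold full read:
    -- total cache (e.g. 4 nodes x 10 GB = 40 GB) / data read (e.g. 1 TB) ~= 4%.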
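
For item 2, a sketch of writing a table with 256 KB ORC compression buffers; the table, columns, and source are placeholders, and orc.compress.size is the standard ORC table property for the compression buffer size in bytes:

    -- Disallow writing compression buffers smaller than the configured size
    -- (on HDP 2.6 this setting requires a backport, as noted above).
    SET hive.exec.orc.buffer.size.enforce=true;

    -- 262144 bytes = 256 KB; table and columns are placeholders.
    CREATE TABLE my_table_orc (id BIGINT, val STRING)
    STORED AS ORC
    TBLPROPERTIES ('orc.compress.size'='262144');

    INSERT OVERWRITE TABLE my_table_orc SELECT id, val FROM my_source_table;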
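
For item 3, a sketch of relaxing the locality delay for a session; the setting name and the -1 (infinite) value come from this article, while the bounded value shown in the comment is only an illustrative assumption:

    -- Wait indefinitely for a node-local LLAP daemon before launching each task;
    -- better cache locality at the cost of slower task launch when nodes are busy.
    SET hive.llap.task.scheduler.locality.delay=-1;

    -- Alternatively, a bounded delay (illustrative value; the setting takes a time value):
    -- SET hive.llap.task.scheduler.locality.delay=8000ms;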