<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HBase latency spikes every 10 minutes in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328743#M230319</link>
    <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/81261"&gt;@PrathapKumar&lt;/a&gt;&amp;nbsp;for pointing out what to check.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So far I can confirm that on the data nodes there are no:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Slow BlockReceiver write data to disk cost&lt;/LI&gt;&lt;LI&gt;Slow BlockReceiver write packet to mirror took&lt;/LI&gt;&lt;LI&gt;Slow flushOrSync took/Slow manageWriterOsCache took&lt;/LI&gt;&lt;LI&gt;Any other WARN/ERROR messages.&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Mon, 25 Oct 2021 07:24:15 GMT</pubDate>
    <dc:creator>kras</dc:creator>
    <dc:date>2021-10-25T07:24:15Z</dc:date>
    <item>
      <title>HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327442#M230064</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have an issue with my HBase cluster. HBase version: 2.0.2.3.1.4.0-315&lt;/P&gt;&lt;P&gt;There are latency spikes every 10mins on all HBase operations, mostly visible on reads. Please have a look at the first graph below. The metric for the graph is `hbase_table_latency_gettime_max`.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-10-12 at 21.20.11.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/32648i5D57812C129A9F33/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2021-10-12 at 21.20.11.png" alt="Screenshot 2021-10-12 at 21.20.11.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I also see spikes every 10mins on `hbase_regionserver_ipc_queuecalltime`; please have a look at the graph below:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-10-12 at 21.20.51.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/32649i9DAF2FD8C57E192E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2021-10-12 at 21.20.51.png" alt="Screenshot 2021-10-12 at 21.20.51.png" /&gt;&lt;/span&gt;&lt;BR /&gt;What I've checked so far:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;It doesn't look like GC, as GC doesn't correlate with the spike times.&lt;/LI&gt;&lt;LI&gt;It is not a major compaction; I see spikes with and without it.&lt;/LI&gt;&lt;LI&gt;It is not replication; I did a test with and without replication.&lt;/LI&gt;&lt;LI&gt;I see nothing suspicious in the logs, or at least nothing that caught my attention (DEBUG and TRACE levels were enabled).&lt;/LI&gt;&lt;LI&gt;Memstore flushes happen every hour.&lt;/LI&gt;&lt;LI&gt;The number of active handlers looks good to me; it is set according to recommendations.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-10-12 at 21.53.33.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/32651i4B12FF3CA76E9A59/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2021-10-12 at 21.53.33.png" alt="Screenshot 2021-10-12 at 21.53.33.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;There are scans of meta happening every 5 mins (please have a look at the graph below).&lt;/LI&gt;&lt;LI&gt;There are scans of namespace happening every 10 mins,&amp;nbsp;slightly before the spikes (please have a look at the graph below).&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-10-12 at 21.45.06.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/32650i1047222156EA2F10/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2021-10-12 at 21.45.06.png" alt="Screenshot 2021-10-12 at 21.45.06.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you help me and maybe share some ideas on what else I could check?&amp;nbsp;I would much appreciate it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 12 Oct 2021 20:03:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327442#M230064</guid>
      <dc:creator>kras</dc:creator>
      <dc:date>2021-10-12T20:03:53Z</dc:date>
    </item>
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327606#M230103</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/82118"&gt;@kras&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;1. Is it CDH or HDP, and what is the version?&lt;/P&gt;&lt;P&gt;2. In the RegionServer logs, are there&amp;nbsp;“responseTooSlow”, “operationTooSlow”, or any other WARN/ERROR messages? Please provide log snippets.&lt;/P&gt;&lt;P&gt;3. How is the locality of the regions? (Check locality in the HBase web UI: click on a table; on the right side there is a column showing each region's locality.)&lt;/P&gt;&lt;P&gt;4. How many regions are deployed on each RegionServer?&lt;/P&gt;&lt;P&gt;5. Are there any warnings / errors in the RS log around the spike?&lt;/P&gt;&lt;P&gt;6. Is any job trying to scan every 10 min? Which table contributes the most I/O? Is there any hotspotting?&lt;/P&gt;&lt;P&gt;7. Is HDFS healthy? Check the DN logs; are there any slow messages around the spike? Refer to&amp;nbsp;&lt;A href="https://my.cloudera.com/knowledge/Diagnosing-Errors-Error-Slow-ReadProcessor-Error-Slow?id=73443" target="_blank"&gt;https://my.cloudera.com/knowledge/Diagnosing-Errors-Error-Slow-ReadProcessor-Error-Slow?id=73443&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Regards,&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Will&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Oct 2021 03:00:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327606#M230103</guid>
      <dc:creator>willx</dc:creator>
      <dc:date>2021-10-14T03:00:09Z</dc:date>
    </item>
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327907#M230185</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/82089"&gt;@willx&lt;/a&gt;, thanks a lot for your questions!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 1. Is it CDH or HDP, what is the version.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HDP 3.1.4.0-315&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 2. In regionserver logs is there “responseTooSlow” or “operationTooSlow” or any other WARN/ERROR messages. please provide log snippets.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Yes, I have “responseTooSlow” in the logs; have a look at the example below.&amp;nbsp;But it doesn't correlate with the spike times, and there are very few of them during a day.&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="java"&gt;WARN [RpcServer.default.FPBQ.Fifo.handler=22,queue=3,port=16020] ipc.RpcServer: (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","starttimems":1634529195627,"responsesize":2846904,"method":"Multi","param":"region= table_name,%,1539591382521.35818b60a3e8dba8d3d1fe0f0d02b292., for 13378 action(s) and 1st row key=&amp;amp;C&amp;gt;\\x15\\x86\\xE7k\\xA6\\xFD5\\ &amp;lt;TRUNCATED&amp;gt;","processingtimems":11644,"client":"ip:port","queuetimems":0,"class":"HRegionServer"}
&lt;/LI-CODE&gt;&lt;UL&gt;&lt;LI&gt;There are no ERRORs.&lt;/LI&gt;&lt;LI&gt;Other WARNs:&lt;/LI&gt;&lt;/UL&gt;&lt;LI-CODE lang="java"&gt;WARN [RpcServer.default.FPBQ.Fifo.handler=10,queue=10,port=16020] regionserver.RSRpcServices: Large batch operation detected (greater than 5000) (HBASE-18023). Requested Number of Rows: 12596 Client: svc-stats//ip first region in multi=table_name,\x09,1541077881948.9bcc8cee00ab92b2402730813923c2f6.&lt;/LI-CODE&gt;&lt;LI-CODE lang="java"&gt;WARN [RpcServer.default.FPBQ.Fifo.handler=55,queue=17,port=16020] regionserver.MultiVersionConcurrencyControl: STUCK: MultiVersionConcurrencyControl{readPoint=3971335621, writePoint=3971335632}&lt;/LI-CODE&gt;&lt;LI-CODE lang="java"&gt;WARN [Close-WAL-Writer-3012] asyncfs.FanOutOneBlockAsyncDFSOutputHelper: complete file /foo/WALs/host,port,1633080603058/host%2C16020%2C1633080603058.1634479683029 not finished, retry = 0&lt;/LI-CODE&gt;&lt;P&gt;For half of a day, the count of each WARN is:&lt;/P&gt;&lt;LI-CODE lang="java"&gt;grep WARN hbase-hbase-regionserver.log | grep "2021-10-18" | grep "responseTooSlow" | wc -l
13

grep WARN hbase-hbase-regionserver.log | grep "2021-10-18" | grep "Large batch operation detected" | wc -l
4194

grep WARN hbase-hbase-regionserver.log | grep "2021-10-18" | grep "MultiVersionConcurrencyControl" | wc -l
33

grep WARN hbase-hbase-regionserver.log | grep "2021-10-18" | grep "FanOutOneBlockAsyncDFSOutputHelper" | wc -l
4&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;&amp;gt; 3. How is the locality of the regions (check locality on hbase webUI, click on table, on right side there is a column shows each region locality.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Locality is 100% on all RS.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 4. How many regions deployed on each RegionServer.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have 5 RS with 79 regions each. Each RS has 16 GB of heap and 65 GB of off-heap allocated. The Hadoop cluster is backed by SSDs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 5. Any warning / errors in RS log around the spike?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;No errors. The only WARNs are the ones I mentioned above, and I would say only&amp;nbsp;&lt;STRONG&gt;Large batch operation detected (greater than 5000)&amp;nbsp;&lt;/STRONG&gt;pops up a lot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 6. Is any job trying to scan every 10 min? Which table contribute most I/O? Is there any hotspot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;No cron jobs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;gt; 7. is HDFS healthy? check DN logs, is there any slow messages around the spike? Refer to &lt;A href="https://my.cloudera.com/knowledge/Diagnosing-Errors-Error-Slow-ReadProcessor-Error-Slow?id=73443" target="_blank"&gt;https://my.cloudera.com/knowledge/Diagnosing-Errors-Error-Slow-ReadProcessor-Error-Slow?id=73443&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Unfortunately I don't have access to the link. There are no WARN/ERROR messages on the DNs. HDFS looks healthy; the cluster serves plenty of requests with very low latency (&amp;lt; 10ms).&lt;/P&gt;</description>
      <pubDate>Mon, 18 Oct 2021 15:21:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/327907#M230185</guid>
      <dc:creator>kras</dc:creator>
      <dc:date>2021-10-18T15:21:32Z</dc:date>
    </item>
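The per-pattern grep pipelines quoted above can be collapsed into a single pass over the log. A minimal sketch (the log file name comes from the thread; everything else is illustrative):

```python
from collections import Counter

# WARN patterns counted in the thread (RegionServer log)
PATTERNS = [
    "responseTooSlow",
    "Large batch operation detected",
    "MultiVersionConcurrencyControl",
    "FanOutOneBlockAsyncDFSOutputHelper",
]

def count_warns(path, day="2021-10-18"):
    """Count each WARN pattern for one day in a single pass over the log."""
    counts = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            if "WARN" in line and day in line:
                for p in PATTERNS:
                    if p in line:
                        counts[p] += 1
    return counts

# Example: counts = count_warns("hbase-hbase-regionserver.log")
```

One pass instead of four greps also guarantees all counts come from the same snapshot of the file.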
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328112#M230195</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/82118"&gt;@kras&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From the evidence you provided, the most frequent warning is:&lt;/P&gt;&lt;LI-CODE lang="java"&gt;WARN [RpcServer.default.FPBQ.Fifo.handler=10,queue=10,port=16020] regionserver.RSRpcServices: Large batch operation detected (greater than 5000) (HBASE-18023). Requested Number of Rows: 12596 Client: svc-stats//ip first region in multi=table_name,\x09,1541077881948.9bcc8cee00ab92b2402730813923c2f6.&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;This is logged when an RPC is received from a client that has more than 5000 "actions" (where an "action" is a collection of mutations for a specific row) in a single RPC. Misbehaving clients that send large RPCs to RegionServers can be harmful, causing temporary pauses via garbage collection or denial of service via crashes. The threshold of 5000 actions per RPC is defined by the property "hbase.rpc.rows.warning.threshold" in hbase-site.xml.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Please refer to this Jira: &lt;A href="https://issues.apache.org/jira/browse/HBASE-18023" target="_blank"&gt;https://issues.apache.org/jira/browse/HBASE-18023&lt;/A&gt;&amp;nbsp;for a detailed explanation.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;We can see the table name is "table_name"; please check which application is writing to / reading from this table. The simplest way is to halt this application and see if performance improves. If you find that the latency spikes are due to this table, please improve your application logic and control your batch size.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;If you have already fixed the "harmful" applications but still see performance issues, I would recommend reading through this article, which covers the most common performance issues and tuning suggestions:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-1/ta-p/248137" target="_blank"&gt;https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-1/ta-p/248137&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The article has 5 parts; reading through it will give you ideas for tuning your HBase.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;This issue looks a little complex; multiple factors may be impacting your HBase performance. We encourage you to raise a support case with Cloudera.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Regards,&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Will&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;If the answer helps, please accept as solution and click thumbs up.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Oct 2021 11:57:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328112#M230195</guid>
      <dc:creator>willx</dc:creator>
      <dc:date>2021-10-19T11:57:52Z</dc:date>
    </item>
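The "control your batch size" advice above can be sketched on the client side by splitting a large mutation list before it is sent; a hypothetical helper, assuming the default 5000-row value of hbase.rpc.rows.warning.threshold mentioned in the thread:

```python
# Default of hbase.rpc.rows.warning.threshold (assumption from the thread)
WARNING_THRESHOLD = 5000

def chunk_actions(actions, max_per_rpc=WARNING_THRESHOLD):
    """Split a large list of row actions into RPC-sized batches so that
    no single Multi request exceeds the server's warning threshold."""
    for i in range(0, len(actions), max_per_rpc):
        yield actions[i:i + max_per_rpc]

# Each chunk would then be submitted as its own batch mutation by
# whatever HBase client the application uses (client API not specified
# in the thread, so it is left abstract here).
```

For the 12596-row request in the warning, this would produce three RPCs of 5000, 5000, and 2596 rows.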
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328655#M230293</link>
      <description>&lt;P&gt;There are many reasons high latency can happen, including OS/kernel bugs (update your system), swap, transparent huge pages, or pauses by a hypervisor.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) As we are seeing "responseTooSlow" on the RegionServers, please check the data node logs for the underlying issue.&lt;/P&gt;&lt;P&gt;2) Check whether the following ERROR/WARN messages appear in the data node logs:&lt;/P&gt;&lt;P&gt;Slow BlockReceiver write data to disk cost - This indicates that there was a delay in writing the block to the OS cache or disk.&lt;BR /&gt;Slow BlockReceiver write packet to mirror took - This indicates that there was a delay in writing the block across the network.&lt;BR /&gt;Slow flushOrSync took/Slow manageWriterOsCache took - This indicates that there was a delay in writing the block to the OS cache or disk.&lt;/P&gt;&lt;P&gt;3) If the above ERROR/WARN messages are present, engage the infra team and the OS vendor team to fix the underlying hardware issues.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 23 Oct 2021 05:50:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328655#M230293</guid>
      <dc:creator>PrathapKumar</dc:creator>
      <dc:date>2021-10-23T05:50:10Z</dc:date>
    </item>
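Since the spikes recur on a schedule, it helps to check the DataNode logs specifically around a spike timestamp rather than the whole day. A minimal sketch for the slow-message patterns listed above (log file path and timestamp format are assumptions, based on the usual "YYYY-MM-DD HH:MM:SS,mmm" Hadoop log prefix):

```python
from datetime import datetime

SLOW_PATTERNS = [
    "Slow BlockReceiver write data to disk cost",
    "Slow BlockReceiver write packet to mirror took",
    "Slow flushOrSync took",
    "Slow manageWriterOsCache took",
]

def slow_lines_near(path, spike, window_s=120):
    """Return DataNode log lines matching a Slow* pattern within
    window_s seconds of a known spike timestamp."""
    hits = []
    with open(path, errors="replace") as f:
        for line in f:
            if not any(p in line for p in SLOW_PATTERNS):
                continue
            try:
                ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
            except ValueError:
                continue  # line has no parsable timestamp prefix
            if abs((ts - spike).total_seconds()) > window_s:
                continue
            hits.append(line.rstrip())
    return hits

# Example: slow_lines_near("hadoop-hdfs-datanode.log",
#                          datetime(2021, 10, 18, 10, 0, 0))
```

An empty result around every spike, as the poster reports, points the investigation away from HDFS and back at the RegionServer or OS layer.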
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328739#M230318</link>
      <description>&lt;P&gt;I can confirm the `Large batch operation detected` WARN is not the cause of the spikes. The client producing that traffic was identified and disabled; that didn't resolve the issue.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;WARN [RpcServer.default.FPBQ.Fifo.handler=10,queue=10,port=16020] regionserver.RSRpcServices: Large batch operation detected (greater than 5000) (HBASE-18023). Requested Number of Rows: 12596 Client: svc-stats//ip first region in multi=table_name,\x09,1541077881948.9bcc8cee00ab92b2402730813923c2f6.&lt;/LI-CODE&gt;</description>
      <pubDate>Mon, 25 Oct 2021 07:21:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328739#M230318</guid>
      <dc:creator>kras</dc:creator>
      <dc:date>2021-10-25T07:21:23Z</dc:date>
    </item>
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328743#M230319</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/81261"&gt;@PrathapKumar&lt;/a&gt;&amp;nbsp;for pointing out what to check.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So far I can confirm that on the data nodes there are no:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Slow BlockReceiver write data to disk cost&lt;/LI&gt;&lt;LI&gt;Slow BlockReceiver write packet to mirror took&lt;/LI&gt;&lt;LI&gt;Slow flushOrSync took/Slow manageWriterOsCache took&lt;/LI&gt;&lt;LI&gt;Any other WARN/ERROR messages.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Mon, 25 Oct 2021 07:24:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328743#M230319</guid>
      <dc:creator>kras</dc:creator>
      <dc:date>2021-10-25T07:24:15Z</dc:date>
    </item>
    <item>
      <title>Re: HBase latency spikes every 10 minutes</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328753#M230322</link>
      <description>&lt;P&gt;Hi, &lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/82118"&gt;@kras&lt;/a&gt; Thank you for writing back with your observations.&lt;/P&gt;&lt;P&gt;Can you please check the below details as well?&lt;/P&gt;&lt;P&gt;1) When the Region Server JVM reports high CPU, open the "top" command for the Region Server PID,&lt;/P&gt;&lt;P&gt;2) Use "Shift H" to open the thread view of the PID. This shows the threads within the Region Server JVM with their CPU usage,&lt;/P&gt;&lt;P&gt;3) Monitor the thread view &amp;amp; identify the thread hitting the max CPU usage,&lt;/P&gt;&lt;P&gt;4) Take a thread dump (jstack) of the Region Server PID &amp;amp; compare the threads against the "top" thread view entry consuming the highest CPU,&lt;/P&gt;&lt;P&gt;5) Check the CPU usage of the other services hosted on the Region Server host.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The above process would allow you to identify the thread contributing towards the CPU usage. Compare the same with the other Region Servers &amp;amp; your team can make a conclusive call on the reason for the CPU utilization. However thoroughly the logs are reviewed, narrowing the focus of the JVM review would assist in identifying the cause. Review the shared links for additional reference.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Ref: &amp;nbsp;&lt;A href="https://www.infoworld.com/article/3336222/java-challengers-6-thread-behavior-in-the-jvm.html" target="_blank"&gt;https://www.infoworld.com/article/3336222/java-challengers-6-thread-behavior-in-the-jvm.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;A href="https://blogs.manageengine.com/application-performance-2/appmanager/2011/02/09/identify-java-code-co.." target="_blank"&gt;https://blogs.manageengine.com/application-performance-2/appmanager/2011/02/09/identify-java-code-co..&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;A href="https://blog.jamesdbloom.com/JVMInternals.html" target="_blank"&gt;https://blog.jamesdbloom.com/JVMInternals.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;&lt;P&gt;Prathap Kumar.&lt;/P&gt;</description>
      <pubDate>Mon, 25 Oct 2021 08:30:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HBase-latency-spikes-every-10-minutes/m-p/328753#M230322</guid>
      <dc:creator>PrathapKumar</dc:creator>
      <dc:date>2021-10-25T08:30:00Z</dc:date>
    </item>
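Step 4 above, comparing the hot thread from "top -H" with a jstack dump, relies on one detail: top shows thread ids in decimal, while jstack prints them as a hex "nid" field. A minimal sketch of that matching (the stanza format is the standard HotSpot thread-dump layout; the helper names are illustrative):

```python
def nid_for_tid(tid):
    """top -H shows Linux thread ids in decimal; jstack prints them
    as a hexadecimal nid field, e.g. nid=0x3039 for tid 12345."""
    return "nid=0x{:x}".format(tid)

def find_hot_thread(jstack_output, tid):
    """Return the jstack stanza whose nid matches the decimal thread id
    taken from top's thread view, or None if it is not present."""
    marker = nid_for_tid(tid)
    for stanza in jstack_output.split("\n\n"):
        if marker in stanza:
            return stanza
    return None
```

Repeating this over a few dumps taken during a spike shows whether the same thread (a handler, a flusher, a GC thread) is pinned each time.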
  </channel>
</rss>

