
Standby Namenode is getting RPC latency bad health alert

Explorer

Hi,

 

We are getting an RPC latency warning alert on the Standby NameNode, but we couldn't find any error message in hadoop-hdfs/standby_namenode.logs.

 

Can anyone please suggest what the reason for this could be?

 

Thanks.

1 REPLY

Community Manager

I spoke with some of my contacts about this one and here is their response. I hope it helps.

 

This warning indicates a potential performance problem that can occur for different reasons, from disk/network latency to high CPU load to GC pauses, to mention a few. Based on our earlier experience, I suggest checking/verifying the following:
1. the latency of the network services (LDAP/AD, NTP, DNS) the Standby NameNode uses
2. possible disk overload (ideally, dedicate individual disks to separate the I/O loads of the JournalNode [edit log storage], NameNode [checkpointing!], and ZooKeeper [znode persistence] services); the use of NFS-mounted storage should therefore be avoided
3. the GC activity of the Standby NameNode process ('jstat' command, service logs); run the following two commands in parallel on the Standby NameNode until you receive another alert in Cloudera Manager (a sketch for locating the JVM PID follows this list):
jstat -gc -t -h30 <SBNN JVMPID> 2s
jstat -gcutil -t -h30 <SBNN JVMPID> 2s
4. if GC activity is occasionally high, you may need to increase the heap size on both NameNodes
5. the RPC handler counts should also be set high enough to handle occasional large listing loads (similar to 'hadoop fsck /'), which can increase latencies if run too often
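
As a rough sketch of how you might locate the Standby NameNode JVM PID for the jstat commands above (assuming a typical installation where the NameNode runs under the 'hdfs' user; adjust the user and the process pattern to your environment):

# run as the user that owns the NameNode JVM (typically 'hdfs'), since jstat must attach to that JVM
SBNN_PID=$(pgrep -f 'org.apache.hadoop.hdfs.server.namenode.NameNode')

# then, in two separate terminals, sample GC every 2 seconds with a header repeated every 30 rows
jstat -gc -t -h30 "$SBNN_PID" 2s
jstat -gcutil -t -h30 "$SBNN_PID" 2s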

 

Generally speaking, the increased RPC latency has two parts: the average time the requests spend in the queue (controlled by the NameNode Handler Count property) and the time needed to process the requests. The latter depends on the performance of the HDFS metadata (edit logs, fsimage) directory. The Cloudera Manager health check alert message contains both the queue and the processing times.
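
If you want to look at those same two numbers outside Cloudera Manager, the NameNode also exposes them over its JMX servlet. A minimal sketch, assuming the default NameNode HTTP port (9870 on Hadoop 3, 50070 on Hadoop 2) and the default RPC port 8020; the bean name follows whatever RPC port is actually configured on your cluster, and the host placeholder is yours to fill in:

# average queue time vs. processing time (milliseconds) for the NameNode RPC port
curl -s 'http://<standby-namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020' | grep -E 'RpcQueueTimeAvgTime|RpcProcessingTimeAvgTime'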

 

In cases of extremely high activity, such as decommissioning and then recommissioning multiple DataNodes, a large number of YARN reducers, Flume/Sqoop data ingestion processes, or an HBase bulk data load, a lot of edit logs can be generated by the Active NameNode. Synchronizing the edits with each JournalNode, sending them to the Standby NameNode, and the Standby NameNode's checkpointing can be highly I/O hungry. While the Standby NameNode is checkpointing, it does not accept edits from the JournalNodes. The JournalNodes might then have trouble keeping in sync, which delays edits being relayed to the Standby NameNode. This in turn can result in network latencies/delays on the Standby NameNode.
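
One way to see whether the Standby NameNode is falling behind on checkpointing is to watch the FSNamesystem metrics. A sketch, assuming the same JMX endpoint and host placeholder as above (exact metric names can vary slightly between Hadoop versions):

# transactions accumulated since the last checkpoint / log roll, and the last checkpoint time
curl -s 'http://<standby-namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep -E 'TransactionsSinceLastCheckpoint|TransactionsSinceLastLogRoll|LastCheckpointTime'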

 

The "rpc_call_queue_len_avg" graphs for the NameNode can also be checked to see if it has any continuous spikes or curves. Ideally that should be 0, indicating that the handlers are sufficient. If not, the value of the 'dfs.datanode.handler.count', the 'dfs.namenode.handler.count' and the 'dfs.namenode.service.handler.count' properties can be bumped. The values of the 'dfs.namenode.handler.count' and the 'dfs.namenode.service.handler.count' both should be the ln (# of cluster nodes)*20 while the 'dfs.datanode.handler.count' is the tenth of these values.

 

Finally, there is one more special condition in which the Cloudera Manager health check emits this alert: when the NameNode health check interferes with the regular NameNode checkpointing.
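
If you suspect that, it may also be worth reviewing how often checkpoints are triggered on your cluster; a sketch of reading the relevant HDFS properties (the stock Hadoop defaults are 3600 seconds and 1,000,000 transactions, but your distribution may override them):

hdfs getconf -confKey dfs.namenode.checkpoint.period
hdfs getconf -confKey dfs.namenode.checkpoint.txns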


Cy Jervis, Manager, Community Program