Introduction

This article continues where Part 1 left off. It describes a few more configuration settings that can be enabled in CDP, HDP, CDH, or Apache Hadoop clusters to help the NameNode scale better.

Audience

This article is for Hadoop administrators who are familiar with HDFS and its components. If you are using Ambari or Cloudera Manager, you should know how to manage services and configurations. It is assumed that you have read Part 1 of this article.

RPC Handler Count

The Hadoop RPC server consists of a single RPC queue per port and multiple handler (worker) threads that dequeue and process requests. If the number of handlers is insufficient, the RPC queue builds up and eventually overflows; when that happens you may start seeing task failures, followed by job failures and unhappy users.
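
The capacity of that queue is tied to the handler count: each handler thread is allowed a fixed number of queued calls, controlled by ipc.server.handler.queue.size (default 100), so adding handlers also deepens the queue. If you ever need to adjust the per-handler queue depth, a minimal sketch in core-site.xml would look like the following (the value shown is just the shipped default, included for illustration only).

  <property>
    <!-- Maximum number of queued calls allowed per RPC handler thread;
         100 is the Hadoop default, shown here only for illustration. -->
    <name>ipc.server.handler.queue.size</name>
    <value>100</value>
  </property>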

It is recommended that the RPC handler count be set to 20 * log2(Cluster Size) with an upper limit of 200.

For example, for a 250-node cluster you should initialize this to 20 * log2(250) ≈ 160. The RPC handler count can be configured with the following setting in hdfs-site.xml.

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>160</value>
  </property>

This heuristic is from the excellent Hadoop Operations book. If you are using Ambari to manage your cluster, this setting can be changed via a slider in the Ambari web UI. If you are using Cloudera Manager, search for the property name "dfs.namenode.handler.count" on the HDFS configuration page and adjust the value.

Service RPC Handler Count

Pre-requisite: If you have not enabled the Service RPC port already, please do so first, as described in Part 1 of this article.
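
For reference, a minimal sketch of that prerequisite on a non-HA cluster looks like the following in hdfs-site.xml (the hostname matches the examples below, and port 8040 is an arbitrary unused port chosen for illustration; HA clusters use the per-NameNode variants of the same property).

  <property>
    <!-- Dedicated service RPC endpoint for DataNode and other non-client
         requests; the host and port here are placeholders. -->
    <name>dfs.namenode.servicerpc-address</name>
    <value>mynamenode.example.com:8040</value>
  </property>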


There is no precise calculation for the Service RPC handler count; however, the default value of 10 is too low for most production clusters. In busy clusters we have often seen it initialized to 50% of dfs.namenode.handler.count, and this value works well in practice.

For example, for the same 250-node cluster you would set the service RPC handler count to 80 (50% of 160) with the following setting in hdfs-site.xml.

  <property>
    <name>dfs.namenode.service.handler.count</name>
    <value>80</value>
  </property>

DataNode Lifeline Protocol

The Lifeline protocol is a feature recently added by the Apache Hadoop Community (see Apache HDFS Jira HDFS-9239). It introduces a new lightweight RPC message that is used by the DataNodes to report their health to the NameNode. It was developed in response to problems seen in some overloaded clusters where the NameNode was too busy to process heartbeats and spuriously marked DataNodes as dead.

For a non-HA cluster, the feature can be enabled with the following setting in hdfs-site.xml (replace mynamenode.example.com with the hostname or IP address of your NameNode; a different port number may be used as well).

  <property>
    <name>dfs.namenode.lifeline.rpc-address</name>
    <value>mynamenode.example.com:8050</value>
  </property>

For an HA cluster, the lifeline RPC port can be enabled with settings like the following, replacing mycluster, nn1 and nn2 appropriately.

  <property>
    <name>dfs.namenode.lifeline.rpc-address.mycluster.nn1</name>
    <value>mynamenode1.example.com:8050</value>
  </property>

  <property>
    <name>dfs.namenode.lifeline.rpc-address.mycluster.nn2</name>
    <value>mynamenode2.example.com:8050</value>
  </property>

Additional lifeline protocol settings are documented in the HDFS-9239 release note, but they can be left at their default values for most clusters.
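
As one example, the release note documents dfs.namenode.lifeline.handler.ratio, the fraction of the NameNode's RPC handler threads dedicated to servicing lifeline messages. A sketch showing its documented default of 0.10, which you would only raise if lifeline messages are still being delayed behind regular RPCs.

  <property>
    <!-- Fraction of dfs.namenode.handler.count dedicated to lifeline RPCs;
         0.10 is the documented default. -->
    <name>dfs.namenode.lifeline.handler.ratio</name>
    <value>0.10</value>
  </property>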


Note: Changing the lifeline protocol settings requires a restart of the NameNodes, DataNodes, and ZooKeeper Failover Controllers to take full effect. If you have NameNode HA set up, you can restart the NameNodes one at a time, followed by a rolling restart of the remaining components, to avoid cluster downtime.

Conclusion

That is it for Part 2. In Part 3 of this article, we will explore how to enable two new HDFS features to help your NameNode scale better.
