@ARUN
It's very high.
The Hadoop RPC server consists of a single RPC queue per port and multiple handler (worker) threads that dequeue and process requests. If the number of handlers is insufficient, the RPC queue starts building up and eventually overflows. You may start seeing task failures, then job failures, and eventually unhappy users.
It is recommended that the RPC handler count be set to 20 * log2(cluster size), with an upper limit of 200.
E.g., for a 64-node cluster you would set this to 20 * log2(64) = 120. The RPC handler count can be configured with the following setting in hdfs-site.xml:
<property>
  <name>dfs.namenode.handler.count</name>
  <value>120</value>
</property>
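For other cluster sizes, here is a quick Python sketch of the same heuristic (the function name and the floor for very small clusters are my own assumptions, not from the book):

import math

def recommended_handler_count(cluster_size):
    # 20 * log2(cluster size), capped at 200, per the heuristic above.
    if cluster_size < 2:
        return 20  # assumed floor; log2(1) would give 0 handlers
    return min(200, int(20 * math.log2(cluster_size)))

print(recommended_handler_count(64))    # 120
print(recommended_handler_count(1024))  # 200 (cap applies)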
This heuristic is from the excellent Hadoop Operations book. If you are using Ambari to manage your cluster, this setting can be changed via a slider in the Ambari Server Web UI.
Link. Hope this helps you.