
Question on Namenode services failover

Expert Contributor

We have a few applications connecting to our cluster through the NameNode.

Currently they connect to the active NameNode, but when a failover happens they have to modify their connection string manually.

Is there a better way to handle this situation without any manual intervention?


Super Mentor

@Kumar Veerappan

In your Hadoop client code you should use the nameservice ID instead of the individual NameNode addresses (NN1 and NN2):


// Point the client at the logical nameservice rather than a specific NameNode
conf.set("fs.defaultFS", "hdfs://" + nameserviceId);
conf.set("dfs.nameservices", nameserviceId);
conf.set("dfs.ha.namenodes." + nameserviceId, "nn1,nn2");
conf.set("dfs.namenode.rpc-address." + nameserviceId + ".nn1", getProperty("nn1.rpc-address"));
conf.set("dfs.namenode.rpc-address." + nameserviceId + ".nn2", getProperty("nn2.rpc-address"));
conf.set("dfs.namenode.http-address." + nameserviceId + ".nn1", getProperty("nn1.http-address"));
conf.set("dfs.namenode.http-address." + nameserviceId + ".nn2", getProperty("nn2.http-address"));
// Without a failover proxy provider the client cannot discover which NameNode is active
conf.set("dfs.client.failover.proxy.provider." + nameserviceId,
    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
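
If the applications cannot change their Java code, the same settings can go in the client-side configuration files, so that any HDFS client on that machine resolves the nameservice automatically. A minimal sketch, assuming a nameservice ID of `mycluster` and hypothetical hostnames `nn1.example.com` / `nn2.example.com` (note that `fs.defaultFS` normally lives in core-site.xml, the rest in hdfs-site.xml):

```xml
<!-- Sketch of client-side HA configuration; nameservice ID and hostnames are placeholders -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
```

With this in place, applications address the cluster as `hdfs://mycluster/path`, and failover between the two NameNodes is transparent to them.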



Super Mentor


dfs.client.failover.proxy.provider.[nameservice ID]: the Java class that HDFS clients use to contact the active NameNode.

This property configures the name of the Java class the DFS client uses to determine which NameNode is currently active, and therefore which NameNode is currently serving client requests.
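
For the standard HA setup, the provider shipped with Hadoop is `ConfiguredFailoverProxyProvider`, which tries the configured NameNodes until it reaches the active one. A sketch of the property, assuming the nameservice ID is `mycluster`:

```xml
<!-- client-side hdfs-site.xml; "mycluster" is a placeholder nameservice ID -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```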
