
Ambari namenode HA connect

New Contributor

Hi,

After successfully enabling HA on an Ambari cluster, what address should we connect to in order to always use the active namenode?

For example, originally we connected to 'namenode01:8020' to drop files, but now there are namenode01 and namenode02.

Do we need to create a keepalived VIP that checks '-getServiceState' every few seconds, or something similar?

We currently use the following library for connections: https://github.com/colinmarc/hdfs

1 ACCEPTED SOLUTION

Master Mentor

@Stefan Warmerdam

When HA is enabled, clients usually connect using the nameservice ID: the value you see for the property "dfs.nameservices" in hdfs-site.xml, which "fs.defaultFS" in core-site.xml points to as "hdfs://&lt;nameservice&gt;".
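
For illustration, a minimal sketch of what those properties might look like once HA is enabled (the nameservice ID "mycluster" is a placeholder; check the actual values in your cluster's configs):

    <!-- core-site.xml: clients address the nameservice, not a single host -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>

    <!-- hdfs-site.xml: the nameservice ID that fs.defaultFS refers to -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>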

You can also use the Ambari Files View to upload files: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-views/content/ch_using_files_view...


2 REPLIES

Master Mentor

@Stefan Warmerdam

I'm not sure about the client you mentioned in the GitHub link, but a Java client determines the active namenode via the nameservice name, with a configuration like the following:

    import org.apache.hadoop.conf.Configuration;

    // Describe the HA nameservice and both NameNodes to the client
    Configuration conf = new Configuration(false);
    conf.set("fs.defaultFS", "hdfs://nameservice1");
    conf.set("fs.default.name", conf.get("fs.defaultFS")); // deprecated alias of fs.defaultFS
    conf.set("dfs.nameservices", "nameservice1");
    conf.set("dfs.ha.namenodes.nameservice1", "namenode1,namenode2");
    conf.set("dfs.namenode.rpc-address.nameservice1.namenode1", "hadoopnamenode01:8020");
    conf.set("dfs.namenode.rpc-address.nameservice1.namenode2", "hadoopnamenode02:8020");
    // Proxy provider that routes each call to whichever NameNode is currently active
    conf.set("dfs.client.failover.proxy.provider.nameservice1",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");


New Contributor

Thank you for your detailed answer!