Created 09-14-2017 06:23 AM
Hi,
After successfully enabling HA on an Ambari cluster, which address should we connect to in order to always reach the active NameNode?
For example, originally we connected to 'namenode01:8020' to drop files, but now there are namenode01 and namenode02.
Do we need to set up a keepalived VIP that checks '-getServiceState' every few seconds, or something along those lines?
We currently use the following client for connections: https://github.com/colinmarc/hdfs
Created 09-14-2017 06:30 AM
Usually, when HA is enabled, clients connect using the nameservice defined by the "dfs.nameservices" property in "hdfs-site.xml"; the same nameservice appears as the value of "fs.defaultFS" in "core-site.xml". The HDFS client library then handles failover to the active NameNode on its own.
You can also use the Ambari Files View to upload files. https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-views/content/ch_using_files_view...
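For reference, a minimal sketch of the relevant properties, assuming a nameservice named "mycluster" (your cluster's actual nameservice ID will likely differ). Clients connect to hdfs://mycluster instead of a specific NameNode host:

<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>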
Created 09-14-2017 06:37 AM
I'm not sure about the client that you mentioned in the GitHub link, but in the case of a Java client, it determines the active NameNode from the nameservice name, something like the following:
Configuration conf = new Configuration(false);
conf.set("fs.defaultFS", "hdfs://nameservice1");
conf.set("fs.default.name", conf.get("fs.defaultFS"));
conf.set("dfs.nameservices", "nameservice1");
conf.set("dfs.ha.namenodes.nameservice1", "namenode1,namenode2");
conf.set("dfs.namenode.rpc-address.nameservice1.namenode1", "hadoopnamenode01:8020");
conf.set("dfs.namenode.rpc-address.nameservice1.namenode2", "hadoopnamenode02:8020");
conf.set("dfs.client.failover.proxy.provider.nameservice1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
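Building on the snippet above, here is a minimal self-contained sketch of using that Configuration to drop a file into HDFS; the local and HDFS paths are hypothetical, purely for illustration:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaUploadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false);
        // HA properties from the snippet above
        conf.set("fs.defaultFS", "hdfs://nameservice1");
        conf.set("dfs.nameservices", "nameservice1");
        conf.set("dfs.ha.namenodes.nameservice1", "namenode1,namenode2");
        conf.set("dfs.namenode.rpc-address.nameservice1.namenode1", "hadoopnamenode01:8020");
        conf.set("dfs.namenode.rpc-address.nameservice1.namenode2", "hadoopnamenode02:8020");
        conf.set("dfs.client.failover.proxy.provider.nameservice1",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // The URI names the nameservice, not a specific NameNode; the
        // failover proxy provider routes calls to whichever NameNode is active.
        FileSystem fs = FileSystem.get(new URI("hdfs://nameservice1"), conf);

        // Hypothetical paths, just to illustrate an upload ("dropping a file").
        fs.copyFromLocalFile(new Path("/tmp/example.txt"),
                new Path("/user/hdfs/example.txt"));
        fs.close();
    }
}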
Created 09-14-2017 07:19 AM
Thank you for your detailed answer!