We have an external tool connecting to HDFS. To connect, I have provided the Primary NameNode IP for now, so the IP address is literally hardcoded.
My questions:
1) Whenever I do maintenance (restarts, graceful stop and start), will the cluster return to its original state, i.e., will the Primary and Secondary NameNode services come back up on the same nodes they were on before the maintenance?
2) Does HDP have any built-in DNS service that I could use instead of hardcoding the IP/hostname?
Is your cluster an Ambari-deployed cluster, and is it an HA-enabled cluster?
In an HA environment, the Primary (Active) and Standby NameNodes will switch states. For instance, if you restart the Active NameNode, the Standby NameNode will fail over to become the new Active NameNode, and the one you just restarted will come back up in Standby state.
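You can check which state a given NameNode is in through its JMX servlet (`http://<nn-host>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus` on HDP's default HTTP port). As a minimal sketch, the JSON below is an illustrative sample of that response (`nn1.example.com` is a placeholder host), and the parsing would look like:

```python
import json

# Illustrative JMX response from the NameNodeStatus bean; a live
# NameNode returns the same shape at /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus
sample = '''
{
  "beans": [
    {
      "name": "Hadoop:service=NameNode,name=NameNodeStatus",
      "State": "active",
      "HostAndPort": "nn1.example.com:8020"
    }
  ]
}
'''

def namenode_state(jmx_json: str) -> str:
    """Return the NameNode HA state ("active" or "standby") from a JMX response."""
    beans = json.loads(jmx_json)["beans"]
    return beans[0]["State"]

print(namenode_state(sample))  # active
```

Running this against both NameNodes after a restart will show that the roles have swapped.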
If it's not an HA-enabled cluster, then all the services will remain running on the same hosts.
Also, regarding your DNS question:
If you are worried about HDFS availability, configure HA (if not already done); you can read more about it in the Ambari docs. Instead of using the HDFS NameNode host and port (hdfs://nnhost:nnport/..), you can then use the HDFS nameservice ID (hdfs://nameserviceID/..); this way your client does not need to know which node is Active or Standby. However, the client does need to know about the HDFS HA configuration.
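As a rough sketch of what that client-side HA configuration looks like (the nameservice ID `mycluster`, the NameNode IDs `nn1`/`nn2`, and the host names are placeholders), the client's `hdfs-site.xml` would contain something like:

```xml
<!-- Client-side HDFS HA settings; "mycluster", nn1/nn2 and hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1-host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2-host.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, your external tool can address the filesystem as `hdfs://mycluster/...` and the HDFS client library resolves the Active NameNode by itself, so no IP needs to be hardcoded.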