Created 12-07-2015 05:34 AM
Is it supported to modify an existing cluster's NameNode logical name (dfs.nameservices) in an HA configuration?
I was able to get dfs.nameservices renamed using the following steps, but I want to confirm whether this could cause issues I'm not aware of at this time. The steps involve re-creating the NN HA znode, re-initializing the shared edits for the JournalNodes, and performing the bootstrap again for the standby NameNode:
(run the commands as the hdfs user)
1) Turn on safemode:
$ hdfs dfsadmin -safemode enter
Safe mode is ON in rm-hdp23n1.novalocal/172.25.17.33:8020
Safe mode is ON in rm-hdp23n3.novalocal/172.25.16.71:8020
2) Perform a NameNode checkpoint:
$ hdfs dfsadmin -saveNamespace
Save namespace successful for rm-hdp23n1.novalocal/172.25.17.33:8020
Save namespace successful for rm-hdp23n3.novalocal/172.25.16.71:8020
3) From Ambari, stop all HDFS services.
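If you prefer to script this step instead of using the Ambari web UI, the service can also be stopped through the Ambari REST API. This is only a hedged sketch: the Ambari host/port, admin/admin credentials, and the cluster name "mycluster" are placeholders, not values from this cluster.

# Setting the HDFS service state to INSTALLED tells Ambari to stop it;
# host, credentials and cluster name below are placeholders.
$ curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
    -d '{"RequestInfo":{"context":"Stop HDFS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
    http://ambari-host:8080/api/v1/clusters/mycluster/services/HDFS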
4) Make the appropriate changes to the properties.
Update "fs.defaultFS" in core-site.xml (see the entry sketched after the list below), and then modify every property in hdfs-site.xml that references the HA nameservice. For instance, in my cluster I changed the following properties or their values:
"dfs.client.failover.proxy.provider.cluster456" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.namenodes.cluster456" : "nn1,nn2",
"dfs.namenode.http-address.cluster456.nn1" : "rm-hdp23n1.novalocal:50070",
"dfs.namenode.http-address.cluster456.nn2" : "rm-hdp23n3.novalocal:50070",
"dfs.namenode.https-address.cluster456.nn1" : "rm-hdp23n1.novalocal:50470",
"dfs.namenode.https-address.cluster456.nn2" : "rm-hdp23n3.novalocal:50470",
"dfs.namenode.rpc-address.cluster456.nn1" : "rm-hdp23n1.novalocal:8020",
"dfs.namenode.rpc-address.cluster456.nn2" : "rm-hdp23n3.novalocal:8020",
"dfs.namenode.shared.edits.dir" : "qjournal://rm-hdp23n2.novalocal:8485;rm-hdp23n3.novalocal:8485;rm-hdp23n1.novalocal:8485/cluster456",
"dfs.nameservices" : "cluster456",
5) Then start only the JournalNodes (with ZKFC and both NameNodes still stopped; a sketch for starting them manually follows this step) and re-initialize the shared edits:
$ hdfs namenode -initializeSharedEdits -force
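If the JournalNodes have to be started outside of Ambari, a minimal sketch is below; the hadoop-daemon.sh path and --config directory are assumptions for an HDP-style layout and will differ between installs. Run it on every JournalNode host:

# Script path and config dir are placeholders; adjust for your distribution.
$ sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
    --config /etc/hadoop/conf start journalnode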
6) Initialize the ZK znode for NameNode HA:
$ hdfs zkfc -formatZK -force
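To confirm the znode for the new nameservice was created, you can check ZooKeeper directly. A hedged sketch, assuming the default /hadoop-ha parent znode and an HDP-style zkCli.sh path; note the old nameservice's znode may still be listed next to cluster456 and can be cleaned up manually:

# zkCli.sh path is an assumption; point -server at one of your ZooKeeper hosts.
$ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 ls /hadoop-ha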
7) Then start the NameNodes and ZKFC on both nodes.
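Since the summary above mentions bootstrapping the standby NameNode again, a hedged sketch of that plus a post-restart sanity check is below; nn1/nn2 are the dfs.ha.namenodes.cluster456 values from step 4, and the bootstrap has to run on the standby host once the active NameNode is up:

# On the standby NameNode host, after the active NameNode has started:
$ hdfs namenode -bootstrapStandby -force

# Once both NameNodes and ZKFCs are running, verify the HA roles and
# the safemode status:
$ hdfs haadmin -getServiceState nn1
$ hdfs haadmin -getServiceState nn2
$ hdfs dfsadmin -safemode get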
Created 12-12-2015 05:05 PM
Agreed! We might also need to run the Hive metatool to update the existing nameservice URIs stored in the Hive metastore.
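A hedged sketch of what that could look like; "oldcluster" stands in for whatever dfs.nameservices was called before the rename, since the original post doesn't show the previous name:

# List the filesystem roots currently recorded in the Hive metastore:
$ hive --service metatool -listFSRoot

# Rewrite the old nameservice URI to the new one (new location first, old second):
$ hive --service metatool -updateLocation hdfs://cluster456 hdfs://oldcluster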