HA Hadoop core
Labels: Apache Hadoop, Apache Spark
Created ‎12-05-2017 10:05 AM
Hi,
I'm trying to build an HA Hadoop client for a Spark job (needed for the Spark warehouse) that will fail over from NN1 to NN2 if NN1 goes down.
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ConfigFactoryTest {
    public static void main(String[] args) throws IOException {
        HdfsConfiguration conf = new HdfsConfiguration(true);
        conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
        conf.set("fs.default.name", conf.get("fs.defaultFS"));
        conf.set("dfs.nameservices", "hdfscluster");
        conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn1", "bigdata1.int.ch:8020");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn2", "bigdata5.int.ch:8020");
        conf.set("dfs.client.failover.proxy.provider.hdfscluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem fs = FileSystem.get(conf);
        // List the root directory in a loop to observe failover behavior.
        while (true) {
            FileStatus[] fsStatus = fs.listStatus(new Path("/"));
            for (int i = 0; i < fsStatus.length; i++) {
                System.out.println(fsStatus[i].getPath().toString());
            }
        }
    }
}
I followed the examples, but when I shut down NN1 while this client was running, I got an exception saying NN1 is no longer available, and the application shut down. Can someone point me in the right direction?
Thank you.
Created ‎12-05-2017 10:15 AM
The following properties should point to the nameservice instead of an individual NameNode host, and the "dfs.ha.namenodes.*" key must use the same nameservice name you declared in "dfs.nameservices".

Incorrect code:

conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");

It should be something like the following:

conf.set("fs.defaultFS", "hdfs://hdfscluster");
conf.set("dfs.ha.namenodes.hdfscluster", "nn1,nn2");
For more details please refer to: http://henning.kropponline.de/2016/11/27/sample-hdfs-ha-client/
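Putting the fix together, the client's configuration would look something like this. This is a sketch only: the nameservice name "hdfscluster" and the host names are taken from this thread, and actually running it requires the Hadoop HDFS client libraries on the classpath plus a live HA cluster.

```java
// HA client configuration sketch: every per-nameservice key uses the same
// logical name ("hdfscluster"), and fs.defaultFS points at that nameservice
// rather than at a single NameNode host.
HdfsConfiguration conf = new HdfsConfiguration(true);
conf.set("fs.defaultFS", "hdfs://hdfscluster");
conf.set("dfs.nameservices", "hdfscluster");
conf.set("dfs.ha.namenodes.hdfscluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.hdfscluster.nn1", "bigdata1.int.ch:8020");
conf.set("dfs.namenode.rpc-address.hdfscluster.nn2", "bigdata5.int.ch:8020");
conf.set("dfs.client.failover.proxy.provider.hdfscluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

// The filesystem now resolves hdfs://hdfscluster and the failover proxy
// provider retries against nn2 when nn1 becomes unreachable.
FileSystem fs = FileSystem.get(conf);
```

Note that the original code also mixed two names: "dfs.nameservices" was set to "hdfscluster" while "dfs.ha.namenodes.nameservice1" used "nameservice1", so the client never saw an HA namenode list for the declared nameservice.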
Created ‎12-05-2017 10:45 AM
Thank you very much sir. You solved my case 🙂
