
HA Hadoop core

Solved Go to solution

Contributor

Hi

I'm trying to build an HA-aware HDFS client for a Spark job (needed for the Spark warehouse) that will fail over from NN1 to NN2 if NN1 goes down.

import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ConfigFactoryTest {
    public static void main(String[] args) throws IOException {
        HdfsConfiguration conf = new HdfsConfiguration(true);
        conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
        conf.set("fs.default.name", conf.get("fs.defaultFS"));
        conf.set("dfs.nameservices", "hdfscluster");
        conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn1", "bigdata1.int.ch:8020");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn2", "bigdata5.int.ch:8020");
        conf.set("dfs.client.failover.proxy.provider.hdfscluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem fs = FileSystem.get(conf);
        // List the root directory in a loop so a failover can be observed.
        while (true) {
            FileStatus[] fsStatus = fs.listStatus(new Path("/"));
            for (int i = 0; i < fsStatus.length; i++) {
                System.out.println(fsStatus[i].getPath().toString());
            }
        }
    }
}

I followed the examples, but when I shut down NN1 while this client was running, I got an exception that NN1 isn't available anymore and the application shut down. Can someone point me in the right direction?
Thank you

1 ACCEPTED SOLUTION

Re: HA Hadoop core

Super Mentor

@Ivan Majnaric



The following properties should point to the nameservice instead of an individual NameNode host.

Incorrect code:

conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");


It should be something like the following:

conf.set("fs.defaultFS", "hdfs://hdfscluster");
conf.set("dfs.ha.namenodes.hdfscluster", "nn1,nn2");




For more details please refer to: http://henning.kropponline.de/2016/11/27/sample-hdfs-ha-client/
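Putting the two corrections together, a minimal end-to-end client might look like the sketch below. The class name `HaClientSketch` is hypothetical; the nameservice name `hdfscluster` and the hostnames are taken from the question. It assumes the hadoop-hdfs client libraries are on the classpath and a live HA cluster is reachable, so treat it as a configuration sketch rather than a tested implementation.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Hypothetical class name for illustration only.
public class HaClientSketch {
    public static void main(String[] args) throws IOException {
        HdfsConfiguration conf = new HdfsConfiguration(true);
        // Point the default FS at the nameservice, not an individual NameNode.
        conf.set("fs.defaultFS", "hdfs://hdfscluster");
        conf.set("dfs.nameservices", "hdfscluster");
        // The property suffix must match the nameservice name.
        conf.set("dfs.ha.namenodes.hdfscluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn1", "bigdata1.int.ch:8020");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn2", "bigdata5.int.ch:8020");
        conf.set("dfs.client.failover.proxy.provider.hdfscluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // With this config the client resolves hdfs://hdfscluster through the
        // failover proxy provider, which retries against nn2 if nn1 is down.
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}
```

Note that the deprecated `fs.default.name` setting from the original snippet is dropped here; `fs.defaultFS` supersedes it.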



Re: HA Hadoop core

Contributor

Thank you very much sir. You solved my case :)