Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

HA Hadoop core

Expert Contributor

Hi,

I'm trying to build an HA Hadoop client for a Spark job (needed for the Spark warehouse) that will fail over from NN1 to NN2 if NN1 goes down.

import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ConfigFactoryTest {
    public static void main(String[] args) throws IOException {
        HdfsConfiguration conf = new HdfsConfiguration(true);
        conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
        conf.set("fs.default.name", conf.get("fs.defaultFS"));
        conf.set("dfs.nameservices", "hdfscluster");
        conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn1", "bigdata1.int.ch:8020");
        conf.set("dfs.namenode.rpc-address.hdfscluster.nn2", "bigdata5.int.ch:8020");
        conf.set("dfs.client.failover.proxy.provider.hdfscluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem fs = FileSystem.get(conf);
        while (true) {
            FileStatus[] fsStatus = fs.listStatus(new Path("/"));
            for (FileStatus status : fsStatus) {
                System.out.println(status.getPath().toString());
            }
        }
    }
}

I followed some examples, but when I shut down NN1 while this client was running, I got an exception saying NN1 isn't available anymore and the application shut down. Can someone point me in the right direction?

Thank you.

1 ACCEPTED SOLUTION

Master Mentor

@Ivan Majnaric



The following properties should point to the nameservice instead of an individual NameNode host. Also note that the suffix of dfs.ha.namenodes must match the nameservice ID you declared in dfs.nameservices.

Incorrect code:

conf.set("fs.defaultFS", "hdfs://bigdata5.int.ch:8020");
conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");


They should be something like the following:

conf.set("fs.defaultFS", "hdfs://hdfscluster");
conf.set("dfs.ha.namenodes.hdfscluster", "nn1,nn2");




For more details please refer to: http://henning.kropponline.de/2016/11/27/sample-hdfs-ha-client/
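Putting the corrections together, the key rule is that every HA property name carries the nameservice ID as a suffix, and that suffix must match dfs.nameservices exactly. The following is a minimal, hedged sketch (plain Java, no Hadoop dependency, using the hostnames from the question) that builds the full corrected property set and checks that every listed NameNode has an rpc-address; the class name and the consistency check are hypothetical, added for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HaConfigSketch {
    public static void main(String[] args) {
        // The nameservice ID; every HA property key below is derived from it.
        String nameservice = "hdfscluster";

        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("fs.defaultFS", "hdfs://" + nameservice);       // the nameservice, not a single NN host
        conf.put("dfs.nameservices", nameservice);
        conf.put("dfs.ha.namenodes." + nameservice, "nn1,nn2");  // suffix must equal the nameservice ID
        conf.put("dfs.namenode.rpc-address." + nameservice + ".nn1", "bigdata1.int.ch:8020");
        conf.put("dfs.namenode.rpc-address." + nameservice + ".nn2", "bigdata5.int.ch:8020");
        conf.put("dfs.client.failover.proxy.provider." + nameservice,
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // Consistency check: each NameNode listed for the nameservice needs an rpc-address entry.
        for (String nn : conf.get("dfs.ha.namenodes." + nameservice).split(",")) {
            String key = "dfs.namenode.rpc-address." + nameservice + "." + nn;
            if (!conf.containsKey(key)) {
                throw new IllegalStateException("Missing " + key);
            }
        }

        conf.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

In the real client these puts become conf.set(...) calls on an HdfsConfiguration; with fs.defaultFS pointing at the nameservice, the ConfiguredFailoverProxyProvider can retry against nn2 when nn1 goes down.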


2 REPLIES 2

(duplicate of the accepted solution above)

Expert Contributor

Thank you very much sir. You solved my case 🙂