<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Trying to build a new HA cluster setup in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228483#M190343</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/39225/kasim123.html" nodeid="39225"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The following error indicates that the FQDN may not be configured properly in your cluster.&lt;/P&gt;&lt;PRE&gt;java.net.UnknownHostException: master1&lt;/PRE&gt;&lt;P&gt;Can you please check whether the "&lt;STRONG&gt;hostname -f&lt;/STRONG&gt;" command actually returns the desired FQDN?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;root@master1:~#    hostname -f&lt;/PRE&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/set_the_hostname.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/set_the_hostname.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Every node in your cluster should be able to resolve every other node correctly by its FQDN.&lt;/P&gt;</description>
    <pubDate>Fri, 25 Aug 2017 19:27:44 GMT</pubDate>
    <dc:creator>jsensharma</dc:creator>
    <dc:date>2017-08-25T19:27:44Z</dc:date>
    <item>
      <title>Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228482#M190342</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;I have been trying to build an HA cluster, but could not set it up properly. Every time it fails while starting the ZKFC service. Not sure where I went wrong.&lt;/P&gt;&lt;P&gt;This is what shows up when I try to start the ZKFC controller after starting the JournalNode daemons.&lt;/P&gt;&lt;P&gt;17/08/25 04:48:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString= master1:2181,master2:2181:slave1:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@1417e278&lt;BR /&gt;17/08/25 04:48:51 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now&lt;BR /&gt;java.net.UnknownHostException:  master1&lt;BR /&gt;   at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)&lt;BR /&gt;   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:922)&lt;BR /&gt;   at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1316)&lt;BR /&gt;   at java.net.InetAddress.getAllByName0(InetAddress.java:1269)&lt;BR /&gt;   at java.net.InetAddress.getAllByName(InetAddress.java:1185)&lt;BR /&gt;   at java.net.InetAddress.getAllByName(InetAddress.java:1119)&lt;BR /&gt;   at org.apache.zookeeper.client.StaticHostProvider.&amp;lt;init&amp;gt;(StaticHostProvider.java:61)&lt;BR /&gt;   at org.apache.zookeeper.ZooKeeper.&amp;lt;init&amp;gt;(ZooKeeper.java:445)&lt;BR /&gt;   at org.apache.zookeeper.ZooKeeper.&amp;lt;init&amp;gt;(ZooKeeper.java:380)&lt;BR /&gt;   at org.apache.hadoop.ha.ActiveStandbyElector.getNewZooKeeper(ActiveStandbyElector.java:628)&lt;BR /&gt;   at org.apache.hadoop.ha.ActiveStandbyElector.createConnection(ActiveStandbyElector.java:767)&lt;BR /&gt;   at org.apache.hadoop.ha.ActiveStandbyElector.&amp;lt;init&amp;gt;(ActiveStandbyElector.java:227)&lt;BR /&gt;   at org.apache.hadoop.ha.ZKFailoverController.initZK(ZKFailoverController.java:350)&lt;BR /&gt;   at 
org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:191)&lt;BR /&gt;   at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)&lt;BR /&gt;   at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172)&lt;BR /&gt;   at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168)&lt;BR /&gt;   at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)&lt;BR /&gt;   at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)&lt;BR /&gt;   at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)&lt;BR /&gt;root@master1:~# &lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 18:59:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228482#M190342</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-25T18:59:28Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228483#M190343</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/39225/kasim123.html" nodeid="39225"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The following error indicates that the FQDN may not be configured properly in your cluster.&lt;/P&gt;&lt;PRE&gt;java.net.UnknownHostException: master1&lt;/PRE&gt;&lt;P&gt;Can you please check whether the "&lt;STRONG&gt;hostname -f&lt;/STRONG&gt;" command actually returns the desired FQDN?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;root@master1:~#    hostname -f&lt;/PRE&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/set_the_hostname.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/set_the_hostname.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Every node in your cluster should be able to resolve every other node correctly by its FQDN.&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 19:27:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228483#M190343</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-25T19:27:44Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228484#M190344</link>
      <description>&lt;P&gt;Hi Jay,&lt;/P&gt;&lt;P&gt;Thanks for the reply.&lt;/P&gt;&lt;P&gt;I replaced the hostname with the FQDN and ran the same command. It worked successfully. However, I ran into another problem. After formatting ZKFC, I ran the NameNode format command and hit another problem.&lt;/P&gt;&lt;P&gt;17/08/25 05:43:09 INFO common.Storage: Storage directory /home/kasim/journal/tmp/dfs/name has been successfully formatted.&lt;BR /&gt;17/08/25 05:43:09 WARN namenode.NameNode: Encountered exception during format: &lt;BR /&gt;org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 1 exceptions thrown:&lt;BR /&gt;10.104.10.16:8485: Cannot create directory /home/kasim/dfs/jn/ha-cluster/current&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:190)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:217)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:141)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)&lt;BR /&gt;   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)&lt;BR /&gt;   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)&lt;BR /&gt;   at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;   at javax.security.auth.Subject.doAs(Subject.java:415)&lt;BR 
/&gt;   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)&lt;BR /&gt;&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:214)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.FSEditLog.formatNonFileJournals(FSEditLog.java:392)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:162)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:992)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)&lt;BR /&gt;17/08/25 05:43:09 ERROR namenode.NameNode: Failed to start namenode.&lt;BR /&gt;org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 
1 exceptions thrown:&lt;BR /&gt;10.104.10.16:8485: Cannot create directory /home/kasim/dfs/jn/ha-cluster/current&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:190)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:217)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:141)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)&lt;BR /&gt;   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)&lt;BR /&gt;   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)&lt;BR /&gt;   at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;   at javax.security.auth.Subject.doAs(Subject.java:415)&lt;BR /&gt;   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)&lt;BR /&gt;   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)&lt;BR /&gt;&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)&lt;BR /&gt;   at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:214)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.FSEditLog.formatNonFileJournals(FSEditLog.java:392)&lt;BR /&gt;   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:162)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:992)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)&lt;BR /&gt;   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)&lt;BR /&gt;17/08/25 05:43:09 INFO util.ExitUtil: Exiting with status 1&lt;BR /&gt;17/08/25 05:43:09 INFO namenode.NameNode: SHUTDOWN_MSG: &lt;BR /&gt;/************************************************************&lt;BR /&gt;SHUTDOWN_MSG: Shutting down NameNode at odc-c-01.prc.eucalyptus-systems.com/10.104.10.1&lt;BR /&gt;************************************************************/&lt;/P&gt;&lt;P&gt;I checked the folder structure; it has already been created.&lt;/P&gt;&lt;P&gt;/home/kasim/dfs/jn/ha-cluster/current&lt;/P&gt;&lt;P&gt;[root@odc-c-01 name]# cd current/&lt;BR /&gt;[root@odc-c-01 current]# ls&lt;BR /&gt;seen_txid  VERSION&lt;BR /&gt;[root@odc-c-01 current]# pwd&lt;BR /&gt;/home/kasim/journal/tmp/dfs/name/current&lt;BR /&gt;[root@odc-c-01 current]# &lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 19:51:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228484#M190344</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-25T19:51:54Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228485#M190345</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/39225/kasim123.html" nodeid="39225"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The error is:&lt;/P&gt;&lt;PRE&gt;WARN namenode.NameNode: Encountered exception during format: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 1 exceptions thrown:
10.104.10.16:8485: Cannot create directory /home/kasim/dfs/jn/ha-cluster/current&lt;/PRE&gt;&lt;P&gt;Please check the permissions on the directory. The user who is running the NameNode format should be able to write to that directory.&lt;/P&gt;&lt;PRE&gt;# ls -ld  /home/kasim/dfs/
# ls -ld  /home/kasim/dfs/jn
# ls -ld  /home/kasim/dfs/jn/ha-cluster
# ls -ld  /home/kasim/dfs/jn/ha-cluster/current
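# optional extra check (the path below is illustrative): try an actual write, since
# on an NFS filer with root_squash the root user is mapped to "nobody" and writes
# can fail even when "ls -ld" shows drwxr-xr-x
# touch /home/kasim/dfs/jn/ha-cluster/current/write_test
# rm -f /home/kasim/dfs/jn/ha-cluster/current/write_test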
# ls -lart  /home/kasim/dfs/jn/ha-cluster/current&lt;/PRE&gt;</description>
      <pubDate>Fri, 25 Aug 2017 19:56:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228485#M190345</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-25T19:56:17Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228486#M190346</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/3418/jsensharma.html" nodeid="3418"&gt;@Jay SenSharma&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The folder is created on a filer. I am running as the root user. User "root" has all privileges on that folder.&lt;/P&gt;&lt;P&gt;[root@odc-c-01 kasim]# ls -ld  /home/kasim/dfs/&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 23 03:31 /home/kasim/dfs/&lt;BR /&gt;[root@odc-c-01 kasim]# ls -ld  /home/kasim/dfs/jn&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 25 05:43 /home/kasim/dfs/jn&lt;BR /&gt;[root@odc-c-01 kasim]# ls -ld  /home/kasim/dfs/jn/ha-cluster&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 25 05:59 /home/kasim/dfs/jn/ha-cluster&lt;BR /&gt;[root@odc-c-01 kasim]# ls -ld  /home/kasim/dfs/jn/ha-cluster/current&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 25 05:59 /home/kasim/dfs/jn/ha-cluster/current&lt;BR /&gt;[root@odc-c-01 kasim]# ls -lart  /home/kasim/dfs/jn/ha-cluster/current&lt;BR /&gt;total 16&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 25 05:59 ..&lt;BR /&gt;-rwxr-xr-x 1 nobody nobody  154 Aug 25 05:59 VERSION&lt;BR /&gt;drwxr-xr-x 2 nobody nobody 4096 Aug 25 05:59 paxos&lt;BR /&gt;drwxr-xr-x 3 nobody nobody 4096 Aug 25 05:59 .&lt;BR /&gt;[root@odc-c-01 kasim]# &lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 20:03:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228486#M190346</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-25T20:03:21Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228487#M190347</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you know how to use blueprints? I could help you with that, to deploy without any fuss!&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 20:15:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228487#M190347</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-25T20:15:00Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228488#M190348</link>
      <description>&lt;P&gt;Yes, please.&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 20:17:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228488#M190348</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-25T20:17:22Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228489#M190349</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Can you tell me the number of master nodes, datanodes, and edge nodes you want in your cluster?&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 21:11:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228489#M190349</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-25T21:11:13Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228490#M190350</link>
      <description>&lt;P&gt;I have a total of 6 machines in my setup: one for the active NameNode, one for the standby NameNode, one for the ResourceManager, and the remaining 3 machines for datanodes. My question is whether the dfs.journalnode.edits.dir location should be a remote shared directory, or whether it can be on the local filesystem with a uniform directory structure across all JournalNodes.&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 21:17:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228490#M190350</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-25T21:17:32Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228491#M190351</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;With 6  machines you could have&lt;/P&gt;&lt;PRE&gt;2 master nodes for HDFS HA
1 edge node with clients/Ambari server
3 data nodes&lt;/PRE&gt;&lt;P&gt;What version of HDP? Will you use MySQL for Hive/Ranger/Oozie?&lt;/P&gt;&lt;P&gt;Is that fine for you?&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 21:30:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228491#M190351</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-25T21:30:51Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228492#M190352</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;In cluster_config.json, change the following:&lt;/P&gt;&lt;P&gt;Stack_version, to match your version "2.x".&lt;/P&gt;&lt;P&gt;In hostmap.json, change the masterx, datanodex, and ambari-server entries to match the FQDNs of the machines.&lt;/P&gt;&lt;P&gt;Make sure you have internal repos to match the entries in repo.json and dputil-repo.json.&lt;/P&gt;&lt;P&gt;In Cli.txt, change the "ambari-server" entry to match the FQDN of your Ambari server, and launch them in that order.&lt;/P&gt;&lt;P&gt;Remember to rename the *.json.txt files to *.json, as HCC doesn't accept .json file uploads.&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 21:48:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228492#M190352</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-25T21:48:42Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228493#M190353</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/3418/jsensharma.html" nodeid="3418"&gt;Hi @Jay SenSharma,&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Finally, I was able to configure the HA cluster successfully. Failover happens when I trigger it using the "hdfs haadmin -failover" command. However, I noticed fsimage &amp;amp; edit log files on only one server.&lt;/P&gt;&lt;P&gt;[root@odc-c-01 current]# hdfs haadmin -getServiceState odc-c-01&lt;BR /&gt;standby&lt;BR /&gt;[root@odc-c-01 current]# hdfs haadmin -getServiceState odc-c-16&lt;BR /&gt;active&lt;BR /&gt;[root@odc-c-01 current]#&lt;/P&gt;&lt;P&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;&amp;lt;name&amp;gt;hadoop.tmp.dir&amp;lt;/name&amp;gt;&lt;BR /&gt;&amp;lt;value&amp;gt;/shared/kasim/journal/tmp&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;property&amp;gt;&lt;BR /&gt;  &amp;lt;name&amp;gt;dfs.journalnode.edits.dir&amp;lt;/name&amp;gt;&lt;BR /&gt;  &amp;lt;value&amp;gt;/shared/kasim/dfs/jn&amp;lt;/value&amp;gt;&lt;BR /&gt;&amp;lt;/property&amp;gt;&lt;BR /&gt;&amp;lt;/configuration&amp;gt;&lt;/P&gt;&lt;P&gt;[root@odc-c-01 current]# ls fsimage_0000000000000003698&lt;BR /&gt;fsimage_0000000000000003698&lt;BR /&gt;[root@odc-c-01 current]#&lt;BR /&gt;[root@odc-c-01 current]# pwd&lt;BR /&gt;/shared/kasim/journal/tmp/dfs/name/current&lt;BR /&gt;[root@odc-c-01 current]# &lt;/P&gt;&lt;P&gt;I still do not understand why it is writing fsimage &amp;amp; edit log information to only one server, and to a different directory than the one I specified for "dfs.journalnode.edits.dir". Could you shed some light on that part?&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 15:03:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228493#M190353</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-28T15:03:17Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228494#M190354</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Can you paste a screenshot of the below directories?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Ambari UI--&amp;gt;HDFS--&amp;gt;Configs--&amp;gt;NameNode directories&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;If you have ONLY one directory path, then that explains why you have only one copy.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 15:11:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228494#M190354</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-28T15:11:00Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228495#M190355</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1271/sheltong.html" nodeid="1271"&gt;@Geoffrey Shelton Okot&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I used the Apache Hadoop tar file to configure HA, not the Ambari GUI.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 15:15:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228495#M190355</guid>
      <dc:creator>kasim123</dc:creator>
      <dc:date>2017-08-28T15:15:53Z</dc:date>
    </item>
    <item>
      <title>Re: Trying to build a new HA cluster setup</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228496#M190356</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/39225/kasim123.html"&gt;@Kasim Shaik&lt;/A&gt;&lt;/P&gt;&lt;P&gt;It doesn't matter whether you used the tarball or the blueprint I sent you. After the installation, how are you managing your cluster? I guess with Ambari, no?&lt;/P&gt;&lt;P&gt;Just check how many directories are in &lt;STRONG&gt;Ambari UI--&amp;gt;HDFS--&amp;gt;Configs--&amp;gt;NameNode directories&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Aug 2017 15:26:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Trying-to-build-a-new-HA-cluster-setup/m-p/228496#M190356</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2017-08-28T15:26:47Z</dc:date>
    </item>
  </channel>
</rss>

