<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48699#M49485</link>
    <description>&lt;P&gt;Please ignore the permission error on supervisor.conf. The script that failed to update that file doesn't actually need to target that file. A future version of Cloudera Manager has updated the code to not log this spurious error. You may also want to revert the permissions on supervisor.conf so it is not world-readable.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What does the end of the stderr log say?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Did you check the role logs for your FC for a relevant error message?&lt;/P&gt;</description>
    <pubDate>Tue, 20 Dec 2016 22:24:49 GMT</pubDate>
    <dc:creator>Darren</dc:creator>
    <dc:date>2016-12-20T22:24:49Z</dc:date>
    <item>
      <title>Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48682#M49482</link>
      <description>&lt;P&gt;I have CDH 5.9.0-1 set up and it works well (done through Cloudera Manager). I tried to enable the Failover Controller (FC) in HDFS. After the installation I got this error (from the error log) on the NameNode (Primary):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;+ replace_conf_dir&lt;BR /&gt;+ find /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER -type f '!' -path '/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER#g' '{}' ';'&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Can't open /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/supervisor.conf: Permission denied.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;+ make_scripts_executable&lt;BR /&gt;+ find /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'&lt;BR /&gt;+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm not sure what the issue is, other than the permissions. This &lt;STRONG&gt;&lt;FONT color="#0000FF"&gt;supervisor.conf&lt;/FONT&gt;&lt;/STRONG&gt; is owned by root.&lt;/P&gt;&lt;P&gt;Each time I restart the FC, a new process number is introduced. So even if I change permissions to allow everyone to read, the new process is owned by root only.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I consulted this link but am not sure how to proceed.&lt;/P&gt;&lt;P&gt;&lt;A href="http://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-with-cloudera-scm-agent-and-supervisord/m-p/47045#M8572" target="_blank"&gt;http://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-with-cloudera-scm-agent-and-supervisord/m-p/47045#M8572&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any help will be appreciated.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2016 18:26:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48682#M49482</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-20T18:26:53Z</dc:date>
    </item>
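The `replace_conf_dir` step in the error log above is just a template substitution over the files in the per-process config directory; a minimal Python sketch of that behavior follows (the directory path is the one from this log, used purely for illustration). The `perl -pi` rewrite fails on supervisor.conf precisely because that file is root-owned with mode 0600 while the script runs as the role user.

```python
# Sketch of what the agent's replace_conf_dir step does: rewrite each
# eligible file, substituting the {{CMF_CONF_DIR}} placeholder with the
# per-process config directory (path below is from this thread's log).
CONF_DIR = "/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER"

def substitute_conf_dir(text: str) -> str:
    """Replace the {{CMF_CONF_DIR}} template token with the real path."""
    return text.replace("{{CMF_CONF_DIR}}", CONF_DIR)

print(substitute_conf_dir("log.dir={{CMF_CONF_DIR}}/logs"))
# -> log.dir=/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/logs
```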
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48685#M49483</link>
      <description>&lt;P&gt;supervisor.conf usually is read/write only for the owner (root), since it is used by the supervisor to start the process.&amp;nbsp;Have you configured the agent to run as a user other than root?&lt;/P&gt;&lt;P&gt;What are the file permissions listed in your FAILOVERCONTROLLER dir?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2016 18:56:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48685#M49483</guid>
      <dc:creator>bgooley</dc:creator>
      <dc:date>2016-12-20T18:56:17Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48698#M49484</link>
      <description>&lt;P&gt;bgooley,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Have you configured the agent to run as a user other than root?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When installation took place, we created a Hadoop account that we use (with sudo) to install Cloudera Manager.&lt;/P&gt;&lt;P&gt;So I'm not sure where to check it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The account used to install Cloudera Manager,&lt;/P&gt;&lt;P&gt;hduser, belongs to the &lt;STRONG&gt;hduser&lt;/STRONG&gt; and &lt;STRONG&gt;wheel&lt;/STRONG&gt; groups.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;SPAN&gt;What are the file permissions listed in your FAILOVERCONTROLLER dir?&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For this location&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/supervisor.conf&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Here are the permissions:&lt;/P&gt;&lt;P&gt;drwxr-xr-x root root &lt;STRONG&gt;run&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;drwxr-xr-x cloudera-scm cloudera-scm &lt;STRONG&gt;cloudera-scm-agent&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;drwxr-x--x root root &lt;STRONG&gt;process&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;drwxr-x--x hdfs&amp;nbsp;hdfs &lt;STRONG&gt;152-hdfs-FAILOVERCONTROLLER&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-rw------- root root &lt;STRONG&gt;supervisor.conf&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Other files in that directory:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-rw-r----- hdfs hdfs &amp;lt;Any files&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;:&lt;/P&gt;&lt;P&gt;I also checked the Cloudera SCM Agent on all nodes (NameNode [Primary, Secondary], DataNode, etc.) and it is active on all nodes.&lt;/P&gt;&lt;P&gt;# sudo service cloudera-scm-agent status&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2016 22:16:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48698#M49484</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-20T22:16:13Z</dc:date>
    </item>
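The path-by-path permission listing above can be reproduced mechanically: each directory component and the file itself can be inspected the way `namei -l` does, to spot where access gets cut off. A small Python sketch, demonstrated on a temporary file with supervisor.conf-style 0600 permissions rather than on a real cluster path:

```python
import os
import stat
import tempfile

def mode_string(path: str) -> str:
    """Return an ls-style permission string, e.g. '-rw-------', for path."""
    return stat.filemode(os.stat(path).st_mode)

def walk_modes(path: str) -> None:
    """Print the permission string for each component of an absolute path,
    similar to `namei -l`, to see where access gets cut off."""
    parts = []
    while path not in ("/", ""):
        parts.append(path)
        path = os.path.dirname(path)
    for p in reversed(parts):
        print(mode_string(p), p)

# Demo on a temp file locked down like supervisor.conf (owner-only 0600):
tmp = tempfile.NamedTemporaryFile(delete=False)
os.chmod(tmp.name, 0o600)
print(mode_string(tmp.name))  # -> -rw-------
```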
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48699#M49485</link>
      <description>&lt;P&gt;Please ignore the permission error on supervisor.conf. The script that failed to update that file doesn't actually need to target that file. A future version of Cloudera Manager has updated the code to not log this spurious error. You may also want to revert the permissions on supervisor.conf so it is not world-readable.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What does the end of the stderr log say?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Did you check the role logs for your FC for a relevant error message?&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2016 22:24:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48699#M49485</guid>
      <dc:creator>Darren</dc:creator>
      <dc:date>2016-12-20T22:24:49Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48700#M49486</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;:&lt;/P&gt;&lt;P&gt;The error log is from the NameNode (Primary) where the Failover Controller fails to start.&amp;nbsp;&lt;/P&gt;&lt;P&gt;We installed the Failover Controller on the NameNode (Primary) and NameNode (Secondary).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;What does the end of the stderr log say?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The end of the stderr log:&lt;/P&gt;&lt;PRE&gt;+ '[' refresh-datanode = zkfc ']'
+ '[' mkdir = zkfc ']'
+ '[' nfs3 = zkfc ']'
+ '[' namenode = zkfc -o secondarynamenode = zkfc -o datanode = zkfc ']'
+ exec /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER zkfc
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Did you check the role logs for your FC for a relevant error message?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I do not see anything in particular. You are probably familiar with it; it is new to me, so I could have missed something.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Here is the whole stderr log.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Tue Dec 20 14:41:57 PST 2016
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in '`seq 1 $COUNT`'
+ SCRIPT=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
+ . /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ export CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ export CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ export CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ export TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.7.0_67-cloudera
++ for candidate in '`ls -rvd ${candidate_regex}* 2&amp;gt;/dev/null`'
++ '[' -e /usr/java/jdk1.7.0_67-cloudera/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
++ JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.7.0_67-cloudera ']'
+ echo JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ HADOOP_PREFIX=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER
++ export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_NAMENODE_OPTS=
+ HADOOP_NAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.7.0_67-cloudera as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER as CONF_DIR'
+ echo 'using  as SECURE_USER'
+ echo 'using  as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER -type f '!' -path '/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' zkfc ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = zkfc ']'
+ '[' file-operation = zkfc ']'
+ '[' bootstrap = zkfc ']'
+ '[' failover = zkfc ']'
+ '[' transition-to-active = zkfc ']'
+ '[' initializeSharedEdits = zkfc ']'
+ '[' initialize-znode = zkfc ']'
+ '[' format-namenode = zkfc ']'
+ '[' monitor-decommission = zkfc ']'
+ '[' jnSyncWait = zkfc ']'
+ '[' nnRpcWait = zkfc ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = zkfc ']'
+ '[' finalize-upgrade = zkfc ']'
+ '[' rolling-upgrade-prepare = zkfc ']'
+ '[' rolling-upgrade-finalize = zkfc ']'
+ '[' nnDnLiveWait = zkfc ']'
+ '[' refresh-datanode = zkfc ']'
+ '[' mkdir = zkfc ']'
+ '[' nfs3 = zkfc ']'
+ '[' namenode = zkfc -o secondarynamenode = zkfc -o datanode = zkfc ']'
+ exec /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER zkfc
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 20 Dec 2016 23:41:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48700#M49486</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-20T23:41:23Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48701#M49487</link>
      <description>&lt;P&gt;See the error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can't do automatic failover to a SecondaryNameNode.&amp;nbsp;You would need to enable HA to get that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html" target="_blank"&gt;http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html&amp;nbsp;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 00:12:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48701#M49487</guid>
      <dc:creator>bgooley</dc:creator>
      <dc:date>2016-12-21T00:12:57Z</dc:date>
    </item>
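The condition bgooley points at is concrete: the zkfc process reads the effective hdfs-site.xml and refuses to start unless a nameservice is defined via `dfs.nameservices`. A hedged sketch of that check follows; the nameservice ID "nameservice1" and the NameNode IDs below are made-up example values, not taken from this cluster.

```python
import xml.etree.ElementTree as ET

# Illustrative hdfs-site.xml fragment for an HA-enabled cluster; the
# nameservice ID "nameservice1" and the NameNode IDs are example values.
HDFS_SITE = """
<configuration>
  <property><name>dfs.nameservices</name><value>nameservice1</value></property>
  <property><name>dfs.ha.namenodes.nameservice1</name><value>namenode1,namenode2</value></property>
</configuration>
"""

def ha_nameservice(xml_text: str):
    """Return the dfs.nameservices value, or None when HA is not configured,
    which is the state that triggers 'HA is not enabled for this namenode'."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "dfs.nameservices":
            return prop.findtext("value")
    return None

print(ha_nameservice(HDFS_SITE))  # -> nameservice1
```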
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48735#M49488</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We did the HDFS HA Enabling before, and it led to this troubleshooting.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We verified the following nodes and their current roles:&lt;/P&gt;&lt;P&gt;NameNode (supposed to be Active Node) -&amp;gt; Roles: FC, JN, NN etc.&lt;/P&gt;&lt;P&gt;NameNode Secondary (supposed to be Standby Node) -&amp;gt; Roles: FC, JN, NM etc.&lt;/P&gt;&lt;P&gt;Third Node -&amp;gt; JN&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And the JournalNode Edits Directory: /var/lib/jn (drwx------ hdfs hadoop)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In terms of requirements, it is OK, I believe.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When we check HDFS - Configuration - section nameservice,&lt;/P&gt;&lt;P&gt;here&amp;nbsp;is what we found:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;NameNode Nameservice: &amp;lt;No Value&amp;gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Mount Points: &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; /&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;SecondaryNameNode Nameservice: &amp;lt;No Value&amp;gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Questions:&lt;/P&gt;&lt;P&gt;1- Does it mean that HDFS HA was not set up correctly by the system (scripts during the HDFS HA enabling)?&lt;/P&gt;&lt;P&gt;2- Does it mean that we do not have any HDFS HA set up, since there is no nameservice?&lt;/P&gt;&lt;P&gt;3- By doing HDFS HA Enabling again, will it cause any duplication of the nameservice?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 21:38:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48735#M49488</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-21T21:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48737#M49489</link>
      <description>If you have a role called "SecondaryNameNode", then that's incorrect. This is a very confusing name for the role in Hadoop. The SecondaryNameNode is only used in a non-HA scenario. In HA, you have multiple (regular) NameNode roles defined.&lt;BR /&gt;&lt;BR /&gt;HDFS HA, when properly configured, will have a nameservice. There are many other steps, though.&lt;BR /&gt;&lt;BR /&gt;The HDFS HA setup process is particularly complicated, so if you can return to a normal non-HA state and then get the wizard to work, it's much better. What issue did you hit with the Enable NameNode HA wizard?&lt;BR /&gt;&lt;BR /&gt;If you have a trial or enterprise license, you can use the config history page to help identify what changes you made since you had a normal, non-HA state, which can help you revert your changes.</description>
      <pubDate>Wed, 21 Dec 2016 21:46:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48737#M49489</guid>
      <dc:creator>Darren</dc:creator>
      <dc:date>2016-12-21T21:46:15Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48741#M49490</link>
      <description>&lt;P&gt;Darren,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It sounds complicated with HDFS HA.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;If you have a role called "SecondaryNameNode", then that's incorrect. This is a very confusing name for the role in Hadoop. The SecondaryNameNode is only used in a non-HA scenario. In HA, you have multiple (regular) NameNode roles defined.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With the comment above, it rings a bell now: the Primary/Secondary concept is for non-HA and was in an older version. Thanks for reminding me of this concept.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We verified the current setup in &lt;STRONG&gt;HDFS&lt;/STRONG&gt; - &lt;STRONG&gt;Configuration&lt;/STRONG&gt; - &lt;STRONG&gt;History and Rollback&lt;/STRONG&gt;, and we did not see any changes related to HDFS HA. The only thing set is the JournalNode Edits Directory: /var/lib/jn&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We think it is OK now to run the &lt;STRONG&gt;HDFS HA Enabling&lt;/STRONG&gt; option.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We will follow the link to enable HDFS HA using Cloudera Manager:&amp;nbsp;&lt;A href="http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html" target="_blank"&gt;http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html&amp;nbsp;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 22:23:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48741#M49490</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-21T22:23:25Z</dc:date>
    </item>
    <item>
      <title>Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48763#M49491</link>
      <description>&lt;P&gt;All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The resolution to this error is to run the HDFS HA Enabling process.&lt;/P&gt;&lt;P&gt;Thanks everyone for helping with it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1- You need to pay attention to whether a Failover Controller (FC) already exists on the nodes that you assign to be active and standby for HDFS HA.&lt;/P&gt;&lt;P&gt;Basically, remove the FC from these nodes before doing the HDFS HA Enabling.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2- Have your JournalNode Edits Directory set up.&lt;/P&gt;&lt;P&gt;Usually it is in /var/lib/jn&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Once HDFS HA is enabled, you can verify it from Cloudera Manager:&lt;/P&gt;&lt;P&gt;- HDFS - Instances - Federation and High Availability &amp;lt;- click on it to see the setup&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;or&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- HDFS - Configuration - &amp;lt;do a search on nameservice&amp;gt;&lt;/P&gt;&lt;P&gt;In the NameNode Nameservice field, you should see all nodes that you assigned in HDFS HA.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 22 Dec 2016 22:25:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Fail-to-start-HDFS-FAILOVERCONTROLLER-FC-supervisor-conf/m-p/48763#M49491</guid>
      <dc:creator>spin0</dc:creator>
      <dc:date>2016-12-22T22:25:42Z</dc:date>
    </item>
  </channel>
</rss>

