
Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied



Explorer

I have CDH 5.9.0-1 set up and it works well (done through Cloudera Manager). I tried to enable the Failover Controller (FC) in HDFS. After the installation I got an error (error log) on the NameNode (Primary) with:

 

+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER -type f '!' -path '/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'

 

I'm not sure what the issue is, other than the permissions. This supervisor.conf is owned by root.

Each time I restart the FC, a new process number is introduced, so even if I change the permissions to allow everyone to read, the new supervisor.conf is again owned by root only.
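For reference, here is a small sketch for locating the newest process directory instead of hard-coding the process number (it assumes the standard Cloudera Manager agent run path shown in the log above):

```shell
# Each (re)start creates a new numbered process directory under the agent's
# run path; sort by modification time and take the newest FC one.
latest=$(ls -dt /run/cloudera-scm-agent/process/*-hdfs-FAILOVERCONTROLLER 2>/dev/null | head -n 1)
echo "${latest:-no FAILOVERCONTROLLER process dir found}"
```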

 

I consulted this link but am not sure how to proceed.

http://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-with-cloudera-scm-agent-and-s...

 

Any help will be appreciated.

 

 

1 ACCEPTED SOLUTION


Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Explorer

All,

 

The resolution to this error is to complete the HDFS HA enabling process.

Thanks, everyone, for the help.

 

1- Pay attention to any Failover Controller (FC) roles that already exist on the nodes you assign as active and standby for HDFS HA.

Basically, remove the FC from those nodes before enabling HDFS HA.

 

2- Have your JournalNode Edits Directory set up.

Usually it is /var/lib/jn.

 

Once HDFS HA is enabled, you can verify it from Cloudera Manager:

- HDFS - Instances - Federation and High Availability <- click it to see the setup

 

or

 

- HDFS - Configuration - <search for "nameservice">

In the NameNode Nameservice field, you should see all nodes that you assigned in HDFS HA.
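As a command-line cross-check, a sketch like this (assuming the hdfs CLI is available on a cluster node) reports whether a nameservice is configured; dfs.nameservices stays empty until HA is enabled:

```shell
# Query the client configuration for the HA nameservice, if any.
ns=$(hdfs getconf -confKey dfs.nameservices 2>/dev/null || true)
if [ -n "$ns" ]; then
  echo "nameservice: $ns"
  # List the NameNode IDs that participate in HA for this nameservice.
  hdfs getconf -confKey "dfs.ha.namenodes.$ns"
else
  echo "no nameservice configured - HA is not enabled"
fi
```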

 

 

 


9 REPLIES

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Super Guru

supervisor.conf usually is read/write only for the owner (root), since it is used by the supervisor to start the process. Have you configured the agent to run as a user other than root?

What are the file permissions listed in your FAILOVERCONTROLLER dir?
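One way to gather that is a sketch like the following (152 is the process number from the log above and changes on each restart; run it with sudo if the directories are not world-readable):

```shell
# Show ownership and permissions for the FC process directory and each
# path component leading down to supervisor.conf.
d=/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER
if [ -e "$d" ]; then
  namei -l "$d/supervisor.conf"   # per-component owner/group/mode
  ls -l "$d"
else
  echo "process dir not found: $d"
fi
```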

 

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Explorer

bgoogley,

 

Have you configured the agent to run as a user other than root?

 

When the installation took place, we created a Hadoop account that we used (with sudo) to install Cloudera Manager.

So I'm not sure where to check that.

 

The account used to install Cloudera Manager, hduser, belongs to the hduser and wheel groups.

 

 

What are the file permissions listed in your FAILOVERCONTROLLER dir?

 

For this location

/run/cloudera-scm-agent/process/152-hdfs-FAILOVERCONTROLLER/supervisor.conf

Here are the permissions:

drwxr-xr-x root root run

drwxr-xr-x cloudera-scm cloudera-scm cloudera-scm-agent

drwxr-x--x root root process

drwxr-x--x hdfs hdfs 152-hdfs-FAILOVERCONTROLLER

-rw------- root root supervisor.conf

 

other files in that directory

-rw-r----- hdfs hdfs <Any files>

 

 

Note:

I also checked the Cloudera SCM Agent on all nodes (NameNode [Primary, Secondary], DataNodes, etc.) and it is active on all of them:

$ sudo service cloudera-scm-agent status

 

 

 

 

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Please ignore the permission error on supervisor.conf. The script that failed to update that file doesn't actually need to touch it; a future version of Cloudera Manager updates the code so it no longer logs this spurious error. You may also want to revert your permission changes so that supervisor.conf is not world-readable.

 

What does the end of the stderr log say?

 

Did you check the role logs for your FC for a relevant error message?
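If it helps, on a CM-managed host the FC role log typically lands under /var/log/hadoop-hdfs; the file name pattern below follows the usual CM convention but may differ on your install, so treat this as a sketch:

```shell
# Print the tail of each Failover Controller role log found.
logdir=/var/log/hadoop-hdfs
for f in "$logdir"/hadoop-cmf-hdfs-FAILOVERCONTROLLER-*.log.out; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  echo "== $f =="
  tail -n 50 "$f"
done
```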

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Explorer

Note:

The error log is from the NameNode (Primary), where the Failover Controller fails to start.

We installed the Failover Controller on the NameNode (Primary) and the NameNode (Secondary).

 

What does the end of the stderr log say?

 

The end of the stderr log:

+ '[' refresh-datanode = zkfc ']'
+ '[' mkdir = zkfc ']'
+ '[' nfs3 = zkfc ']'
+ '[' namenode = zkfc -o secondarynamenode = zkfc -o datanode = zkfc ']'
+ exec /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER zkfc
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)

 

Did you check the role logs for your FC for a relevant error message?

I do not see anything in particular. You are probably familiar with it; it is new to me, so I could be missing something.

Here is the whole stderr log.

 

Tue Dec 20 14:41:57 PST 2016
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in '`seq 1 $COUNT`'
+ SCRIPT=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
+ . /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ export CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ export CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ export CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ export TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
++ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
++ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.7.0_67-cloudera
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.7.0_67-cloudera/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
++ JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
++ break 2
+ verify_java_home
+ '[' -z /usr/java/jdk1.7.0_67-cloudera ']'
+ echo JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
+ . /usr/lib64/cmf/service/common/cdh-default-hadoop
++ [[ -z 5 ]]
++ '[' 5 = 3 ']'
++ '[' 5 = -3 ']'
++ '[' 5 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ HADOOP_PREFIX=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER
++ HADOOP_CONF_DIR=/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER
++ export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ '[' 5 = 4 ']'
++ '[' 5 = 5 ']'
++ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_NAMENODE_OPTS=
+ HADOOP_NAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#64392#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ '[' 5 -ge 4 ']'
+ HDFS_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ echo 'using /usr/java/jdk1.7.0_67-cloudera as JAVA_HOME'
+ echo 'using 5 as CDH_VERSION'
+ echo 'using /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER as CONF_DIR'
+ echo 'using  as SECURE_USER'
+ echo 'using  as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /usr/share/cmf ]]
++ find /usr/share/cmf/lib/plugins -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:
+ [[ -n navigator/cdh57 ]]
+ for DIR in '$CM_ADD_TO_CP_DIRS'
++ find /usr/share/cmf/lib/plugins/navigator/cdh57 -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ PLUGIN=/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar:
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-5.9.0-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.9.0.jar:/usr/share/cmf/lib/plugins/navigator/cdh57/audit-plugin-cdh57-2.8.0-shaded.jar
+ set -x
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER -type f '!' -path '/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER/supervisor.conf: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' zkfc ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = zkfc ']'
+ '[' file-operation = zkfc ']'
+ '[' bootstrap = zkfc ']'
+ '[' failover = zkfc ']'
+ '[' transition-to-active = zkfc ']'
+ '[' initializeSharedEdits = zkfc ']'
+ '[' initialize-znode = zkfc ']'
+ '[' format-namenode = zkfc ']'
+ '[' monitor-decommission = zkfc ']'
+ '[' jnSyncWait = zkfc ']'
+ '[' nnRpcWait = zkfc ']'
+ '[' -safemode = '' -a get = '' ']'
+ '[' monitor-upgrade = zkfc ']'
+ '[' finalize-upgrade = zkfc ']'
+ '[' rolling-upgrade-prepare = zkfc ']'
+ '[' rolling-upgrade-finalize = zkfc ']'
+ '[' nnDnLiveWait = zkfc ']'
+ '[' refresh-datanode = zkfc ']'
+ '[' mkdir = zkfc ']'
+ '[' nfs3 = zkfc ']'
+ '[' namenode = zkfc -o secondarynamenode = zkfc -o datanode = zkfc ']'
+ exec /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/184-hdfs-FAILOVERCONTROLLER zkfc
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)

 

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Super Guru

See the error:

 

Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)

 

You can't do automatic failover to a SecondaryNameNode. You would need to enable HA to get that...

 

http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html 

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Explorer

 

We did the HDFS HA enabling before, and it led to this troubleshooting.

 

We verified the following nodes and their current roles:

NameNode (supposed to be Active Node) -> Roles: FC, JN, NN etc.

NameNode Secondary (supposed to be Standby Node) -> Roles: FC, JN, NM etc.

Third Node -> JN

 

And the JournalNode Edits Directory: /var/lib/jn (drwx------ hdfs hadoop)
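A quick sketch for confirming that directory on each JournalNode host (it assumes GNU stat, as found on typical CDH Linux hosts):

```shell
# Report mode and owner:group of the JournalNode edits directory.
jn=/var/lib/jn
if [ -d "$jn" ]; then
  stat -c '%A %U:%G %n' "$jn"
else
  echo "missing: $jn"
fi
```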

 

In terms of requirements, it is OK, I believe.

 

When we check HDFS - Configuration - section nameservice,

Here is what we found:

NameNode Nameservice: <No Value>

Mount Points:           /

SecondaryNameNode Nameservice: <No Value>

 

Questions

1- Does it mean that HDFS HA was not set up correctly by the system (the scripts during HDFS HA enabling)?

2- Does it mean that we do not have any HDFS HA set up, since there is no nameservice?

3- If we do the HDFS HA enabling again, will it cause any duplication of the nameservice?

 

Thanks.

 

 

 

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

If you have a role called "SecondaryNameNode", then that's incorrect. This is a very confusing role name in Hadoop. The SecondaryNameNode is only used in a non-HA scenario. In HA, you have multiple (regular) NameNode roles defined.

HDFS HA, when properly configured, will have a nameservice. There are many other steps though.

The HDFS HA setup process is particularly complicated, so if you can return to a normal non-HA state, then get the wizard to work, it's much better. What issue did you hit with the Enable NameNode HA wizard?

If you have a trial or enterprise license, you can use the config history page to help identify what changes you made since you had a normal, non-HA state, which can help you revert your changes.

Re: Fail to start HDFS FAILOVERCONTROLLER (FC) - supervisor.conf: Permission denied

Explorer

Darren,

 

It sounds complicated with HDFS HA.

 

If you have a role called "SecondaryNameNode", then that's incorrect. This is a very confusing name for the role in Hadoop. The SecondaryNameNode is only used in non-HA scenario. In HA, you have multiple (regular) NameNode roles defined.

 

The comment above rings a bell now: the Primary/Secondary concept is for non-HA and comes from the old version. Thanks for reminding me of this concept.

 

We verified the current setup in HDFS - Configuration - History and Rollback; we did not see any changes related to HDFS HA. The only thing was the JournalNodes Edits Directory: /var/lib/jn.

 

We think it is OK now to run the HDFS HA enabling option.

 

We will follow this link to enable HDFS HA using Cloudera Manager: http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_hag_hdfs_ha_enabling.html
