
Can't open /run/cloudera-scm-agent/process/.../config.zip: Permission denied.

New Contributor

I found that many commands fail with the following "Permission denied" error on Cloudera Manager 5.12:

++ printf '! -name %s ' cloudera-config.sh httpfs.sh hue.sh impala.sh sqoop.sh supervisor.conf '*.log' hdfs.keytab '*jceks'
+ find /run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format -type f '!' -path '/run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format/logs/*' '!' -name cloudera-config.sh '!' -name httpfs.sh '!' -name hue.sh '!' -name impala.sh '!' -name sqoop.sh '!' -name supervisor.conf '!' -name '*.log' '!' -name hdfs.keytab '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format/config.zip: Permission denied.
Can't open /run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format/proc.json: Permission denied.
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' format-namenode ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = format-namenode ']'
+ '[' file-operation = format-namenode ']'
+ '[' bootstrap = format-namenode ']'
+ '[' failover = format-namenode ']'
+ '[' transition-to-active = format-namenode ']'
+ '[' initializeSharedEdits = format-namenode ']'
+ '[' initialize-znode = format-namenode ']'
+ '[' format-namenode = format-namenode ']'
+ '[' -z /dfs/nn ']'
+ for dfsdir in '$DFS_STORAGE_DIRS'
+ '[' -e /dfs/nn ']'
+ '[' '!' -d /dfs/nn ']'
+ CLUSTER_ARGS=
+ '[' 2 -eq 2 ']'
+ CLUSTER_ARGS='-clusterId cluster8'
+ '[' 3 = 5 ']'
+ '[' -3 = 5 ']'
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format namenode -format -clusterId cluster8 -nonInteractive
/usr/lib64/cmf/service/hdfs/hdfs.sh: line 273: /usr/lib/hadoop-hdfs/bin/hdfs: No such file or directory
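
There are actually two failures in this trace: the "Permission denied" messages on config.zip and proc.json (both owned by root, see the listing below), and the final "No such file or directory" for the hdfs binary, which is what ultimately kills the command. A quick way to check where the binary actually lives (the parcel path below is my assumption for parcel-based installs; adjust for your layout):

ls -l /usr/lib/hadoop-hdfs/bin/hdfs        # path from the trace (package-based installs)
ls -l /opt/cloudera/parcels/CDH/bin/hdfs   # where parcel-based installs put it instead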

This seems to be a side effect of a bug fix introduced in 5.9:

https://www.cloudera.com/documentation/enterprise/release-notes/topics/cm_rn_fixed_issues.html#conce...

Has anyone encountered this in recent versions?

[root@c53 ~]# ls -l /var/run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format/
total 88
-rwxr----- 1 hdfs hdfs 2149 Jul 19 09:40 cloudera_manager_agent_fencer.py
-rw-r----- 1 hdfs hdfs   30 Jul 19 09:40 cloudera_manager_agent_fencer_secret_key.txt
-rw-r----- 1 hdfs hdfs  353 Jul 19 09:40 cloudera-monitor.properties
-rw-r----- 1 hdfs hdfs  316 Jul 19 09:40 cloudera-stack-monitor.properties
-rw------- 1 root root 8868 Jul 19 09:40 config.zip
-rw-r----- 1 hdfs hdfs 3460 Jul 19 09:40 core-site.xml
-rw-r----- 1 hdfs hdfs   12 Jul 19 09:40 dfs_hosts_allow.txt
-rw-r----- 1 hdfs hdfs    0 Jul 19 09:40 dfs_hosts_exclude.txt
-rw-r----- 1 hdfs hdfs 1388 Jul 19 09:40 event-filter-rules.json
-rw-r--r-- 1 hdfs hdfs    4 Jul 19 09:40 exit_code
-rw-r----- 1 hdfs hdfs    0 Jul 19 09:40 hadoop-metrics2.properties
-rw-r----- 1 hdfs hdfs   98 Jul 19 09:40 hadoop-policy.xml
-rw------- 1 hdfs hdfs    0 Jul 19 09:40 hdfs.keytab
-rw-r----- 1 hdfs hdfs 4872 Jul 19 09:40 hdfs-site.xml
-rw-r----- 1 hdfs hdfs    0 Jul 19 09:40 http-auth-signature-secret
-rw-r----- 1 hdfs hdfs 2246 Jul 19 09:40 log4j.properties
drwxr-x--x 2 hdfs hdfs   80 Jul 19 09:40 logs
-rw-r----- 1 hdfs hdfs 2470 Jul 19 09:40 navigator.client.properties
-rw------- 1 root root 3879 Jul 19 09:40 proc.json
-rw-r----- 1 hdfs hdfs  315 Jul 19 09:40 ssl-client.xml
-rw-r----- 1 hdfs hdfs   98 Jul 19 09:40 ssl-server.xml
-rw------- 1 root root 3463 Jul 19 09:40 supervisor.conf
-rw-r----- 1 hdfs hdfs  187 Jul 19 09:40 topology.map
-rwxr----- 1 hdfs hdfs 1549 Jul 19 09:40 topology.py
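
From the listing, config.zip, proc.json, and supervisor.conf are the only root-owned, mode-600 files, so the perl substitution running as hdfs cannot open them. A quick way to confirm which files the hdfs user cannot read (GNU find; the process directory number will differ on your host):

sudo -u hdfs find /var/run/cloudera-scm-agent/process/77-hdfs-NAMENODE-format -type f ! -readable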

6 REPLIES

Contributor

Hi,

I resolved this issue by configuring a proper Java heap size for the NameNode and by configuring HA on the NameNode.

Regards,
Shafi

Cloudera Employee

Any chance you could elaborate on your answer? I'm seeing the same problem.

Contributor

Hi,

I was using the 5.12 Express edition. It was not easy to get the services started after installation; the configuration needed changes, including the High Availability and JournalNode setup.

I updated the Java heap size for the HDFS instances and kept only the NameNode, Secondary NameNode, and JournalNodes.

Regards,
Shafi

Rising Star

I had the same problem until just now, on 5.12.x, and in my case it was caused by a problem with SSL, so if you use SSL, please read the following:

"This might be caused by having more than one Oracle Java installed and, even worse, any version of OpenJDK. Namely, make sure you have added the CA you used for SSL to the Java keystore of the Java version you are actually using (you can find that out in the process list). Also, make sure that the keytool you are using belongs to this version of Java - so it's best to have only one version installed, or (if that is unavoidable) to use the full path to keytool. Hope it helps."

New Contributor

@samurai, this post solved my issue.

New Contributor

Yes, it's a Java problem; I exported JAVA_HOME again to solve it:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
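
To make this persist for the Cloudera Manager agent across restarts, one option is to set it in the agent defaults file and restart the agent (the file path below is the usual one for package installs; verify it exists on your host):

echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> /etc/default/cloudera-scm-agent
service cloudera-scm-agent restart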