How to resolve "Permission denied" errors in CDH

Symptoms

 

"Permission denied" errors can present in a variety of use cases and from nearly any application that utilizes CDH.

 

For example, when attempting to start the JobTracker with this command:

 

service hadoop-0.20-mapreduce-jobtracker start

 

You may see this error, or one similar to it:

 

org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)

 

While the steps that trigger this error vary widely, the root causes are well defined. You will know you are hitting this issue when you find a line like the following on stdout or in the relevant log files:

 

org.apache.hadoop.security.AccessControlException: Permission denied: user=XXX, access=WRITE, inode="/someDirectory":hdfs:supergroup:drwxr-xr-x
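
The message itself carries most of the diagnosis: user= is the account that issued the request, access= is the operation that was refused, and inode= gives the path followed by its owner, group, and mode. In the example above, hdfs:supergroup:drwxr-xr-x means the directory is writable only by the hdfs user. You can confirm what the message reports with a quick listing (a sketch; /someDirectory stands in for the path from your own error, and the -d flag, which lists the directory itself rather than its contents, is available on Hadoop 2.x / CDH 5 shells):

hadoop fs -ls -d /someDirectory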

 

Applies To

 

CDH (all versions), MapReduce, HDFS, and other services that read from or write to HDFS

 

Cause

 

Either access to the HDFS service itself or the permissions on certain HDFS directories are not configured correctly.
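
A quick way to confirm this is to list the top level of HDFS as the superuser and compare each directory's owner and mode against the user named in the error (a minimal check, assuming the hadoop client is configured on the node you run it from):

sudo -u hdfs hadoop fs -ls /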

 

Troubleshooting Steps

 

There are several solutions to attempt:

 

1) The /user/ directory is owned by "hdfs" with 755 permissions, so only the hdfs user can write to it. Unlike Unix/Linux, where root is the superuser, in HDFS the superuser is hdfs. So you would need to do this (here <username> is a placeholder for the user whose home directory you are creating):

sudo -u hdfs hadoop fs -mkdir /user/<username>
sudo -u hdfs hadoop fs -put myfile.txt /user/<username>/
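
If the new home directory should belong to that user rather than to hdfs, follow up with a chown, exactly as the root example below does (<username> is the same placeholder as above):

sudo -u hdfs hadoop fs -chown <username> /user/<username>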

If you want to create a home directory for root so that root can store files in it, do:

sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root /user/root

Then, as root, you can run "hadoop fs -put myfile.txt /user/root/".
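
You can verify that the ownership change took effect (illustrative output; sizes and timestamps will differ on your cluster):

hadoop fs -ls /user
drwxr-xr-x   - root supergroup          0 2016-01-12 09:42 /user/root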

2) You may instead be getting denied at the network level, on the port where the NameNode is supposed to be listening:

 

Fix this by changing the address the service listens on in /etc/hadoop/conf/core-site.xml. By default, your NameNode may be listening only on localhost:8020 (127.0.0.1), so connections from other hosts cannot reach it.

 

To be clear, set the following property to this value:


<property>
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:8020</value>
</property>
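
After editing core-site.xml, you can confirm the value the client and daemons will actually pick up (the getconf tool ships with Hadoop 2.x / CDH 4 and later):

hdfs getconf -confKey fs.defaultFS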

 

Then bounce the service with "service hadoop-hdfs-namenode restart".
Optional: validate with netstat -tupln | grep '8020'
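
If the change took effect, netstat should show the NameNode's java process bound to all interfaces rather than to 127.0.0.1 (illustrative output; the PID will differ):

tcp        0      0 0.0.0.0:8020            0.0.0.0:*               LISTEN      12345/java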

 
