
Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x


When I try to start the JobTracker with this command

 

service hadoop-0.20-mapreduce-jobtracker start

 

I get this error:

 

org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)

 

I found this blog post, which tries to address the issue:

 

http://blog.spryinc.com/2013/06/hdfs-permissions-overcoming-permission.html

 

I followed the steps there and ran:

 

groupadd supergroup
usermod -a -G supergroup mapred
usermod -a -G supergroup hdfs
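To make sure the group changes actually took effect, I believe one can check both the local groups and the groups the NameNode resolves (the refresh command is my assumption about what may be needed, since as far as I understand the NameNode caches user-to-group mappings):

```shell
# Confirm local group membership on the NameNode host
id mapred    # should list supergroup
id hdfs      # should list supergroup

# Ask the NameNode to re-read user-to-group mappings (run as the HDFS superuser)
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings

# Check which groups the NameNode now resolves for mapred
hdfs groups mapred
```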

 

but I still get the same error. The only difference between the blog entry and my case is that my error is on the root directory "/", whereas in the blog it is on "/user".

 

Here is my mapred-site.xml:

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jt1:8021</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/tmp/mapred/jt</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/tmp/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.active</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.hours</name>
    <value>24</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>
  <property>
    <name>mapred.fairscheduler.poolnameproperty</name>
    <value>user.name</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allocation.file</name>
    <value>/etc/hadoop/conf/fair-scheduler.xml</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allow.undeclared.pools</name>
    <value>true</value>
  </property>
</configuration>
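One guess (unconfirmed) about why the JobTracker touches HDFS at startup: mapred.system.dir above points at /tmp/mapred/system, which lives in HDFS, so if /tmp does not exist there yet the JobTracker has to create it directly under "/", where mapred has no write access. A sketch of pre-creating it as the HDFS superuser, assuming the paths from my config:

```shell
# Pre-create the JobTracker system dir as the HDFS superuser,
# then hand ownership to mapred
sudo -u hdfs hdfs dfs -mkdir -p /tmp/mapred/system
sudo -u hdfs hdfs dfs -chown -R mapred /tmp/mapred

# The staging root from the config should also exist and be writable
sudo -u hdfs hdfs dfs -mkdir -p /user
sudo -u hdfs hdfs dfs -chmod 1777 /tmp
```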

 

I also found this blog:

 

http://www.hadoopinrealworld.com/fixing-org-apache-hadoop-security-accesscontrolexception-permission...

 

I ran:

 

sudo -u hdfs hdfs dfs -mkdir /home
sudo -u hdfs hdfs dfs -chown mapred:mapred /home
sudo -u hdfs hdfs dfs -mkdir /home/mapred
sudo -u hdfs hdfs dfs -chown mapred /home/mapred
sudo -u hdfs hdfs dfs -chown hdfs:supergroup /
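For completeness, here is how the resulting state could be double-checked on the same cluster (a sketch; `hdfs groups` shows the groups the NameNode resolves for a user):

```shell
# List ownership and permissions as the NameNode sees them
sudo -u hdfs hdfs dfs -ls /

# Groups the NameNode resolves for the mapred user
hdfs groups mapred
```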

 

but the problem is still not resolved. 😞 Please help.

 

I wonder why it is trying to write to the root directory: inode="/":hdfs:supergroup:drwxr-xr-x
