
Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x

Expert Contributor

When I try to start the JobTracker using this command:

 

service hadoop-0.20-mapreduce-jobtracker start

 

I see this error:

 

org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)

 

I found this blog post, which tries to address the issue:

 

http://blog.spryinc.com/2013/06/hdfs-permissions-overcoming-permission.html

 

I followed the steps there and ran:

 

groupadd supergroup
usermod -a -G supergroup mapred
usermod -a -G supergroup hdfs

 

but I still get the same error. The only difference between the blog entry and my case is that for me the error is on the root directory ("/"), whereas for the blog it is on "/user".
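
As a sanity check (not something from the blog post): the NameNode resolves group membership on its own host and may cache the old mapping, so it is worth confirming it actually sees mapred in supergroup. Assuming the HDFS client is available on the NameNode host:

# Show the groups the NameNode resolves for the mapred user (not just the local OS view)
hdfs groups mapred

# Ask the NameNode to re-read its user-to-group mappings (run as the HDFS superuser)
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings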

 

Here is my mapred-site.xml

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jt1:8021</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/tmp/mapred/jt</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/tmp/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.active</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.job.tracker.persist.jobstatus.hours</name>
    <value>24</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>
  <property>
    <name>mapred.fairscheduler.poolnameproperty</name>
    <value>user.name</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allocation.file</name>
    <value>/etc/hadoop/conf/fair-scheduler.xml</value>
  </property>
  <property>
    <name>mapred.fairscheduler.allow.undeclared.pools</name>
    <value>true</value>
  </property>
</configuration>

 

I also found this blog:

 

http://www.hadoopinrealworld.com/fixing-org-apache-hadoop-security-accesscontrolexception-permission...

 

I did the following:

 

sudo -u hdfs hdfs dfs -mkdir /home

sudo -u hdfs hdfs dfs -chown mapred:mapred /home

sudo -u hdfs hdfs dfs -mkdir /home/mapred

sudo -u hdfs hdfs dfs -chown mapred /home/mapred

sudo -u hdfs hdfs dfs -chown hdfs:supergroup /

 

but the problem is still not resolved. 😞 Please help.

 

I wonder why it is trying to write to the root directory: inode="/":hdfs:supergroup:drwxr-xr-x
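
Most likely the JobTracker is trying to create mapred.system.dir (/tmp/mapred/system) at startup; if /tmp does not exist yet, the mkdirs call needs WRITE access on the closest existing ancestor, which is "/" (that is the checkAncestorAccess call in the stack trace above). A quick way to see what already exists at the top level (just a check, not a fix):

# List the top level of HDFS to see which directories exist and who owns them
sudo -u hdfs hdfs dfs -ls /

# These report "No such file or directory" for anything the JobTracker still has to create
sudo -u hdfs hdfs dfs -ls /tmp /user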

1 ACCEPTED SOLUTION

Master Collaborator

The error indicates that MapReduce wants to be able to write to /. You have the owner (hdfs) with rwx, the group with r-x, and others with r-x. Since you already added mapred to supergroup, and supergroup is the group for /, it is the group-level permissions that we need to modify.

 

To get it working you can do the following:

 

sudo -u hdfs hdfs dfs -chmod 775 /

 

This will change the permissions on / to drwxrwxr-x.
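
To verify the change from the mapred user's point of view, you can create and remove a throwaway directory directly under / (the name /permtest is just an example):

# If this succeeds, mapred has the group write access it needs on /
sudo -u mapred hdfs dfs -mkdir /permtest
sudo -u mapred hdfs dfs -rm -r -skipTrash /permtest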

 

 

as for why mapreduce is trying to write to / it may be that it's trying to create /user and /tmp that you have defined as the user space and the temporary space.  if you don't have those directories you could instead do the following:

 

sudo -u hdfs hdfs dfs -mkdir /user

sudo -u hdfs hdfs dfs -chown mapred:mapred /user

sudo -u hdfs hdfs dfs -mkdir /tmp

sudo -u hdfs hdfs dfs -chown mapred:mapred /tmp
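
One optional extra, not required for the JobTracker to start: if /tmp is created this way it will be writable only by mapred, whereas on most clusters /tmp in HDFS is world-writable with the sticky bit (like the Unix /tmp), so every user can stage data there without being able to delete each other's files:

# Optional: mirror the usual Unix /tmp semantics in HDFS
sudo -u hdfs hdfs dfs -chmod 1777 /tmp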

 

 


13 REPLIES


New Contributor

I was able to resolve the AccessControlException by using "sudo -u hdfs" and pushing my data file to HDFS (using the full path) as below:

 

sudo -u hdfs spark-submit --class com.cloudera.sparkwordcount.JavaWordCount --master local target/sparkwordcount-0.0.1-SNAPSHOT.jar /user/cloudera/data/inputfile.txt 2

 

I thought it would then be simple to switch the user to 'cloudera' (e.g. "sudo -u cloudera"), since I was putting the data file under the cloudera user's path, but that gave me the same exception. Not sure why?
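
One likely explanation (an assumption, since the new exception text isn't shown): the files were pushed as the hdfs user, so /user/cloudera or the files under it may still be owned by hdfs, and Spark running as cloudera cannot write its staging or output there. A sketch of the usual fix, assuming /user/cloudera is meant to be the cloudera user's HDFS home:

# Make sure the cloudera user's HDFS home exists and is owned by that user
sudo -u hdfs hdfs dfs -mkdir -p /user/cloudera
sudo -u hdfs hdfs dfs -chown -R cloudera:cloudera /user/cloudera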

Community Manager

A quick update to this thread to advise of a new Community Knowledge Article on this subject.

 

How to resolve "Permission denied" errors in CDH


Cy Jervis, Manager, Community Program

New Contributor
I had a similar issue starting up the PySpark shell (Spark 1.6). It turned out that my program was writing log info to /user/spark/applhistorylogs and did not have sufficient permissions to write to that path on HDFS. Changing the permissions to 777 helped.
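
For reference, the change described above would look something like this (777 is the blunt fix; a narrower alternative would be to chown the directory to whichever user actually runs the jobs):

# Open the Spark log directory to everyone
sudo -u hdfs hdfs dfs -chmod -R 777 /user/spark/applhistorylogs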

Any idea why this issue popped up all of a sudden? I have been using the same environment for the last two months.

New Contributor

I am trying to invoke a Sqoop Oozie job from an Oozie shell action, but I am getting the following error and the Sqoop job ends up in suspended status.

 

JA009: Permission denied: user=yarn, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
    at org.apache.hadoop.hdfs.server.namenode.De

 

I know the issue is that the job is invoked by the yarn user, which does not have WRITE permission on the /user folder.

 

As per your solution, I need to change the permissions on /user.

 

But in my company that's not possible, since I don't have the rights to do that and there are many other users on the cluster.

 

I have tried pointing the Sqoop import at a /temp folder where all users have WRITE access, but I am still getting the same error. I don't know why it always refers to the /user folder.

 

Is there any other way I can resolve this issue?

 

Thanks in advance.

 

 

 

Master Collaborator

I would say that you should work with your cluster administrator to update the permissions, since your user will not be able to create the subfolder that YARN is trying to create on your behalf either.
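
Concretely, what the administrator would typically run is something along these lines ("someuser" is a placeholder for whichever user the job runs as or is submitted by, not a name from this thread):

# Create the user's home directory under /user and hand it over to that user
sudo -u hdfs hdfs dfs -mkdir -p /user/someuser
sudo -u hdfs hdfs dfs -chown someuser:someuser /user/someuser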

Explorer

Hi,

I'm getting a similar error while starting the HBase RegionServer. I'm not sure which permissions I have to set. 😕

 

2016-08-24 15:47:49,361 ERROR org.apache.hadoop.hbase.coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw java.lang.IllegalStateException: Failed to get FileSystem instance
java.lang.IllegalStateException: Failed to get FileSystem instance
	at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:152)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:414)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:255)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:161)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:218)
	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:720)
	at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:628)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6128)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6432)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6404)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6360)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6311)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/tmp":hdfs:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4322)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4292)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4265)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:867)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:322)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:603)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3084)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3049)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:957)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:953)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:953)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:946)
	at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:139)
	... 21 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/tmp":hdfs:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6590)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6572)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6524)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4322)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4292)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4265)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:867)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:322)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:603)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

	at org.apache.hadoop.ipc.Client.call(Client.java:1471)
	at org.apache.hadoop.ipc.Client.call(Client.java:1408)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy23.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:544)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy24.mkdirs(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3082)
	... 28 more

 

I am running a CDH 5.7 cluster on 4 Ubuntu 14.04 machines.

 

It would be nice if someone could help me out. Thanks a lot.
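
Going by the "Caused by" line (user=hbase cannot WRITE to inode "/tmp"), the SecureBulkLoadEndpoint is failing to create its staging directory, which on this cluster evidently lives under /tmp (the location comes from hbase.bulkload.staging.dir). By analogy with the accepted solution above, a likely fix, though not a confirmed one for this setup, is to open /tmp up with the sticky bit:

# Let the hbase user (and everyone else) create directories under /tmp in HDFS
sudo -u hdfs hdfs dfs -chmod 1777 /tmp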

Explorer
Already fixed it

New Contributor

Instead of invoking the Sqoop action from a shell action, I created a sub-workflow that runs the Sqoop job and then called that sub-workflow from the main Oozie workflow. The sub-workflow is invoked with the current user as the submitter.

So the problem is solved for me.