Support Questions

Find answers, ask questions, and share your expertise

org.apache.hadoop.security.AccessControlException: Permission denied

Champion

The MapReduce job is completing successfully and I am able to check the results in HDFS.

The problem is that when I look in the JobHistory Server the jobs are not there. Checking its logs, I found this error:

16/12/31 06:34:27 ERROR hs.HistoryFileManager: Error while trying to move a job to done
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=READ, inode="/user/history/done_intermediate/matt/job_1483174306930_0005.summary":matt:hadoop:-rwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:182)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5461)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5443)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5405)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1680)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1632)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1612)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1586)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:482)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:322)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

	at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1139)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1127)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1117)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:264)
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:231)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:224)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1290)
	at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:309)
	at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:54)
	at org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:619)
	at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
	at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:781)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.open(FileContext.java:781)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$400(HistoryFileManager.java:82)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone(HistoryFileManager.java:370)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.access$1400(HistoryFileManager.java:295)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$1.run(HistoryFileManager.java:843)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)

It looks like a permissions issue, but I am not sure where I should change them or what the chmod value should be.
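
The inode named in the error can be inspected directly; for example (path taken from the message above):

$ sudo -u hdfs hadoop fs -ls /user/history/done_intermediate/matt

The listing should match the matt:hadoop:-rwxrwx--- shown in the exception.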


Below is my current configuration:

$ sudo -u hdfs hadoop fs -mkdir /user
$ sudo -u hdfs hadoop fs -mkdir /user/matt
$ sudo -u hdfs hadoop fs -chown matt /user/matt
$ sudo -u hdfs hadoop fs -mkdir /user/history
$ sudo -u hdfs hadoop fs -chmod 1777 /user/history
$ sudo -u hdfs hadoop fs -chown mapred:hadoop \
/user/history
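
For reference, the resulting layout can be verified with:

$ sudo -u hdfs hadoop fs -ls /user
$ sudo -u hdfs hadoop fs -ls /user/history

/user/history should appear as drwxrwxrwt with owner mapred and group hadoop after the chmod 1777 and chown above.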

Can someone please help me with this issue?


1 ACCEPTED SOLUTION


3 REPLIES

Champion
I'd put the mapred account in the hadoop group; it will then have the needed access. The hdfs, yarn, and mapred accounts should all be in the hadoop group.
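
A minimal sketch of that change, assuming standard Linux service accounts and the default shell-based group mapping (where HDFS resolves group membership on the NameNode host):

$ sudo usermod -a -G hadoop mapred   # append mapred to the existing hadoop group
$ id mapred                          # hadoop should now appear in the group list

Note that HDFS caches group lookups (hadoop.security.groups.cache.secs, 300 seconds by default), so the new membership can take a few minutes to be picked up.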

Champion

Can this be done in production, mate?


Champion
Yes. Go through your normal change process. It is granting more access, which is generally the less risky kind of change.

Also, it is the correct way to install Hadoop/CDH.

https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_cm_users_principals.html
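
Once the group change has propagated, a quick check that mapred can now read the intermediate files (paths assume the defaults from the question; the done directory location is an assumption):

$ sudo -u mapred hadoop fs -ls /user/history/done_intermediate/matt
$ sudo -u mapred hadoop fs -ls /user/history/done

If the second listing shows jobs accumulating under done, the JobHistory Server is moving jobs again and they should show up in its web UI.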