Archives of Support Questions (Read Only)

Permission denied, access=EXECUTE on getting the status of a file


We have a customer who is troubled by a strange permission problem. They're running CDH 4.1.2 with MR1 (not YARN).

The following files were generated by a map-reduce job:

drwxr-xr-x - hdfs supergroup 0 2013-04-12 09:28 / 
drwxrwxrwx - datameer hadoop 0 2013-07-10 15:18 /d 
drwxr-xr-x - datameer hadoop 0 2013-09-19 06:14 /d/w 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3/2 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3/2/Hourly 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:10 /d/w/3/2/Hourly/data 
-rw-r--r-- 3 datameer hadoop 95363 2013-09-30 04:06 /d/w/3/2/Hourly/optimized_preview 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:10 /d/w/3/2/Hourly/preview

Now calling FileSystem.exists(new Path("/d/w/3/2/Hourly/optimized_preview")) fails with:

Permission denied: user=datameer, access=EXECUTE, inode="/d/w/3/2/Hourly/optimized_preview":datameer:hadoop:-rw-r--r--
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:4523)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2796)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:664)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:643)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44128)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    ...
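
For reference, the failing check boils down to something like this (a minimal sketch, not the customer's actual code; the plain Configuration setup is an assumption):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // exists() delegates to getFileStatus(), which hits the namenode's
        // getFileInfo() and thus the traverse (EXECUTE) check in the trace above.
        boolean present = fs.exists(new Path("/d/w/3/2/Hourly/optimized_preview"));
        System.out.println("exists: " + present);
    }
}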

This is kind of strange, since files themselves can't have execute permissions (https://issues.apache.org/jira/browse/HADOOP-3078) and all parent directories do have execute permissions (and even if one of them didn't, that directory would be the inode named in the error message, not the file).
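
To double-check the traverse chain, one can walk the ancestors and print their modes (a quick diagnostic sketch, run as the datameer user; again the plain Configuration is an assumption):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TraverseCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Print mode, owner and group for every ancestor directory of the
        // problem file; the traverse check should consult only these
        // directories, never the file itself.
        for (Path p = new Path("/d/w/3/2/Hourly/optimized_preview").getParent();
                p != null; p = p.getParent()) {
            FileStatus st = fs.getFileStatus(p);
            System.out.println(st.getPermission() + " " + st.getOwner() + " "
                    + st.getGroup() + " " + p);
        }
    }
}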


I couldn't reproduce the situation on our own CDH 4.1.2 cluster, but the problem persists in the customer's environment.

The problem affects only certain files. Usually those files have a different origin: most files in /d/w/3/2/Hourly were produced by one map-reduce job, but optimized_preview was produced by a second job and then moved into the folder afterwards.
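
For context, the move was presumably a plain rename, which keeps the file's owner, group and mode (a sketch of that step; the source path of the second job's output is a guess):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MoveOutput {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // rename() moves the inode; owner, group and the -rw-r--r-- mode of
        // the source file are preserved, matching the listing above.
        Path src = new Path("/tmp/job2-output/optimized_preview"); // hypothetical source
        Path dst = new Path("/d/w/3/2/Hourly/optimized_preview");
        System.out.println("rename succeeded: " + fs.rename(src, dst));
    }
}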


Any clue what could cause this?

I suspect it's somehow related to https://issues.apache.org/jira/browse/HBASE-9509, but in this case the file does exist!
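
In the meantime I'm considering working around it by listing the parent directory instead of stat'ing the file itself, since a listing only needs permissions on the directory (a sketch of that idea, not a confirmed fix):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsViaListing {
    // Check for a file by listing its parent, avoiding getFileInfo() on the
    // file path and thus the failing traverse check on the file inode.
    static boolean existsViaParent(FileSystem fs, Path file) throws Exception {
        for (FileStatus st : fs.listStatus(file.getParent())) {
            if (st.getPath().getName().equals(file.getName())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        System.out.println(existsViaParent(fs,
                new Path("/d/w/3/2/Hourly/optimized_preview")));
    }
}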


Any help appreciated!

Johannes
