
Permission denied, access=EXECUTE on getting the status of a file

New Contributor

I have a customer who is troubled by a strange permission problem. They're using cdh4.1.2 with mr1 (not YARN).

The following files were generated by a map-reduce job:

drwxr-xr-x - hdfs supergroup 0 2013-04-12 09:28 / 
drwxrwxrwx - datameer hadoop 0 2013-07-10 15:18 /d 
drwxr-xr-x - datameer hadoop 0 2013-09-19 06:14 /d/w 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3/2 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:16 /d/w/3/2/Hourly 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:10 /d/w/3/2/Hourly/data 
-rw-r--r-- 3 datameer hadoop 95363 2013-09-30 04:06 /d/w/3/2/Hourly/optimized_preview 
drwxr-xr-x - datameer hadoop 0 2013-09-30 04:10 /d/w/3/2/Hourly/preview

Now calling FileSystem.exists(new Path("/d/w/3/2/Hourly/optimized_preview")) fails with

Permission denied: user=datameer, access=EXECUTE, inode="/d/w/3/2/Hourly/optimized_preview":datameer:hadoop:-rw-r--r--
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:4523)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2796)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:664)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:643)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44128)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    ...

Which is kind of strange, since files themselves can't have execute permissions (https://issues.apache.org/jira/browse/HADOOP-3078) and all parent directories do have execute permissions (and even if they didn't, the directory would show up as the inode in the error message, not the file).
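For reference, the same owner/group/permission information as in the listing above can be pulled for every component of the path with a walk roughly like this (a minimal sketch; the class name is just for illustration and the Configuration is assumed to point at the customer's cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListAncestorPermissions {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at the HDFS cluster in question.
        FileSystem fs = FileSystem.get(new Configuration());

        // Walk from the file up to the root and print each component's
        // permission, owner and group -- the same data as the listing above.
        Path p = new Path("/d/w/3/2/Hourly/optimized_preview");
        for (Path cur = p; cur != null; cur = cur.getParent()) {
            FileStatus status = fs.getFileStatus(cur);
            System.out.println(status.getPermission() + " " + status.getOwner()
                    + " " + status.getGroup() + " " + cur);
        }
    }
}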

 

I couldn't reproduce the situation on a cdh4.1.2 cluster of my own, but the problem persists in the customer's environment.

The problem only affects certain files. Usually those files have a different origin: most files in /d/w/3/2/Hourly were produced by one map-reduce job, but optimized_preview was produced by a second job and then moved into the folder afterwards.

 

Any clue what could cause that?

I suspect it's somehow related to https://issues.apache.org/jira/browse/HBASE-9509, but in this case the file does exist!

 

Any help appreciated!

Johannes


3 REPLIES

Mentor

The exception is odd if it really comes from a simple fs.exists(new Path("/d/w/3/2/Hourly/optimized_preview")) call, but how certain are you that this is exactly the call being made? I do not see the client side of the call in the stack trace, though perhaps it has been chopped off.

 

The exception is expected, however, if the call passes extra path components that treat optimized_preview as a directory, for example fs.exists(new Path("/d/w/3/2/Hourly/optimized_preview/file")). I am fairly certain that is what they, or the tool, are doing, because the AccessControlException's inode field only shows the path up to the last component that could be resolved, and here it quotes the whole file.
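To make the difference concrete, a minimal sketch (the trailing /file component is hypothetical and just stands in for whatever extra component the tool appends):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class ExistsUnderAFile {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // exists() on the file itself only has to traverse real directories,
        // so it simply returns true.
        System.out.println(fs.exists(new Path("/d/w/3/2/Hourly/optimized_preview")));

        // exists() on a path *underneath* the file forces the namenode to treat
        // optimized_preview as a directory while traversing; with permission
        // checking enabled, the EXECUTE check on the plain file fails and the
        // AccessControlException from the question is thrown.
        try {
            fs.exists(new Path("/d/w/3/2/Hourly/optimized_preview/file"));
        } catch (AccessControlException ace) {
            System.out.println("Traverse denied: " + ace.getMessage());
        }
    }
}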

New Contributor
Alright, I was able to reproduce this now by having a file '/file' and calling exists on '/file/anotherFile'. That will help me fix the customer's bug!

Btw, isn't that behaviour itself a bug in Hadoop?

Mentor
Many thanks for following up!

I do think the directory-or-file distinction could be checked before the traversal, or at least the error message could be improved; I've filed https://issues.apache.org/jira/browse/HDFS-5802 for that.
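Until something like that lands, a client-side check along these lines (only a sketch; the helper is hypothetical and not part of the Hadoop API) can turn the confusing permission error into a clearer message by locating the nearest existing ancestor that is actually a plain file:

import java.io.FileNotFoundException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

// Hypothetical helper, not part of Hadoop: walks up from the requested path
// and reports the first existing ancestor that is a plain file, i.e. the
// component the namenode refuses to traverse through.
public final class TraverseDiagnostics {
    public static String explain(FileSystem fs, Path requested) throws Exception {
        for (Path cur = requested.getParent(); cur != null; cur = cur.getParent()) {
            try {
                FileStatus status = fs.getFileStatus(cur);
                return status.isDirectory()
                        ? null // nearest existing ancestor is a directory, nothing odd
                        : cur + " is a file, so nothing can exist underneath it";
            } catch (FileNotFoundException notThereYet) {
                // this component does not exist; keep walking towards the root
            } catch (AccessControlException stillBelowAFile) {
                // still below the offending file; keep walking towards the root
            }
        }
        return null;
    }
}

A caller would invoke it after catching the AccessControlException from exists() or getFileStatus().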

The shell utils handle this in a clearer way btw:

➜ ~ ls foo/file
ls: foo/file: Not a directory
➜ ~ hadoop fs -ls foo/file
ls: `foo/file': No such file or directory