Permission denied as I am unable to delete a directory in HDFS

Contributor

Hi experts,

As the root user, I am trying to delete a directory in HDFS that was created by root.
However, when I try to delete it, I get "Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x".

Why does it say permission denied on "/user" when I am trying to delete the directory "/tmp/root/testdirectory"?

 

The error message is below.

 

[root@test02 ~]# hdfs dfs -ls /tmp/root/
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Found 2 items
drwxrwxrwx - root hdfs 0 2021-08-09 20:35 /tmp/root/testdirectory
-rw-r--r-- 3 root hdfs 0 2021-08-10 13:54 /tmp/root/test
[root@test02 ~]# hdfs dfs -rmr /tmp/root/testdirectory
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
rmr: DEPRECATED: Please use '-rm -r' instead.
21/08/11 12:08:30 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2498)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2471)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1243)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1240)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1257)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1232)
at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:147)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:327)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:299)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:281)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:265)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:317)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:380)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
at org.apache.hadoop.ipc.Client.call(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1394)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:587)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2496)
... 21 more
rmr: Failed to move to trash: hdfs://test/tmp/root/testdirectory: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
[root@test02 ~]#

 

Any help is much appreciated.

 

Thanks,

1 ACCEPTED SOLUTION

Expert Contributor

Hi @ryu 

[root@test02 ~]# hdfs dfs -rmr /tmp/root/testdirectory
...
...
21/08/11 12:08:30 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root
...
...
rmr: Failed to move to trash: hdfs://test/tmp/root/testdirectory: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x

 

From the logs: you are trying to delete testdirectory, and since trash is enabled, HDFS first tries to move it into your trash directory under "/user/root/.Trash" rather than deleting it outright.

 

The folder /user ("inode="/user":hdfs:hdfs:drwxr-xr-x") is owned by user hdfs and group hdfs, so the root user falls under "others" (the third permission set, r-x). Others have no write permission on /user. Moving the directory to trash requires creating "/user/root/.Trash/Current/tmp/root", and since root's home directory /user/root most likely does not exist yet, HDFS has to create it under /user, which is exactly the write access that is denied.

 

Either give the root user write access under that folder (for example, by creating a /user/root home directory owned by root), or delete the directory as the hdfs user, to overcome the issue. Both options are sketched below.

 

If you are happy with the reply, please mark it "Accept as Solution".


2 REPLIES


Contributor

Thanks, it worked.