
select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x


select count(*) from purchtable;
INFO : Tez session hasn't been created yet. Opening session
ERROR : Failed to execute tez graph.
org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1780)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1764)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1747)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3972)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1081)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3066)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3034)
    at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1105)
    at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1101)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1101)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1094)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:783)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:877)
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:341)
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:162)
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:271)
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
    at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1780)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1764)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1747)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3972)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1081)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
    at org.apache.hadoop.ipc.Client.call(Client.java:1426)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:560)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3064)
    ... 32 more
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=1)

7 Replies

Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Contributor

It looks like /user/anonymous is owned by hdfs:hdfs, and based on the permissions (drwxr-xr-x), anonymous wouldn't be able to write in this directory. You either need to open up the permissions on that directory, or change its ownership using the hdfs dfs -chown command.
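
For example, something like the following should work (a sketch; the owner anonymous and group hdfs are taken from the error message above):

# run as the HDFS superuser (e.g. after 'sudo su - hdfs')
hdfs dfs -chown anonymous:hdfs /user/anonymous

# alternatively, open up the permissions instead (less secure)
hdfs dfs -chmod 777 /user/anonymous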


Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Thanks for the reply. I need to change the ownership of /user/anonymous; can you please help with the command? Also, why is it pointing to /user/anonymous, any idea? And how do I open up the permissions on that directory?


Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Contributor

@Avinash Reddy

For the commands, see the answer below from @Gerd Koenig. As to why it points there: the stack trace shows that Hive is trying to create a directory (mkdirs). Hive creates directories under /user/<connected-user> for staging and to persist information related to the Hive workload of the user connected to Hive (in this case anonymous). I currently have the following directories under /user/<hive-user>, created by Hive: .hiveJars, .staging, hive.
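
You can check what Hive has created so far with a simple listing, e.g. (the path is an assumption; substitute the directory of whichever user you connect as):

hdfs dfs -ls /user/anonymous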


Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Guru

Hello @Avinash Reddy,

Assuming you are not using Ranger for defining ACLs, the HDFS commands to change ownership to anonymous are:

# become user 'hdfs'
sudo su - hdfs

# change ownership
hdfs dfs -chown anonymous /user/anonymous

# optional, if you want to limit access to user 'anonymous' for its user directory
hdfs dfs -chmod -R 700 /user/anonymous

Regards, Gerd

Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

@Gerd Koenig

[root@node1 ~]# sudo su - hdfs

[hdfs@node1 ~]$ hdfs dfs -chown anonymous /user/anonymous
chown: `/user/anonymous': No such file or directory

I tried to change the ownership of /user/anonymous, but it says "No such file or directory".

@zhoussen

/user/root/.hiveJars

/user/root/.staging

Under /user/root, .hiveJars and .staging are created instead of under /user/hive. Any idea why it's behaving like this? Any help/suggestions?


Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Guru

Hello @Avinash Reddy, if the directory is missing, just create it as the first statement after becoming user hdfs (after 'sudo su - hdfs'):

hdfs dfs -mkdir /user/anonymous

...then proceed with 'hdfs dfs -chown ...' as shown above.
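
Putting the steps together, something like this (a sketch; keep the username anonymous only if that really is the user beeline connects as):

# become user 'hdfs'
sudo su - hdfs

# create the missing directory, then hand it over to the connecting user
hdfs dfs -mkdir /user/anonymous
hdfs dfs -chown anonymous /user/anonymous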


Re: select count(*) is not working in beeline Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hdfs:drwxr-xr-x

Contributor

@Avinash Reddy

It's probably because you are connecting to Hive as the root user. Did you try connecting as the hive user? Where and how are you submitting queries to Hive from?
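
For example, you can pass an explicit username to beeline instead of connecting anonymously (host and port below are placeholders for your HiveServer2 endpoint):

# connect as user 'hive' rather than as anonymous/root
beeline -u jdbc:hive2://<hiveserver2-host>:10000 -n hive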
