
Unable to switch to admin user in Master node for hdfs write access

Explorer

http://hortonworks.github.io/hdp-aws/using/index.html#switching-to-the-admin-user

I used CloudController to install the cluster, and creation went fine.

I was trying to write to HDFS. As described at the URL above, I tried to switch from the cloudbreak user to the admin user, but ran into the error below.

[cloudbreak@ip-10-0-1-253 ~]$ sudo su - admin
su: user admin does not exist

Read Access:

[cloudbreak@ip-10-0-1-253 ~]$ hadoop fs -ls /user
Found 6 items
drwxr-xr-x   - admin     hdfs          0 2017-01-07 10:37 /user/admin
drwxrwx---   - ambari-qa hdfs          0 2017-01-07 10:32 /user/ambari-qa
drwxr-xr-x   - hcat      hdfs          0 2017-01-07 10:34 /user/hcat
drwxr-xr-x   - hive      hdfs          0 2017-01-07 10:36 /user/hive
drwxrwxr-x   - spark     hdfs          0 2017-01-07 10:34 /user/spark
drwxr-xr-x   - yarn      hdfs          0 2017-01-07 10:37 /user/yarn

Write Access:

hadoop fs -mkdir /user/ariya
17/01/07 11:34:45 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.mkdirs over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=cloudbreak, access=WRITE, inode="/user/ariya":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
    at org.apache.hadoop.ipc.Client.call(Client.java:1496)
    at org.apache.hadoop.ipc.Client.call(Client.java:1396)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:603)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3061)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3031)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1162)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1158)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1158)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1150)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913)
    at org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:76)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:273)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)
mkdir: Permission denied: user=cloudbreak, access=WRITE, inode="/user/ariya":hdfs:hdfs:drwxr-xr-x

4 Replies

Re: Unable to switch to admin user in Master node for hdfs write access

Super Collaborator
@Ariya Bala Sadaiappan

Can you switch to the hdfs user with sudo su and then try creating the HDFS directories?

#sudo su - hdfs
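
For example, here is a minimal sketch assuming the goal is to create the /user/ariya directory from the error above (the user name, group, and path are illustrative; adjust them to your setup):

#sudo su - hdfs

#hdfs dfs -mkdir -p /user/ariya

#hdfs dfs -chown ariya:hdfs /user/ariya

The hdfs user is the HDFS superuser, so it can create directories anywhere and then hand ownership to the user who needs write access.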


Re: Unable to switch to admin user in Master node for hdfs write access

Explorer

With the hdfs user I can create directories. But when I scp files from my local machine to the cloudbreak user on the master node and then want to put those files into HDFS, the hdfs user does not have permission to read the files under /home/cloudbreak/. Please suggest how to move files from my local machine into HDFS.


Re: Unable to switch to admin user in Master node for hdfs write access

Super Collaborator

Create a user directory for cloudbreak in HDFS, and then the cloudbreak user should be able to copy the local files to HDFS.

#sudo su - hdfs

#hdfs dfs -mkdir /user/cloudbreak

#hdfs dfs -chown cloudbreak /user/cloudbreak

Now, as the cloudbreak user, try copying the files from local storage to the HDFS directory /user/cloudbreak.
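
For example, as the cloudbreak user (the file name below is just a placeholder for whatever you scp'd into /home/cloudbreak):

$hdfs dfs -put /home/cloudbreak/myfile.txt /user/cloudbreak/

$hdfs dfs -ls /user/cloudbreak

This works because /user/cloudbreak is now owned by cloudbreak, so the write permission check that failed earlier will pass.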


Re: Unable to switch to admin user in Master node for hdfs write access

Super Guru
@Ariya Bala Sadaiappan

In addition to the answers given already,

If you want to copy data to an HDFS location other than /user/cloudbreak, or to some location the cloudbreak user does not have access to, you can create a temporary directory on local storage and scp your data into it.

e.g. mkdir /tmp/input_data_tobecopied

Since this location is under /tmp, the hdfs user should be able to read it, and you can then copy the data, as the hdfs user, to wherever you want in HDFS.
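
To make this concrete, here is a rough end-to-end sketch continuing from the mkdir example above; the key file, data file, host name, and HDFS destination are all illustrative:

From your local machine, scp the data into the staging directory:

$scp -i mykey.pem mydata.csv cloudbreak@<master-host>:/tmp/input_data_tobecopied/

Then on the master node, switch to the hdfs user and copy the data into HDFS:

$sudo su - hdfs

$hdfs dfs -put /tmp/input_data_tobecopied/mydata.csv /user/cloudbreak/

This works because files created under /tmp are typically world-readable (with the default umask), so the hdfs user can read them even though it cannot read /home/cloudbreak.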
