How to enable chown commands via Hadoop NFS Gateway

Explorer

I have a use case where I have enabled the NFS gateway for my Hadoop system following this nice guide. I have mounted it on another machine via:

sudo mount -v -t nfs -o vers=3,proto=tcp,nolock,noacl $ip:/dataDir /mountDir

Now there is a use case where I need to run the chown command on a file in the dataDir folder, so I run the following:

chown user2  /mountDir/sample.txt

But this gives error:

chown: changing ownership of `/mountDir/sample.txt': Permission denied

and I get the following in the NFS gateway logs:

18/04/05 23:54:25 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.security.AccessControlException: Non-super user cannot change owner
        at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

I also tried adding the following to the /etc/nfs.map file, as mentioned in the docs (the error I faced while doing this is detailed here):

uid 0 594903  // where 0 is the uid of root on the other machine, and 594903 is the uid of hdfs, which is the superuser on the datanode machine where the NFS gateway is running.
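For reference, the static mapping file takes one mapping per line in the form 'uid <remote-id> <local-id>' or 'gid <remote-id> <local-id>', for example (the gid line is only an illustration, not something I have actually added):

uid 0 594903
gid 0 594903

i.e. map the remote uid 0 (root on the client) to the local uid 594903 (hdfs on the gateway host); a group mapping would follow the same pattern.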

But I still get this error:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied. user=root is not the owner of inode=sample3.txt
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:250)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:80)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

Any idea how to get this done?

1 REPLY

Contributor

@Saurabh,

There seem to be 2 different issues at hand.

1) Make sure the user ID of each user is the same across all nodes in the cluster (otherwise this will cause conflicts, since the NFS permission mapping uses the username, user ID, and group ID).

So, as you can see above in your description:

uid 0 594903  // where 0 is the uid of root on the other machine, and 594903 is the uid of hdfs, which is the superuser on the datanode machine where the NFS gateway is running.

This is caused by the same reason, so it is better to keep uid 0 mapped to root and update the user ID for hdfs (but once you do that, you will need to update a lot of directories to map to the new uid of the hdfs user). I'm not sure how complicated this might get, but it has to be done; a rough sketch is below.
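A rough sketch of what that could look like on the node whose hdfs uid differs (594903 and OLD_UID are placeholders for your actual values, and the exact commands depend on your distro):

id hdfs                                                     # check the current uid of hdfs on this node
sudo usermod -u 594903 hdfs                                 # change the hdfs uid to match the other nodes
sudo find / -xdev -uid OLD_UID -exec chown -h hdfs {} \;    # re-own local files still tagged with the old uid

HDFS itself stores owners by username, so it is mostly local files (data directories, log directories, etc.) that need re-owning after the uid change.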

2) Make sure the user you want to change the ownership to (via chown) is part of the config files given when you changed the default FS to NFS. Those files (in my case users.json and groups.json) list each user and its user ID (in users.json), and each group you want to configure for NFS goes into groups.json (group name and its ID).

Example entry in users.json:

{ "userName":"root", "userID":"0" }

Example entry in groups.json:

{ "groupName":"root", "groupID":"0" }

Also, to run the 'chown' command, make sure you are doing it as the hdfs user (from your log above, it seems only the hdfs superuser can change ownership).
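For example, one way to side-step the NFS mapping entirely is to do the chown through the HDFS CLI as the hdfs superuser (assuming the HDFS path behind your mount is /dataDir/sample.txt):

sudo -u hdfs hdfs dfs -chown user2 /dataDir/sample.txt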

Hope this helps.