Support Questions

Find answers, ask questions, and share your expertise

Is there a way to bring the NFS gateway on the HDP 2.5 sandbox up?


The NFS Gateway service seems to be broken on the HDP 2.5 sandbox. I only see 0/1 services started. Is there a way to fix it quickly?



Change the NFS dump directory to /tmp/.nfs or /tmp/.hdfsnfs (i.e., remove the hyphen from the directory name) instead of /tmp/.hdfs-nfs via the Ambari configs page. This will solve the issue.
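For reference, if you prefer editing configuration files directly rather than going through Ambari, this setting corresponds to the NFS gateway dump-directory property in hdfs-site.xml (property name assumed from the Hadoop 2.x NFS gateway documentation; verify it matches your HDP version):

```xml
<!-- hdfs-site.xml: NFS gateway dump directory, renamed to drop the hyphen -->
<property>
  <name>nfs.dump.dir</name>
  <value>/tmp/.hdfsnfs</value>
</property>
```

Restart the NFS Gateway afterwards so the new value is picked up.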



Master Mentor

@Marc Schriever according to the release notes for the HDP 2.5 Sandbox, NFS comes up automatically:

HDFS:

- Portmap – org.apache.hadoop.portmap.Portmap
- NameNode – org.apache.hadoop.hdfs.server.namenode.NameNode
- DataNode – org.apache.hadoop.hdfs.server.datanode.DataNode
- Nfs, Portmap – unlike the other processes, which are launched by the hdfs user, these are run as the root user. The nfs process doesn't show up with a name in jps output.


I'm wondering about that, since the Ambari interface indicates that the NFS Gateway is not started.


When I try to start it manually by typing

hdfs nfs3

I get the following error message

16/10/06 11:08:15 INFO nfs3.Nfs3Base: registered UNIX signal handlers for [TERM, HUP, INT]
16/10/06 11:08:15 INFO impl.MetricsConfig: loaded properties from
16/10/06 11:08:16 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
16/10/06 11:08:16 INFO impl.MetricsSystemImpl: Nfs3 metrics system started
16/10/06 11:08:16 INFO oncrpc.RpcProgram: Will accept client connections from unprivileged ports
16/10/06 11:08:16 INFO security.ShellBasedIdMapping: Not doing static UID/GID mapping because '/etc/' does not exist.
16/10/06 11:08:16 INFO nfs3.WriteManager: Stream timeout is 600000ms.
16/10/06 11:08:16 INFO nfs3.WriteManager: Maximum open streams is 256
16/10/06 11:08:16 INFO nfs3.OpenFileCtxCache: Maximum open streams is 256
16/10/06 11:08:16 INFO nfs3.RpcProgramNfs3: Configured HDFS superuser is
16/10/06 11:08:16 INFO nfs3.RpcProgramNfs3: Create new dump directory /tmp/.hdfs-nfs
Exception in thread "main" Cannot create dump directory /tmp/.hdfs-nfs
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.clearDirectory(
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.createRpcProgramNfs3(
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(
16/10/06 11:08:16 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down Nfs3 at

I didn't change anything (except the admin password) in the sandbox environment.

Master Mentor

@Marc Schriever can you check the permissions on the /tmp directory?

[root@sandbox ~]# ls -la / | grep tmp
drwxrwxrwt   1 root  root    4096 Oct  7 09:25 tmp
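That listing already shows the expected mode for /tmp (world-writable with the sticky bit, i.e. 1777), so plain permissions are unlikely to be the problem. A quick way to confirm the numeric mode:

```shell
# Print the octal mode of /tmp; on a healthy system this should be 1777
stat -c '%a %n' /tmp
```

If this reports 1777, the "Invalid argument" error must come from something other than directory permissions.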

When I try to create the directory manually, it doesn't work because of the following error message:

[root@sandbox ~]# mkdir -p /tmp/.hdfs-nfs
mkdir: cannot create directory `/tmp/.hdfs-nfs': Invalid Argument

If I try to create any other directory, it works fine.

[root@sandbox ~]# mkdir -p /tmp/.hdfs-nfs_
[root@sandbox ~]#

Something is blocking the creation of the directory, but I don't know what.
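One way to narrow this down (a diagnostic sketch, not from the original thread) is to probe the failing name against a control name and capture the exact error; this confirms whether the block is specific to the ".hdfs-nfs" string. On the affected sandbox the first attempt should report the "Invalid argument" error while the control succeeds:

```shell
# Try the problematic name and a control name; print the result for each.
for d in /tmp/.hdfs-nfs /tmp/.hdfsnfs; do
  if err=$(mkdir -p "$d" 2>&1); then
    echo "$d: created"
  else
    echo "$d: $err"
  fi
done
```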

Master Mentor
@Marc Schriever

Do an ls on the directory. I believe it already exists; you just need to give it more permissions rather than trying to create it.


The directory doesn't exist in /tmp.

[root@sandbox tmp]# ls -latr *hdfs-nfs*
ls: cannot access *hdfs-nfs*: No such file or directory

New Contributor

Any update? I have the same issue.




Yes, after changing the directory name for 'NFSGateway dump directory', it works for me.