
Is there a way to bring the NFS Gateway on the HDP 2.5 sandbox up?


The NFS Gateway service seems to be broken on the HDP 2.5 sandbox. I only see 0/1 services started. Is there a way to fix it quickly?

1 ACCEPTED SOLUTION

Explorer

Change the NFS dump directory from /tmp/.hdfs-nfs to /tmp/.nfs or /tmp/.hdfsnfs (i.e., remove the hyphen from the directory name) via the Ambari configs page. This will resolve the issue.
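For reference, the Ambari field corresponds to the nfs.dump.dir property in hdfs-site.xml (the key name in Hadoop 2.7, which HDP 2.5 ships; shown here as a sketch with a hyphen-free path):

<property>
  <name>nfs.dump.dir</name>
  <!-- hyphen-free dump path that the gateway can create on this sandbox -->
  <value>/tmp/.hdfsnfs</value>
</property>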


9 REPLIES

Master Mentor

@Marc Schriever According to the release notes for the HDP 2.5 Sandbox, NFS comes up automatically: http://hortonworks.com/hadoop-tutorial/hortonworks-sandbox-guide/#section_1

HDFS:

Portmap – org.apache.hadoop.portmap.Portmap
NameNode – org.apache.hadoop.hdfs.server.namenode.NameNode
DataNode – org.apache.hadoop.hdfs.server.datanode.DataNode
Nfs, Portmap – unlike the other processes, which are launched by the hdfs user, these run as the root user. The nfs process doesn't show up with a name in jps output.
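Since the nfs process doesn't register a name with jps, one way to check whether the gateway and portmap are actually running is to grep the process table instead (a quick sketch, run on the sandbox itself):

# match the Java class names on the command line;
# the brackets stop grep from matching its own process
ps aux | grep '[N]fs3'
ps aux | grep '[P]ortmap'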


I'm asking because the Ambari interface indicates that the NFS Gateway is not started.

[Screenshot: 8320-summaryhortonworkssandbox.png – Ambari summary page showing the NFS Gateway stopped]

When I try to start it manually by typing

hdfs nfs3

I get the following error message

************************************************************/
16/10/06 11:08:15 INFO nfs3.Nfs3Base: registered UNIX signal handlers for [TERM, HUP, INT]
16/10/06 11:08:15 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
16/10/06 11:08:16 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
16/10/06 11:08:16 INFO impl.MetricsSystemImpl: Nfs3 metrics system started
16/10/06 11:08:16 INFO oncrpc.RpcProgram: Will accept client connections from unprivileged ports
16/10/06 11:08:16 INFO security.ShellBasedIdMapping: Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.
16/10/06 11:08:16 INFO nfs3.WriteManager: Stream timeout is 600000ms.
16/10/06 11:08:16 INFO nfs3.WriteManager: Maximum open streams is 256
16/10/06 11:08:16 INFO nfs3.OpenFileCtxCache: Maximum open streams is 256
16/10/06 11:08:16 INFO nfs3.RpcProgramNfs3: Configured HDFS superuser is
16/10/06 11:08:16 INFO nfs3.RpcProgramNfs3: Create new dump directory /tmp/.hdfs-nfs
Exception in thread "main" java.io.IOException: Cannot create dump directory /tmp/.hdfs-nfs
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.clearDirectory(RpcProgramNfs3.java:239)
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:210)
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.createRpcProgramNfs3(RpcProgramNfs3.java:225)
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:45)
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:67)
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:73)
16/10/06 11:08:16 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down Nfs3 at sandbox.hortonworks.com/172.17.0.2
************************************************************/

I didn't change anything (except the admin password) in the sandbox environment.

Master Mentor

@Marc Schriever Can you check the permissions on the /tmp directory?

[root@sandbox ~]# ls -la / | grep tmp
drwxrwxrwt   1 root  root    4096 Oct  7 09:25 tmp

When I try to create the directory manually, it fails with the following error message:

[root@sandbox ~]# mkdir -p /tmp/.hdfs-nfs
mkdir: cannot create directory `/tmp/.hdfs-nfs': Invalid Argument

If I try to create any other directory, it works fine.

[root@sandbox ~]# mkdir -p /tmp/.hdfs-nfs_
[root@sandbox ~]#

Something blocks the creation of the directory, but I don't know what.
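One way to see exactly which call fails is to trace it (a hypothetical diagnostic; strace may first need to be installed on the sandbox):

# show the mkdir syscall and the errno behind "Invalid argument"
strace -e trace=mkdir,mkdirat mkdir /tmp/.hdfs-nfs

# check which filesystem backs /tmp; some Docker storage drivers
# reserve particular file names and reject them with EINVAL
df -T /tmp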

Master Mentor
@Marc Schriever

Do an ls on the directory. I believe it already exists; you just need to give it more permissions rather than try to create it.
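A sketch of that check, assuming the directory existed with restrictive ownership (hdfs:hadoop are the usual sandbox owner and group):

ls -ld /tmp/.hdfs-nfs
# if it exists, hand it over to the gateway user instead of recreating it
chown hdfs:hadoop /tmp/.hdfs-nfs
chmod 777 /tmp/.hdfs-nfs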


The directory doesn't exist in /tmp.

[root@sandbox tmp]# ls -latr *hdfs-nfs*
ls: cannot access *hdfs-nfs*: No such file or directory

New Contributor

Any update? I have the same issue.

Explorer

Change the NFS dump directory from /tmp/.hdfs-nfs to /tmp/.nfs or /tmp/.hdfsnfs (i.e., remove the hyphen from the directory name) via the Ambari configs page. This will resolve the issue.
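After changing the property, restart the NFSGateway from Ambari and verify the export (a sketch; localhost assumes you run this on the sandbox itself, and the mount options follow the Hadoop NFS gateway documentation):

# the gateway should export the HDFS root
showmount -e localhost

# mount HDFS over NFS and list it
mkdir -p /hdfs
mount -t nfs -o vers=3,proto=tcp,nolock localhost:/ /hdfs
ls /hdfs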


Yes, after changing the directory name for 'NFSGateway dump directory', it works for me.

Regards,
Niranjan