<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: NFS Gateway is failing automatically in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206213#M168179</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;@Chiranjeevi Nimmala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you see any error for NFS service ?   Can you please share the nfs logs from the "/var/log/hadoop/root/" directory? Like "nfs3_jsvc.out", "nfs3_jsvc.err", "hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log"&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Can you check what is the ulimit value set for the NFS service ?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Sometimes NFS crashes due to less file descriptor limit. If the value is set to too low then we can try increasing this value to alittle higher value from Ambari UI as:&lt;/P&gt;&lt;PRE&gt;Navigate to "Ambari UI --&amp;gt;  HDFS --&amp;gt; Configs --&amp;gt; Advanced --&amp;gt; Advanced hadoop-env --&amp;gt; hadoop-env template"&lt;BR /&gt;&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Now add the following entry in this "hadoop-env" template&lt;/P&gt;&lt;PRE&gt;if [ "$command" == "nfs3" ]; then ulimit -n 128000 ; fi&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Then try restarting the NFSGateway..&lt;/P&gt;</description>
    <pubDate>Sun, 06 Aug 2017 13:11:34 GMT</pubDate>
    <dc:creator>jsensharma</dc:creator>
    <dc:date>2017-08-06T13:11:34Z</dc:date>
    <item>
      <title>NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206212#M168178</link>
      <description>&lt;P&gt;Dear experts,&lt;/P&gt;&lt;P&gt;I am running HDP 2.4.3 with Ambari 2.4 on AWS EC2 instances running Red Hat Enterprise Linux Server release 7.3 (Maipo). Whenever I start the NFSGATEWAY service on a host, it automatically stops after some time. Could you please assist me with this?&lt;/P&gt;&lt;P&gt;Even if I kill the existing nfs3 process and restart the service, the issue persists. Please find a few details below:&lt;/P&gt;&lt;P&gt;ps -ef | grep nfs3&lt;/P&gt;&lt;P&gt;----------------------------------------------------------&lt;/P&gt;&lt;P&gt;root      9766     1  0 01:42 pts/0    00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.4.3.0-227/hadoop/lib/*:/usr/hdp/2.4.3.0-227/hadoop/.//*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/./:/usr/hdp/2.4.3.0-227/hadoop-hdfs/lib/*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/.//*:/usr/hdp/2.4.3.0-227/hadoop-yarn/lib/*:/usr/hdp/2.4.3.0-227/hadoop-yarn/.//*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/lib/*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/.//*::/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf:/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf -Xmx1024m -Dhdp.version=2.4.3.0-227 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.4.3.0-227 -Dhadoop.log.dir=/var/log/hadoop/ 
-Dhadoop.log.file=hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter &lt;/P&gt;&lt;P&gt;systemctl status rpcbind&lt;/P&gt;&lt;P&gt;--------------------------------------------------&lt;/P&gt;&lt;P&gt;
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since Sun 2017-08-06 01:29:31 EDT; 18min ago
 Main PID: 6164 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─6164 /sbin/rpcbind -w&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 12:03:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206212#M168178</guid>
      <dc:creator>chiru_tnk</dc:creator>
      <dc:date>2022-09-16T12:03:08Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206213#M168179</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;@Chiranjeevi Nimmala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Do you see any error for NFS service ?   Can you please share the nfs logs from the "/var/log/hadoop/root/" directory? Like "nfs3_jsvc.out", "nfs3_jsvc.err", "hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log"&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Can you check what is the ulimit value set for the NFS service ?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Sometimes NFS crashes due to less file descriptor limit. If the value is set to too low then we can try increasing this value to alittle higher value from Ambari UI as:&lt;/P&gt;&lt;PRE&gt;Navigate to "Ambari UI --&amp;gt;  HDFS --&amp;gt; Configs --&amp;gt; Advanced --&amp;gt; Advanced hadoop-env --&amp;gt; hadoop-env template"&lt;BR /&gt;&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Now add the following entry in this "hadoop-env" template&lt;/P&gt;&lt;PRE&gt;if [ "$command" == "nfs3" ]; then ulimit -n 128000 ; fi&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Then try restarting the NFSGateway..&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 13:11:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206213#M168179</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-06T13:11:34Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206214#M168180</link>
      <description>&lt;P&gt;I have tried changing the ulimit as suggested and restarted the gateway, but still no luck. I don't see any .log file, but I am able to get a few details, as below:&lt;/P&gt;&lt;P&gt;/var/log/hadoop/root&lt;/P&gt;&lt;P&gt; nfs3_jsvc.out&lt;/P&gt;&lt;P&gt;
------------------------- &lt;/P&gt;&lt;P&gt; A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x00007f7b0a23bb7c, pid=19469, tid=140166720608064
#
# JRE version:  (8.0_77-b03) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# j  java.lang.Object.&amp;lt;clinit&amp;gt;()V+0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid19469.log
#
# If you would like to submit a bug report, please visit:
#   &lt;A href="http://bugreport.java.com/bugreport/crash.jsp" target="_blank"&gt;http://bugreport.java.com/bugreport/crash.jsp&lt;/A&gt;&lt;/P&gt;&lt;P&gt;
hadoop-hdfs-nfs3-XXXXXXX.out&lt;/P&gt;&lt;P&gt;------------------------------------------------------- &lt;/P&gt;&lt;P&gt;ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63392
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 13:30:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206214#M168180</guid>
      <dc:creator>chiru_tnk</dc:creator>
      <dc:date>2017-08-06T13:30:48Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206215#M168181</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;@Chiranjeevi Nimmala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The following line of error indicates that it's a JVM crash. &lt;/P&gt;&lt;PRE&gt;# An error report file with more information is saved as: # /tmp/hs_err_pid19469.log&lt;/PRE&gt;&lt;P&gt;.&lt;/P&gt;&lt;P&gt;So if you can share the complete&lt;STRONG&gt; "/tmp/hs_err_pid19469.log" &lt;/STRONG&gt;file here then we can check why the JVM crashed.&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 13:53:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206215#M168181</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-06T13:53:11Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206216#M168182</link>
      <description>&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/23446-hs-err-pid26771.txt"&gt;hs-err-pid26771.txt&lt;/A&gt; Adding the latest log file for pid26771.&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 14:07:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206216#M168182</guid>
      <dc:creator>chiru_tnk</dc:creator>
      <dc:date>2017-08-06T14:07:03Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206217#M168183</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;@Chiranjeevi Nimmala&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;It looks somewhere similar to : &lt;A href="https://issues.apache.org/jira/browse/HDFS-12029" target="_blank"&gt;https://issues.apache.org/jira/browse/HDFS-12029&lt;/A&gt;  (although it is for DataNode) But looks like the issue is "jsvc" is crashing due to less "Xss" (Stack Size Value)&lt;BR /&gt;&lt;BR /&gt;So please try increasing the stack size to a higher value like "-Xss2m" inside the file "/usr/hdp/2.6.0.3-8/hadoop-hdfs/bin/hdfs.distro"&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Example:  &lt;/STRONG&gt; (Following line should be added to somewhere at the top like above&lt;STRONG&gt; &lt;/STRONG&gt;DEFAULT_LIBEXEC_DIR so that following script can utilize this value.&lt;/P&gt;&lt;PRE&gt;export HADOOP_OPTS="$HADOOP_OPTS -Xss2m"
DEFAULT_LIBEXEC_DIR="$bin"/../libexec&lt;/PRE&gt;&lt;P&gt;Or, to apply the setting specifically for NFS3, set -Xss2m in all the "HADOOP_OPTS" lines of the following block of &lt;STRONG&gt;"hdfs.distro"&lt;/STRONG&gt;:&lt;/P&gt;&lt;PRE&gt;# Determine if we're starting a privileged NFS daemon, and if so, redefine appropriate variables
if [ "$COMMAND" == "nfs3" ] &amp;amp;&amp;amp; [ "$EUID" -eq 0 ] &amp;amp;&amp;amp; [ -n "$HADOOP_PRIVILEGED_NFS_USER" ]; then
  if [ -n "$JSVC_HOME" ]; then
    if [ -n "$HADOOP_PRIVILEGED_NFS_PID_DIR" ]; then
      HADOOP_PID_DIR=$HADOOP_PRIVILEGED_NFS_PID_DIR
    fi

    if [ -n "$HADOOP_PRIVILEGED_NFS_LOG_DIR" ]; then
      HADOOP_LOG_DIR=$HADOOP_PRIVILEGED_NFS_LOG_DIR
      HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.dir=$HADOOP_LOG_DIR -Xss2m"
    fi
   
    HADOOP_IDENT_STRING=$HADOOP_PRIVILEGED_NFS_USER
    HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.id.str=$HADOOP_IDENT_STRING -Xss2m"
    starting_privileged_nfs="true"
  else
    echo "It looks like you're trying to start a privileged NFS server, but"\
      "\$JSVC_HOME isn't set. Falling back to starting unprivileged NFS server."
  fi
fi&lt;/PRE&gt;&lt;P&gt;Then restart the NFS Gateway.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Reference RHEL7 kernel issue:&lt;/STRONG&gt; &lt;A href="https://access.redhat.com/errata/RHBA-2017:1674" target="_blank"&gt;https://access.redhat.com/errata/RHBA-2017:1674&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 14:19:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206217#M168183</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-06T14:19:29Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206218#M168184</link>
      <description>&lt;P&gt;Thanks a lot, increasing the stack size as suggested for the NFS Gateway helped. Thanks again, you have resolved all my issues today &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 18:24:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206218#M168184</guid>
      <dc:creator>chiru_tnk</dc:creator>
      <dc:date>2017-08-06T18:24:36Z</dc:date>
    </item>
    <item>
      <title>Re: NFS Gateway is failing automatically</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206219#M168185</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;@Chiranjeevi Nimmala&lt;BR /&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Good to hear that your issue is resolved. It will be also wonderful if you can mark this thread as "Answered" (&lt;STRONG&gt;Accepted&lt;/STRONG&gt;) so that it will be useful for other HCC users to quickly browse the correct answer for their issues.&lt;BR /&gt;&lt;A rel="user" href="https://community.cloudera.com/users/29171/chirutnk.html" nodeid="29171"&gt;&lt;/A&gt; &lt;/P&gt;</description>
      <pubDate>Sun, 06 Aug 2017 18:29:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NFS-Gateway-is-failing-automatically/m-p/206219#M168185</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-08-06T18:29:01Z</dc:date>
    </item>
  </channel>
</rss>