NFSGateway could not start: "Unsupported verifier flavorAUTH_SYS" error

After adding new hosts, I found that the NFSGateway could not start.

The error information is below. Could anyone help me out with this problem?

Thanks in advance.

HDP version: 2.4.2

Ambari version: 2.4.1

OS: Ubuntu 14
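
For reference, the start command that Ambari runs (copied from the stderr below) can also be run by hand to reproduce the failure and read the gateway log directly. This is only an illustration, using the command and log path exactly as they appear in my output:

  # Same start command that Ambari executes (from the stderr below)
  ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
      --config /usr/hdp/current/hadoop-client/conf start nfs3

  # The actual failure is written to the .log file, not the .out file
  tail -n 100 /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.log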

stderr: /var/lib/ambari-agent/data/errors-3794.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/nfsgateway.py", line 147, in <module>
    NFSGateway().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/nfsgateway.py", line 58, in start
    nfsgateway(action="start")
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_nfsgateway.py", line 74, in nfsgateway
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh  -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start nfs3' returned 1. starting nfs3, logging to /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out

stdout: /var/lib/ambari-agent/data/output-3794.txt

2017-04-18 17:32:40,526 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-18 17:32:40,527 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-18 17:32:40,529 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-18 17:32:40,588 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-18 17:32:40,588 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-18 17:32:40,647 - checked_call returned (0, '')
2017-04-18 17:32:40,647 - Ensuring that hadoop has the correct symlink structure
2017-04-18 17:32:40,647 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-18 17:32:40,751 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-18 17:32:40,753 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-18 17:32:40,754 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-18 17:32:40,812 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-18 17:32:40,813 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-18 17:32:40,871 - checked_call returned (0, '')
2017-04-18 17:32:40,872 - Ensuring that hadoop has the correct symlink structure
2017-04-18 17:32:40,872 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-18 17:32:40,873 - Group['hadoop'] {}
2017-04-18 17:32:40,874 - Group['users'] {}
2017-04-18 17:32:40,874 - Group['spark'] {}
2017-04-18 17:32:40,874 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,875 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-18 17:32:40,875 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-18 17:32:40,875 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,876 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,876 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,877 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,877 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,877 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-18 17:32:40,878 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,878 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,879 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,879 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,880 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,880 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-18 17:32:40,880 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-04-18 17:32:40,881 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-04-18 17:32:40,926 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-04-18 17:32:40,926 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-04-18 17:32:40,927 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-04-18 17:32:40,927 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-04-18 17:32:40,971 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-04-18 17:32:40,972 - Group['hdfs'] {}
2017-04-18 17:32:40,972 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-04-18 17:32:40,972 - FS Type: 
2017-04-18 17:32:40,972 - Directory['/etc/hadoop'] {'mode': 0755}
2017-04-18 17:32:40,982 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-18 17:32:40,983 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-04-18 17:32:40,992 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-04-18 17:32:41,037 - Skipping Execute[('setenforce', '0')] due to not_if
2017-04-18 17:32:41,038 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-04-18 17:32:41,039 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-04-18 17:32:41,039 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-04-18 17:32:41,042 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-04-18 17:32:41,044 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-04-18 17:32:41,044 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-04-18 17:32:41,053 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-18 17:32:41,053 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-04-18 17:32:41,053 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-04-18 17:32:41,057 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-04-18 17:32:41,100 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-04-18 17:32:41,338 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-18 17:32:41,340 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-18 17:32:41,341 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-18 17:32:41,401 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-18 17:32:41,401 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-18 17:32:41,459 - checked_call returned (0, '')
2017-04-18 17:32:41,460 - Ensuring that hadoop has the correct symlink structure
2017-04-18 17:32:41,460 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-18 17:32:41,461 - Stack Feature Version Info: stack_version=2.4, version=2.4.2.0-258, current_cluster_version=2.4.2.0-258 -> 2.4.2.0-258
2017-04-18 17:32:41,473 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-18 17:32:41,474 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-18 17:32:41,476 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-18 17:32:41,537 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-18 17:32:41,537 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-18 17:32:41,596 - checked_call returned (0, '')
2017-04-18 17:32:41,596 - Ensuring that hadoop has the correct symlink structure
2017-04-18 17:32:41,597 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-18 17:32:41,604 - checked_call['dpkg -s hdp-select | grep Version | awk '{print $2}''] {'stderr': -1}
2017-04-18 17:32:41,656 - checked_call returned (0, '2.4.2.0-258', '')
2017-04-18 17:32:41,658 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-04-18 17:32:41,663 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-04-18 17:32:41,663 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-18 17:32:41,670 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-04-18 17:32:41,670 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,678 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-18 17:32:41,683 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-04-18 17:32:41,683 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,688 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-04-18 17:32:41,689 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-04-18 17:32:41,695 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-04-18 17:32:41,695 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,699 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-18 17:32:41,705 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-04-18 17:32:41,705 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,711 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.support.append': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'configurations': ...}
2017-04-18 17:32:41,717 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-04-18 17:32:41,717 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,757 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-04-18 17:32:41,763 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-04-18 17:32:41,763 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-04-18 17:32:41,784 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-04-18 17:32:41,784 - Directory['/tmp/.hdfs-nfs'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-04-18 17:32:41,785 - check if native nfs server is running
2017-04-18 17:32:41,785 - call['pgrep nfsd'] {}
2017-04-18 17:32:41,840 - call returned (1, '')
2017-04-18 17:32:41,840 - check if rpcbind or portmap is running
2017-04-18 17:32:41,840 - call['pgrep rpcbind'] {}
2017-04-18 17:32:41,895 - call returned (0, '40016')
2017-04-18 17:32:41,895 - call['pgrep portmap'] {}
2017-04-18 17:32:41,948 - call returned (1, '')
2017-04-18 17:32:41,948 - now we are ready to start nfs gateway
2017-04-18 17:32:41,949 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'mode': 0755}
2017-04-18 17:32:41,949 - Directory['/var/run/hadoop/root'] {'owner': 'root', 'group': 'hadoop', 'create_parents': True}
2017-04-18 17:32:41,949 - Directory['/var/log/hadoop/root'] {'owner': 'root', 'group': 'hadoop', 'mode': 0775}
2017-04-18 17:32:41,950 - File['/var/run/hadoop/root/hadoop_privileged_nfs3.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/root/hadoop_privileged_nfs3.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/root/hadoop_privileged_nfs3.pid'}
2017-04-18 17:32:41,994 - Execute['ambari-sudo.sh  -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start nfs3'] {'environment': {'HADOOP_PRIVILEGED_NFS_LOG_DIR': u'/var/log/hadoop/root', 'HADOOP_PRIVILEGED_NFS_PID_DIR': u'/var/run/hadoop/root', 'HADOOP_PRIVILEGED_NFS_USER': u'hdfs', 'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/root/hadoop_privileged_nfs3.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/root/hadoop_privileged_nfs3.pid'}
2017-04-18 17:32:46,137 - Execute['find /var/log/hadoop/root -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'root'}
stdin: is not a tty
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/SecurityAuth.audit <==
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.log <==
2017-04-18 17:32:43,106 INFO  nfs3.OpenFileCtxCache (OpenFileCtxCache.java:<init>(54)) - Maximum open streams is 256
2017-04-18 17:32:43,371 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:<init>(205)) - Configured HDFS superuser is 
2017-04-18 17:32:43,371 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory /tmp/.hdfs-nfs
2017-04-18 17:32:43,373 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(237)) - Create new dump directory /tmp/.hdfs-nfs
2017-04-18 17:32:43,375 INFO  nfs3.Nfs3Base (Nfs3Base.java:<init>(45)) - NFS server port set to: 2049
2017-04-18 17:32:43,377 INFO  oncrpc.RpcProgram (RpcProgram.java:<init>(84)) - Will accept client connections from unprivileged ports
2017-04-18 17:32:43,893 INFO  oncrpc.SimpleUdpServer (SimpleUdpServer.java:run(72)) - Started listening to UDP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
2017-04-18 17:32:43,907 INFO  oncrpc.SimpleTcpServer (SimpleTcpServer.java:run(90)) - Started listening to TCP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
2017-04-18 17:32:43,915 FATAL mount.MountdBase (MountdBase.java:start(106)) - Failed to register the MOUNT service.
java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
	at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
	at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
	at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
	at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
	at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
	at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
	at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:103)
	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
	at org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
2017-04-18 17:32:43,917 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-04-18 17:32:43,919 WARN  util.ShutdownHookManager (ShutdownHookManager.java:run(56)) - ShutdownHook 'Unregister' failed, java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
	at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
	at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
	at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
	at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
	at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
	at org.apache.hadoop.oncrpc.RpcProgram.unregister(RpcProgram.java:118)
	at org.apache.hadoop.mount.MountdBase$Unregister.run(MountdBase.java:120)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
2017-04-18 17:32:43,921 INFO  nfs3.Nfs3Base (LogAdapter.java:info(45)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down Nfs3 at node8.**.com/11.*.*.8
************************************************************/
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out.5 <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out.2 <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out.1 <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out.3 <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/nfs3_jsvc.out <==
==> /var/log/hadoop/root/hadoop-hdfs-nfs3-node8.56qq.com.out.4 <==
ulimit -a for privileged nfs user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1030416
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 640000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1030416
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/root/nfs3_jsvc.err <==
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1
Initializing privileged NFS client socket...
Service exit with a return value of 1

Command failed after 1 tries
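
From the log above, the gateway itself comes up (mountd listening on UDP/TCP port 4242, NFS port set to 2049) and then exits while registering the MOUNT program with the local portmapper: the RPC reply it reads back carries an AUTH_SYS verifier flavor, which org.apache.hadoop.oncrpc.security.Verifier rejects. The native nfsd is not running, and the Ambari check found a system rpcbind (pid 40016) on this host. A quick way to inspect what is registered with, and listening as, the portmapper is sketched below; these commands are illustrative and their output is not part of my report:

  # What the system portmapper currently has registered
  rpcinfo -p localhost

  # Which process owns rpcbind and the portmapper port (111)
  pgrep -l rpcbind
  sudo netstat -lnp | grep ':111 '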