NameNode Not Starting: Issue in executing command

New Contributor

I am getting the following error when starting all services through Ambari, and I am unable to work out what exactly is causing it:

stderr:   /var/lib/ambari-agent/data/errors-316.txt


2017-10-29 23:59:27,595 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 86, in start
    namenode(action="start", rolling_restart=rolling_restart, env=env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 75, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 219, in service
    environment=hadoop_env_exports
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
    raise ex
Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
stdout:   /var/lib/ambari-agent/data/output-316.txt


2017-10-29 23:59:13,966 - u"Group['hadoop']" {'ignore_failures': False}
2017-10-29 23:59:13,968 - Modifying group hadoop
2017-10-29 23:59:14,071 - u"Group['users']" {'ignore_failures': False}
2017-10-29 23:59:14,071 - Modifying group users
2017-10-29 23:59:14,159 - u"User['zookeeper']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2017-10-29 23:59:14,160 - Modifying user zookeeper
2017-10-29 23:59:14,208 - u"User['ams']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2017-10-29 23:59:14,209 - Modifying user ams
2017-10-29 23:59:14,258 - u"User['ambari-qa']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2017-10-29 23:59:14,259 - Modifying user ambari-qa
2017-10-29 23:59:14,308 - u"User['hdfs']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2017-10-29 23:59:14,308 - Modifying user hdfs
2017-10-29 23:59:14,356 - u"User['yarn']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2017-10-29 23:59:14,357 - Modifying user yarn
2017-10-29 23:59:14,406 - u"User['mapred']" {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2017-10-29 23:59:14,407 - Modifying user mapred
2017-10-29 23:59:14,457 - u"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']" {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-10-29 23:59:14,774 - u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']" {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-10-29 23:59:14,822 - Skipping u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']" due to not_if
2017-10-29 23:59:14,823 - u"Group['hdfs']" {'ignore_failures': False}
2017-10-29 23:59:14,824 - Modifying group hdfs
2017-10-29 23:59:14,912 - u"User['hdfs']" {'ignore_failures': False, 'groups': [u'hadoop', 'hadoop', 'hdfs', u'hdfs']}
2017-10-29 23:59:14,912 - Modifying user hdfs
2017-10-29 23:59:14,960 - u"Directory['/etc/hadoop']" {'mode': 0755}
2017-10-29 23:59:15,132 - u"Directory['/etc/hadoop/conf.empty']" {'owner': 'root', 'group': 'hadoop', 'recursive': True}
2017-10-29 23:59:15,299 - u"Link['/etc/hadoop/conf']" {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2017-10-29 23:59:15,350 - Skipping u"Link['/etc/hadoop/conf']" due to not_if
2017-10-29 23:59:15,365 - u"File['/etc/hadoop/conf/hadoop-env.sh']" {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-10-29 23:59:15,656 - u"Execute['('setenforce', '0')']" {'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-10-29 23:59:15,721 - Skipping u"Execute['('setenforce', '0')']" due to only_if
2017-10-29 23:59:15,724 - u"Directory['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32']" {'recursive': True}
2017-10-29 23:59:15,893 - u"Directory['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64']" {'recursive': True}
2017-10-29 23:59:16,069 - u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']" {'to': '/usr/hdp/current/hadoop-client/lib/libsnappy.so'}
2017-10-29 23:59:16,166 - u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']" replacing old symlink to /usr/hdp/2.2.4.2-2/hadoop/lib/libsnappy.so
2017-10-29 23:59:16,259 - Warning: linking to nonexistent location /usr/hdp/current/hadoop-client/lib/libsnappy.so
2017-10-29 23:59:16,260 - Creating symbolic u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']"
2017-10-29 23:59:16,306 - u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']" {'to': '/usr/hdp/current/hadoop-client/lib64/libsnappy.so'}
2017-10-29 23:59:16,404 - u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']" replacing old symlink to /usr/hdp/2.2.4.2-2/hadoop/lib64/libsnappy.so
2017-10-29 23:59:16,498 - Warning: linking to nonexistent location /usr/hdp/current/hadoop-client/lib64/libsnappy.so
2017-10-29 23:59:16,498 - Creating symbolic u"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']"
2017-10-29 23:59:16,545 - u"Directory['/var/log/hadoop']" {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2017-10-29 23:59:17,000 - u"Directory['/var/run/hadoop']" {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2017-10-29 23:59:17,165 - Changing owner for /var/run/hadoop from 504 to root
2017-10-29 23:59:17,212 - Changing group for /var/run/hadoop from 501 to root
2017-10-29 23:59:17,547 - u"Directory['/tmp/hadoop-hdfs']" {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2017-10-29 23:59:17,919 - u"File['/etc/hadoop/conf/commons-logging.properties']" {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-10-29 23:59:18,191 - u"File['/etc/hadoop/conf/health_check']" {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
2017-10-29 23:59:18,459 - u"File['/etc/hadoop/conf/log4j.properties']" {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-10-29 23:59:18,740 - u"File['/etc/hadoop/conf/hadoop-metrics2.properties']" {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2017-10-29 23:59:19,002 - u"File['/etc/hadoop/conf/task-log4j.properties']" {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-10-29 23:59:19,511 - u"Directory['/etc/security/limits.d']" {'owner': 'root', 'group': 'root', 'recursive': True}
2017-10-29 23:59:19,710 - u"File['/etc/security/limits.d/hdfs.conf']" {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-10-29 23:59:19,979 - u"XmlConfig['hadoop-policy.xml']" {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-10-29 23:59:20,002 - Generating config: /etc/hadoop/conf/hadoop-policy.xml
2017-10-29 23:59:20,002 - u"File['/etc/hadoop/conf/hadoop-policy.xml']" {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-29 23:59:20,214 - Writing u"File['/etc/hadoop/conf/hadoop-policy.xml']" because contents don't match
2017-10-29 23:59:20,383 - u"XmlConfig['hdfs-site.xml']" {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.namenode.http-address': u'true'}}, 'configurations': ...}
2017-10-29 23:59:20,401 - Generating config: /etc/hadoop/conf/hdfs-site.xml
2017-10-29 23:59:20,401 - u"File['/etc/hadoop/conf/hdfs-site.xml']" {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-10-29 23:59:20,640 - Writing u"File['/etc/hadoop/conf/hdfs-site.xml']" because contents don't match
2017-10-29 23:59:20,813 - u"XmlConfig['core-site.xml']" {'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-10-29 23:59:20,832 - Generating config: /etc/hadoop/conf/core-site.xml
2017-10-29 23:59:20,832 - u"File['/etc/hadoop/conf/core-site.xml']" {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-10-29 23:59:21,042 - Writing u"File['/etc/hadoop/conf/core-site.xml']" because contents don't match
2017-10-29 23:59:21,220 - u"File['/etc/hadoop/conf/slaves']" {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-10-29 23:59:21,494 - u"Directory['/mach/hadoop/hdfs/namenode']" {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-10-29 23:59:22,048 - Ranger admin not installed
/mach/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2017-10-29 23:59:22,050 - u"Directory['/mach/hadoop/hdfs/namenode/namenode-formatted/']" {'recursive': True}
2017-10-29 23:59:22,226 - u"File['/etc/hadoop/conf/dfs.exclude']" {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2017-10-29 23:59:22,501 - u"Directory['/var/run/hadoop']" {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-10-29 23:59:22,668 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-10-29 23:59:22,717 - Changing group for /var/run/hadoop from 0 to hadoop
2017-10-29 23:59:22,769 - u"Directory['/var/run/hadoop/hdfs']" {'owner': 'hdfs', 'recursive': True}
2017-10-29 23:59:22,947 - u"Directory['/var/log/hadoop/hdfs']" {'owner': 'hdfs', 'recursive': True}
2017-10-29 23:59:23,128 - u"File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']" {'action': ['delete'], 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1'}
2017-10-29 23:59:23,271 - Deleting u"File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']"
2017-10-29 23:59:23,321 - u"Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'']" {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1'}
2017-10-29 23:59:27,595 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 86, in start
    namenode(action="start", rolling_restart=rolling_restart, env=env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 75, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 219, in service
    environment=hadoop_env_exports
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
    raise ex
Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
2017-10-29 23:59:27,651 - Command: /usr/bin/hdp-select status hadoop-hdfs-namenode > /tmp/tmphSPjym
Output: hadoop-hdfs-namenode - 2.2.4.2-2
5 REPLIES

Re: NameNode Not Starting: Issue in executing command

Super Mentor

@K D

More detailed information about the NameNode startup failure is most likely recorded in the following file. Can you please share its contents?

# ls -l  /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.out


Also, please check whether you are able to start the NameNode manually. This is to isolate whether the issue is on the Ambari side or in HDFS itself:

# Execute this command on the NameNode host machine(s):
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
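If the manual start also returns without an obvious error, a quick way to surface recent problems is to grep the NameNode `.log`/`.out` files. A minimal sketch (the helper name `scan_nn_logs` is mine, not part of HDP; the directory is the one from the output above):

```shell
# scan_nn_logs DIR: print recent ERROR/FATAL/Exception lines from the
# NameNode .log and .out files found under DIR.
scan_nn_logs() {
    dir=$1
    for f in "$dir"/hadoop-hdfs-namenode-*.log "$dir"/hadoop-hdfs-namenode-*.out; do
        [ -e "$f" ] || continue          # skip if the glob matched nothing
        echo "== $f =="
        grep -nE 'ERROR|FATAL|Exception' "$f" | tail -n 20
    done
}

# On this cluster, the log directory from the output above would be:
# scan_nn_logs /var/log/hadoop/hdfs
```

If both files are clean but the Ambari start still fails, that points toward the agent side rather than HDFS itself.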


Please see the following link for more details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_reference/content/starting_hdp_services....

Re: NameNode Not Starting: Issue in executing command

New Contributor

Content of file /var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.out:

ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128357
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited		

Re: NameNode Not Starting: Issue in executing command

Super Mentor

@K D

The out file looks fine.

Do you see any error inside the file "/var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.log"?


Also, are you able to start the process manually from the command line?

Re: NameNode Not Starting: Issue in executing command

New Contributor

Yes, I am able to start it manually, and that does not throw any error.

The "/var/log/hadoop/hdfs/hadoop-hdfs-namenode-mach-vm1.opl.log" file showed that the NameNode service was already running.

After restarting ambari-agent, the NameNode started working. Thanks for your help @Jay SenSharma
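For anyone hitting the same symptom: the `not_if` clause visible in the output above only skips the start when the pid file points at a live process, so a stale pid file or a desynchronized agent can make Ambari disagree with the actual process state. A minimal sketch of that same liveness check (the helper name `nn_running` is my own, not an Ambari command):

```shell
# nn_running PID_FILE: succeed only when PID_FILE exists and names a
# live process -- the same liveness test as Ambari's not_if clause.
nn_running() {
    [ -f "$1" ] && ps -p "$(cat "$1")" >/dev/null 2>&1
}

# With the path from the log above, a disagreement would show up as:
# nn_running /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid \
#     || echo "NameNode not running (or stale pid file)"
```

If this check disagrees with what `jps` reports, restarting ambari-agent (as the poster did) forces the agent to re-evaluate the process state.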

Re: NameNode Not Starting: Issue in executing command

Super Mentor

@K D

Good to know that your issue is resolved. Since it is resolved, it would be great if you could mark this HCC thread as answered by clicking the "Accept" button on the correct answer. That way, other HCC users can quickly find the solution when they encounter the same issue.
