Support Questions

Find answers, ask questions, and share your expertise

Need help, please. I have used Ambari with HDP 2.3; all the services started the first time, but they are not starting now. I am not able to start the DataNode, NameNode, or Secondary NameNode.

Expert Contributor

Below is the exception I am getting:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 433, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 219, in execute
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 102, in start
    namenode(action="start", hdfs_binary=hdfs_binary, upgrade_type=upgrade_type, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 112, in namenode
  File "/var/lib/ambari-agent/cache/common-services/HDFS/", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 154, in __init__
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 121, in run_action
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of ' su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/ --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out

Super Collaborator


Have you tried using internal ip instead?

Please give it a shot if not already done.




What's the output of /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.log?


Can you please post the logs from /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out? Also post the output of ulimit -a from the nodes where the DataNode and NameNode are running.

Expert Contributor

Hi @Saurabh Kumar, thank you so much.

Output of /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out:

ulimit -a for user hdfs

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63413
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Expert Contributor

@Saurabh Kumar

It looks like it is not able to bind port 50070 on the hostname: "Port in use: pp-hdp-m:50070". But when I checked, this port is not being used by any other process.


Prakash Punj

Master Mentor
@Prakash Punj

Are you using a VM? Please provide more details on your environment. Vagrant?

15:12:21,968 ERROR namenode.NameNode ( - Failed to start namenode. Port in use: pp-hdp-m:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(
    at org.apache.hadoop.http.HttpServer2.start(
    at

netstat -anp | grep 50070
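If netstat shows nothing on 50070 but the start still fails, one sanity check (a sketch I'm adding for illustration, not part of the original replies) is to try binding the port yourself. Note that a bind can also fail when the hostname resolves to an address the machine does not own, which looks like "port in use" even though nothing is listening:

```python
import socket

def port_in_use(host, port):
    """Try to bind host:port; a socket.error means the port is taken,
    or the host resolves to an address this machine does not own."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind((host, port))
        return False
    except socket.error:
        return True
    finally:
        s.close()

# Checking "" separately from the pp-hdp-m hostname separates
# "port actually taken" from "hostname resolves to the wrong address".
print(port_in_use("", 50070))
```

If `port_in_use("", 50070)` is False but binding the hostname fails, the problem is name resolution, not a stale process.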

Master Mentor

@Prakash Punj

See this from my environment.

There is an issue with the networking in your environment. You have to find out what is running on that port in your environment and kill those processes.


Master Mentor

Kill the process that is using the port 50070

15:12:21,968 ERROR namenode.NameNode ( - Failed to start namenode. Port in use: pp-hdp-m:50070
    at org.apache.hadoop.http.HttpServer2.openListeners(
    at org.apache.hadoop.http.HttpServer2.start(
    at

Expert Contributor

Thanks @Neeraj Sabharwal. I am very new to this. I am using HDP 2.3 with Ambari. Thanks for helping me out.

Yes, I am using a VM (CentOS 7). It looks like something is messed up in the hostname configuration, the internal IP, or something like that.

Content of the /etc/hosts file:

localhost localhost.localdomain localhost4 localhost4.localdomain4
pp-hdp-s2
pp-hdp-s1
pp-hdp-m - this is where I am installing Ambari and the NameNode

Content of /etc/resolv.conf:

; generated by /usr/sbin/dhclient-script
search ASOTC

(Where is this ASOTC coming from? I do see this name getting appended as the hostname in some places.)
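One quick way to see what this box actually resolves for itself, and whether the stray search domain leaks in, is a short check like this (an illustrative sketch, not from the original post):

```python
import socket

# What this machine believes about its own identity. If these disagree
# with the pp-hdp-m entry in /etc/hosts, or pick up the stray "ASOTC"
# search domain from resolv.conf, the NameNode can try to bind an
# address the box does not own, which surfaces as the misleading
# "Port in use: pp-hdp-m:50070" error.
print("hostname:    " + socket.gethostname())
print("fqdn:        " + socket.getfqdn())
try:
    print("resolves to: " + socket.gethostbyname(socket.gethostname()))
except socket.error as err:
    # This is exactly the failure mode to look for on the VM.
    print("hostname does not resolve: " + str(err))
```

The resolved address should match the pp-hdp-m line in /etc/hosts; a second internal VM address showing up here would explain the netstat entry below.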


Below is one entry from netstat (it looks like the VM has one more internal IP):

tcp 0 0 pp-hdp-m: eforward TIME_WAIT

hostname -f


hostname -i

NETWORK configuration file:




Master Mentor

@Prakash Punj try rebooting that server and restarting the Ambari server. The log you uploaded says "Port in use: pp-hdp-m:50070 at org.apache.hadoop.http.HttpServer2.openListeners"

If you want help, you need to follow some tips from this forum; by elimination we can come to your rescue!