Failed to start HDFS NameNode

Contributor

Hello,

I have installed Ambari 2.2.2 on CentOS 6.8 on the Azure platform. I installed the services I need successfully, but when I try to start them, ZooKeeper starts fine but HDFS does not!

Here is the log from /var/log/hadoop/hdfs/hadoop-hdfs-namenode-namenode.log:

2017-02-14 11:00:21,714 ERROR namenode.NameNode (NameNode.java:main(1714)) - Failed to start namenode.
java.net.BindException: Port in use: namenode:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:156)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:892)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:720)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:951)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:935)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1641)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1709)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
        ... 8 more
2017-02-14 11:00:21,715 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-02-14 11:00:21,718 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:

I want to mention that port 50070 is not in use, because the following command does not return any output:

netstat -tulpn | grep :50070

Also, I changed the port number to 50071 and restarted everything, but the issue was not solved.

Any help will be appreciated.

Thank you.
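
An aside on the error itself (this note is mine, not part of the original question): the nested "Cannot assign requested address" means the OS refused the bind because the IP that the hostname "namenode" resolves to is not assigned to any local interface, which is also why changing the port from 50070 to 50071 made no difference. A quick cross-check looks roughly like this:

getent hosts namenode          # the address the NameNode HTTP server will try to bind
ip addr show | grep 'inet '    # the addresses actually configured on this machine
# if the first address is missing from the second list, the bind fails with
# "Cannot assign requested address" no matter which port is configured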

1 ACCEPTED SOLUTION

Contributor

I found the source of the problem and solved it!

I was using the public IP in my hosts file. Azure doesn't allow binding to the public IP, so it was impossible to start the services.

I changed everything to the private IP and the issue is solved. Thanks for your help.
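
For anyone landing here with the same symptom, a rough sketch of what that hosts-file change looks like; the addresses below are placeholders I made up, not values from this thread. On an Azure VM the public IP is provided through NAT and is never assigned to the VM's network interface, so the hostname must map to the private IP:

# /etc/hosts - before (fails: the public IP is not bound to any local interface)
# 40.113.0.10    namenode
# /etc/hosts - after (works: the private IP reported by ifconfig / the Azure portal)
10.0.0.4         namenode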


7 REPLIES

Super Guru
@oussama ben lamine

I suspect you are hitting https://issues.apache.org/jira/browse/AMBARI-10021.

Can you restart the machine and try to start the NameNode process again? Let me know if that helps.

Please also make sure you have iptables and SELinux disabled and the hostname set correctly.

Check the output of hostname and hostname -f. Both should match.
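
As a side note (not part of the original reply), on CentOS 6 those checks would look roughly like this; the hostname output shown assumes the setup described later in this thread:

service iptables status    # should report that the firewall is not running
getenforce                 # should print Disabled (or Permissive)
hostname                   # namenode
hostname -f                # namenode  <- must match the plain hostname output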

Contributor

@Sagar Shimpi

Hello,

- That bug was fixed in the version I am using (2.2.2).

- iptables is disabled, and so is SELinux.

- hostname and hostname -f give the same output: "namenode".

- I restarted the machine, but the issue is the same.

Super Guru
@oussama ben lamine

Can you pass me the output of the following commands?

$ ps -aef | grep namenode

$ netstat -taupen | grep <pid_of_namenode>

$ cat /etc/hosts

$ ifconfig
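
For readers following along, a short gloss on what each command is meant to reveal (my annotation, not part of the original reply): ps shows whether a NameNode JVM is running at all and what its PID is; netstat shows which addresses and ports that PID has actually bound; /etc/hosts shows which IP the NameNode hostname resolves to; and ifconfig shows which IPs are really configured on the interfaces. If the hosts-file address does not appear in the ifconfig output, that mismatch is exactly what produces "Cannot assign requested address".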

Contributor

Thanks for the interest, but I've solved the issue 🙂


New Contributor

[root@sandbox ~]# ps -aef | grep namenode
hdfs 616 0 7 05:50 ? 00:00:19 /usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201705260550 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201705260550 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201705260550 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
root 4642 4496 0 05:55 pts/0 00:00:00 grep namenode

[root@sandbox ~]# netstat -taupen | grep 4496
[root@sandbox ~]# netstat -taupen | grep 4642

[root@sandbox ~]# cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.0.2      sandbox.hortonworks.com sandbox

[root@sandbox ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3159 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1421 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:423909 (413.9 KiB)  TX bytes:2673388 (2.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:20245 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20245 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14721372 (14.0 MiB)  TX bytes:14721372 (14.0 MiB)


I would like to say that the same thing happens on RHEL 7 on AWS (EC2). It cost me both money and time! Thanks for this; it took me quite some time to find it. Sorry for bumping an old post!
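
A closing note of my own, not something discussed in the thread above: besides pointing /etc/hosts at the private IP, HDFS also has bind-host settings that make the NameNode listen on all local interfaces, which sidesteps the public-vs-private address mismatch on cloud VMs. A rough hdfs-site.xml sketch, to be verified against the hdfs-default.xml shipped with your Hadoop version:

<!-- hypothetical hdfs-site.xml fragment: bind the NameNode HTTP and RPC servers
     to all interfaces instead of the single address the hostname resolves to -->
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>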