
Datanode starts but doesn't connect to namenode

Contributor

I installed a 2-node Hadoop cluster. The master and slave nodes start fine on their own, but the datanode isn't shown in the namenode web UI. The datanode log file shows the following error:

    2016-02-09 23:30:53,920 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2016-02-09 23:30:53,920 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
    2016-02-09 23:30:54,976 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-master/172.17.25.5:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
    2016-02-09 23:30:55,977 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-master/172.17.25.5:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
    2016-02-09 23:21:15,062 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/172.17.25.5:9000

The hosts file on the slave is:

172.17.25.5    hadoop-master
127.0.0.1    silp-ProLiant-DL360-Gen9
172.17.25.18    hadoop-slave-1
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The core-site.xml file is:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Kindly help with this issue.

2 ACCEPTED SOLUTIONS

Master Mentor
@Kumar Sanyam

Is there connectivity between the servers?

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/172.17.25.5:9000

Please make sure that SSH works between the nodes and that iptables is off.
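
For example, a quick way to check both (a sketch; the hostname and port come from the log above, and firewalld applies only to systemd-based systems such as CentOS 7):

$ # Check that the namenode port is reachable from the slave (nc -zv works too)
$ telnet hadoop-master 9000
$ # Disable the firewall: iptables on CentOS/RHEL 6, firewalld on CentOS/RHEL 7
$ sudo service iptables stop
$ sudo systemctl stop firewalld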


Contributor

Thanks, everyone. Solved the problem. The namenode was actually listening on localhost:9000 while the datanode tried to connect to hadoop-master:9000, hence the connection failure. Changing the namenode's listening IP:port to hadoop-master:9000 fixed it.
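
A common cause of this (an assumption, not stated in the thread) is an /etc/hosts entry on the master that maps hadoop-master to 127.0.0.1, which makes the namenode bind to the loopback interface. One way to confirm which address the namenode is actually bound to (a sketch; the IP comes from the hosts file above):

$ # On the master: the namenode should be listening on 172.17.25.5:9000
$ # (or 0.0.0.0:9000), not 127.0.0.1:9000
$ sudo netstat -tlnp | grep 9000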


12 REPLIES

New Contributor

Thanks for the solution!

New Contributor

Thanks all for contributing to this post. I had the same issue with a 2-node Hadoop cluster on CentOS 7. In my case the firewall was the cause: allowing my namenode port (8020 in my case) through the firewall on the namenode machine let the datanodes connect.

$ sudo firewall-cmd --zone=public --add-port=8020/tcp --permanent
$ sudo firewall-cmd --reload
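
Afterwards you can confirm the port is open (a usage sketch):

$ sudo firewall-cmd --zone=public --list-ports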


New Contributor

Add the port to the firewall and enjoy:

sudo firewall-cmd --zone=public --add-port=8020/tcp --permanent
sudo firewall-cmd --reload