Support Questions


data nodes can't reach name node

New Contributor

My data nodes in my new Cloudera Live cluster seem unable to reach the master node. I've tried restarting HDFS several times from Cloudera Manager, and each time I get the following error message:

 

"Command aborted because of exception: Command timed-out after 150 seconds"

 

I can log in to each machine as root with the passwords provided on the GoGrid page. Should SSH be set up between these machines so that no password is required? I'm not sure whether I missed a step in the configuration or whether it should happen automatically when the cluster is first deployed; of course, it could also be some other connectivity problem.
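
For reference, a minimal sketch of setting up passwordless SSH from the master to a data node (using the anonymized hostname from this thread, and assuming root access on both machines):

ssh-keygen -t rsa -b 4096               # generate a key pair on the master (default path, empty passphrase)
ssh-copy-id root@XXXXX-cldraagent-01    # copy the public key to the data node (asks for the root password once)
ssh root@XXXXX-cldraagent-01            # should now log in without a password prompt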

 

I think the following is a symptom of the same connectivity problem. If I SSH to one of the data nodes (I've anonymized part of the hostname) and run this command, I get this error:

 

[root@XXXXX-cldraagent-01 ~]# hadoop fs -ls /user/hue
ls: No Route to Host from XXXXX-cldraagent-01/10.NNN.NNN.3 to XXXXX-cldramaster-01:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost

 

I can ping the master from the agent (again anonymizing):

ping XXXXX-cldramaster-01
PING XXXXX-cldramaster-01 (10.NNN.NNN.2) 56(84) bytes of data.
64 bytes from XXXXX-cldramaster-01 (10.NNN.NNN.2): icmp_seq=1 ttl=64 time=0.276 ms
64 bytes from XXXXX-cldramaster-01 (10.NNN.NNN.2): icmp_seq=2 ttl=64 time=0.358 ms
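
Since ICMP works but the RPC connection does not, it may help to test TCP reachability of the NameNode port directly (a quick check, assuming nc is installed on the data node; telnet works the same way):

nc -vz XXXXX-cldramaster-01 8020    # a successful connection means the port is reachable; "No route to host" despite a working ping usually points to a host firewall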

 

Appreciate any pointers, and apologies if this is a stupid question.

1 ACCEPTED SOLUTION

New Contributor

For what it's worth, in case somebody else encounters a similar problem: after adding SSH keys I still had the same problem, but then I found the following instructions:

 

http://www.cyberciti.biz/tips/no-route-to-host-error-and-solution.html

 

I ran these commands on the master (name node), and now HDFS is working properly on all four nodes:

 

/sbin/iptables -L -n          # list the current firewall rules
/etc/init.d/iptables save     # save the current rule set to /etc/sysconfig/iptables
/etc/init.d/iptables stop     # stop the firewall, flushing all rules
/sbin/iptables -L -n          # confirm the rules are gone
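
Disabling iptables entirely works, but a narrower alternative is to open just the ports the cluster needs (a sketch, assuming the default CentOS iptables service and the NameNode RPC port 8020 seen in the error above; other Hadoop and Cloudera Manager ports would need similar rules):

/sbin/iptables -I INPUT -p tcp --dport 8020 -j ACCEPT    # allow NameNode RPC from the data nodes
/etc/init.d/iptables save                                # persist the rule across firewall restarts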

 


3 REPLIES


Rising Star

In my case I was just working on the Cloudera VM, and I had to configure the node IP:
=> ifconfig eth1:2 LOCAL_NODE_IP netmask XXXX
After that, pinging that IP works fine.
Thanks a lot
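
To verify the alias interface after that (a sketch, assuming the same eth1:2 alias and placeholder address as above):

ifconfig eth1:2            # confirm the alias carries the configured address
ping -c 3 LOCAL_NODE_IP    # check that the node IP responds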

New Contributor

thanks