hadoop no route error

Super Collaborator

I keep getting these errors, and most of the time the application still succeeds. What is this error and how can I resolve it? I have SSH set up between all hosts.

16/08/26 14:59:54 INFO mapreduce.Job: Job job_1472231356029_0008 failed with state FAILED due to: Application application_1472231356029_0008 failed 2 times due to Error launching appattempt_1472231356029_0008_000002. Got exception: java.net.NoRouteToHostException: No Route to Host from  hadoop2.tolls.dot.state.fl.us/10.100.44.16 to hadoop3.tolls.dot.state.fl.us:45454 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost
1 ACCEPTED SOLUTION

Super Guru

@Sami Ahmad Did you try checking the link given in the error details - http://wiki.apache.org/hadoop/NoRouteToHost

Pasting it here; quick ways to check the most common of these causes are sketched after the list:

Some possible causes (not an exclusive list):

  • The hostname of the remote machine is wrong in the configuration files.
  • The client's host table /etc/hosts has an invalid IP address for the target host.
  • The DNS server's host table has an invalid IP address for the target host.
  • The client's routing tables (in Linux, iptables) are wrong.
  • The DHCP server is publishing bad routing information.
  • Client and server are on different subnets and are not set up to talk to each other. This may be an accident, or it may be a deliberate decision to lock down the Hadoop cluster.
  • The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6.
  • The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs.
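
For quick triage, the checks below cover the most common items on that list. This is a minimal sketch, not from the original thread: it assumes the hostnames and the NodeManager port (45454) from the error message in the question, and that nc (netcat) is installed on the client node.

Check that the target hostname resolves, and to the address you expect:

# getent hosts hadoop3.tolls.dot.state.fl.us
# grep hadoop3 /etc/hosts

Check basic reachability and whether the NodeManager port is open from the client:

# ping -c 3 hadoop3.tolls.dot.state.fl.us
# nc -zv hadoop3.tolls.dot.state.fl.us 45454

Check whether local firewall rules are rejecting traffic (REJECT rules commonly surface as "No route to host"):

# iptables -L -n | grep -i REJECT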


2 REPLIES


New Contributor
17/02/27 21:14:47 INFO mapreduce.Job: Task Id : attempt_1488256893995_0001_r_000000_2, Status : FAILED
Container launch failed for container_1488256893995_0001_01_000008 : java.net.NoRouteToHostException: No Route to Host from  mc1/--host-- to mc2:33382 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost

It's a firewall or iptables-related issue.

Running this command worked for me as a solution:

# /etc/init.d/iptables stop

After this, the above MR job should run successfully.

To reproduce the same scenario, run the command below and observe the failure when the job runs:

# /etc/init.d/iptables start
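
One caveat (my note, not from the original reply): stopping iptables opens the node completely and does not persist across reboots. On RHEL/CentOS 7 and later the firewall service is usually firewalld rather than the iptables init script, so the equivalent commands would be along these lines, assuming the NodeManager port from the first post (45454):

# systemctl stop firewalld

or, to keep the firewall running and open only the NodeManager port:

# firewall-cmd --permanent --add-port=45454/tcp
# firewall-cmd --reload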