Hadoop "No Route to Host" error
Labels: Apache Hadoop
Created ‎08-26-2016 07:03 PM
I keep getting these errors, and most of the time the application still succeeds. What is this error and how can I resolve it? I have SSH set up between all hosts.
16/08/26 14:59:54 INFO mapreduce.Job: Job job_1472231356029_0008 failed with state FAILED due to: Application application_1472231356029_0008 failed 2 times due to Error launching appattempt_1472231356029_0008_000002. Got exception: java.net.NoRouteToHostException: No Route to Host from hadoop2.tolls.dot.state.fl.us/10.100.44.16 to hadoop3.tolls.dot.state.fl.us:45454 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
Created ‎08-29-2016 03:54 AM
@Sami Ahmad Did you try checking the link given in the error details - http://wiki.apache.org/hadoop/NoRouteToHost
Pasting it here (a quick command-line check for each cause is sketched after the list):
Some possible causes (not an exclusive list):
- The hostname of the remote machine is wrong in the configuration files
- The client's host table /etc/hosts has an invalid IP address for the target host.
- The DNS server's host table has an invalid IP address for the target host.
- The client's routing tables (In Linux, iptables) are wrong.
- The DHCP server is publishing bad routing information.
- Client and server are on different subnets and are not set up to talk to each other. This may be an accident, or it may be deliberate, to lock down the Hadoop cluster.
- The machines are trying to communicate using IPv6. Hadoop does not currently support IPv6.
- The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions). The quick solution: restart the JVMs.
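A quick command-line check for each of these, run from the client host. This is just a rough checklist, not part of the wiki page; I'm assuming the hadoop3.tolls.dot.state.fl.us hostname and the 45454 NodeManager port from your stack trace, so substitute your own hosts and ports.
Check that /etc/hosts and DNS resolve the target host to the right address:
# getent hosts hadoop3.tolls.dot.state.fl.us
Check basic reachability from the client:
# ping -c 3 hadoop3.tolls.dot.state.fl.us
Check that the NodeManager port itself is reachable (a "no route to host" here points at routing or the firewall):
# nc -vz hadoop3.tolls.dot.state.fl.us 45454
Check the firewall rules on both client and server (a REJECT rule with icmp-host-prohibited is a very common cause of this exact exception):
# iptables -L -n
If IPv6 is a suspect, force the JVM onto IPv4 via hadoop-env.sh:
# export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"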
Created ‎03-06-2017 05:23 AM
17/02/27 21:14:47 INFO mapreduce.Job: Task Id : attempt_1488256893995_0001_r_000000_2, Status : FAILED Container launch failed for container_1488256893995_0001_01_000008 : java.net.NoRouteToHostException: No Route to Host from mc1/--host-- to mc2:33382 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
It's a firewall / iptables related issue.
Running this command worked for me as a solution:
# /etc/init.d/iptables stop
After this, the MR job above runs successfully.
To reproduce the same scenario, run the command below and check out the failure when the job runs:
# /etc/init.d/iptables start
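Stopping iptables turns the firewall off entirely, which may not be acceptable on a production cluster. A less drastic sketch, if you want to keep the firewall running, is to open only what YARN needs. I'm assuming a fixed NodeManager port like the 45454 in the earlier trace and a cluster subnet guessed from the 10.100.44.16 address in the question; adjust both to your environment. If your NodeManagers bind an ephemeral port like the 33382 above, allow the subnet rather than a single port:
# /etc/init.d/iptables start
# iptables -I INPUT -p tcp --dport 45454 -j ACCEPT
# iptables -I INPUT -s 10.100.44.0/24 -j ACCEPT
# service iptables save
The save step keeps the rules across reboots on RHEL/CentOS-style systems, which is what the /etc/init.d/iptables script implies.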
