HDFS put failing due to internal IP address use

New Contributor

Hello,

 

I have a 7-node cluster (6 datanodes) and I am executing an HDFS client put from an application outside the Cloudera cluster. Internally the cluster is configured to use internal IP addresses (172.x.x.x range).

 

I get the error below when I issue an hdfs put to the namenode. Note that the IP returned is 172.123.123.123:50010, which is the internal IP address and is not accessible from the application host.

HDFS client log:

2015-09-22 19:26:48.292+01:00 INFO [Thread-11] org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.123.123.123:50010]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[hadoop-common-2.7.0.jar!/:na]
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1610) ~[hadoop-hdfs-2.6.0.jar!/:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) [hadoop-hdfs-2.6.0.jar!/:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) [hadoop-hdfs-2.6.0.jar!/:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) [hadoop-hdfs-2.6.0.jar!/:na]
2015-09-22 19:26:48.292+01:00 INFO [Thread-11] org.apache.hadoop.hdfs.DFSClient - Abandoning BP-383231650-172.16.1.45-1435792324508:blk_1074408707_667883
2015-09-22 19:26:48.316+01:00 INFO [Thread-11] org.apache.hadoop.hdfs.DFSClient - Excluding datanode 172.123.123.123:50010

 

Wildcard addresses are being used on the datanode/namenode.

Also, I've tried enabling the following parameters, to no avail:

 

dfs.datanode.use.datanode.hostname
dfs.client.use.datanode.hostname
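
For reference, a minimal sketch of how these are typically set on the client side, in the hdfs-site.xml that the client application picks up (property names are the standard Hadoop ones; where the file lives depends on your client's classpath):

<!-- Ask the client to connect to DataNodes by hostname rather than by the
     internal IP addresses the NameNode reports in block locations. -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<!-- Cluster-side equivalent for DataNode-to-DataNode connections. -->
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>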

 

 

Is there any way to have the hostname returned here instead of that IP?

 

Any pointers appreciated.

Brian

1 ACCEPTED SOLUTION

Mentor
Glad to hear! Please consider marking this thread as resolved so others with similar problems may find a solution quicker.


10 REPLIES

Mentor

> Wildcard addresses are being used on the datanode/namenode

> dfs.client.use.datanode.hostname

 

This is your solution, if and only if your client hosts resolve the very same DataNode hostnames but to a different, externally reachable IP. Is that true in your environment?

 

You mention you've tried this - could you elaborate? This setting needs to be applied in the HDFS client configuration for it to take effect. Is your 'edge host' that lies outside the cluster, or your Java application (if it runs standalone), configured with this set to true in its hdfs-site.xml/Configuration object?
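
For illustration, a minimal client-side sketch of what is being described here, assuming the application builds its own Configuration object (the NameNode URI and paths below are placeholders, not values from this thread):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to DataNodes by hostname instead of the internal IP
        // addresses the NameNode returns in block locations.
        conf.setBoolean("dfs.client.use.datanode.hostname", true);

        // Placeholder NameNode URI and paths - replace with your own.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf);
        fs.copyFromLocalFile(new Path("/tmp/local-file.txt"),
                             new Path("/user/brian/local-file.txt"));
        fs.close();
    }
}

The same effect can be had by putting dfs.client.use.datanode.hostname=true into the hdfs-site.xml on the client's classpath, so the Configuration picks it up without code changes.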

New Contributor
FYI: I resolved this by setting the property in the client code.

Mentor
Glad to hear! Please consider marking this thread as resolved so others with similar problems may find a solution quicker.

New Contributor

Can you please elaborate? What did you do?

New Contributor

Can you please explain the steps involved to resolve this issue?

New Contributor
How? Which property in the client code?

Contributor

I have the same problem.

 

Could you please elaborate on the solution?

New Contributor

Hi, I am facing the same issue. Can anyone shed some light on this, please?

 

Thanks in advance.

Explorer

How did you resolve this issue?