Change Data Node port from 1019 to 50010 forcefully!



We have opened port 50010 on all DNs, but on one DataNode server the DN process is listening on port 1019.

To do distcp, we are allowed to transfer only via port 50010, not via port 1019.

How can I force the DN running on that server to listen on port 50010 instead of 1019?

Any advice, please?


@Geoffrey Shelton Okot

Can you help me with this, please?



Can you explain how you opened port 50010 on all the other datanodes? The port is controlled by the HDFS property dfs.datanode.address.


Can you check that value on the datanodes listening on port 1019?
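To check the value in place, the standard `hdfs getconf` tool can be used. A sketch; the config file path below is the usual HDP layout and may differ on your install:

```shell
# Print the effective dfs.datanode.address as this node's client config sees it
hdfs getconf -confKey dfs.datanode.address

# Or inspect the config file directly (path assumes a typical HDP layout)
grep -A1 dfs.datanode.address /etc/hadoop/conf/hdfs-site.xml
```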




Thanks a lot for your time. I am wondering why there are two ports for the DN, and how we can specify which port has to be used.

dfs.datanode.address has two ports: 1019 and 50010.


a) Is the port decision made based on the value of the property dfs.datanode.address?

b) If the property has 50010 as its value, can the DN use the other port, 1019, by any chance?

This information is valuable for opening the ports for distcp between the 2 clusters.
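For reference, the datanode binds to exactly one host:port for data transfer, taken from this property in hdfs-site.xml; it does not listen on both 1019 and 50010 at once. A sketch of the relevant entry (the wildcard bind address shown is the common default):

```xml
<property>
  <name>dfs.datanode.address</name>
  <value></value>
</property>
```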



The ports 50010 and 1019 are the custom HDFS data-transfer protocol ports, mapped to the dfs.datanode.address parameter. Why is the value different ONLY on the offending datanode?

Can you change it to 50010, restart the datanode, and retry the distcp?
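If Ambari is not managing the config, the manual change can be sketched as follows. The script location and config path are assumptions based on a typical HDP 2.x layout and vary by distribution and Hadoop version:

```shell
# On the offending datanode, as the service user:
# 1. Edit hdfs-site.xml and set dfs.datanode.address to
# 2. Restart just that datanode:
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/ \
    --config /etc/hadoop/conf stop datanode
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/ \
    --config /etc/hadoop/conf start datanode
```

If the cluster is Ambari-managed, change the property in Ambari instead, or Ambari will overwrite the hand edit on the next restart.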



I changed the value of the property dfs.datanode.address to 1019 and restarted HDFS, but none of the DNs started; they failed with this error:


2018-06-27 07:56:36,925 INFO datanode.DataNode ( - registered UNIX signal handlers for [TERM, HUP, INT]
2018-06-27 07:56:39,479 INFO impl.MetricsConfig ( - loaded properties from
2018-06-27 07:56:39,794 INFO impl.MetricsSystemImpl ( - Scheduled snapshot period at 10 second(s).
2018-06-27 07:56:39,795 INFO impl.MetricsSystemImpl ( - DataNode metrics system started
2018-06-27 07:56:39,838 INFO datanode.BlockScanner (<init>(172)) - Initialized block scanner with targetBytesPerSec 1048576
2018-06-27 07:56:39,850 INFO datanode.DataNode (<init>(437)) - File descriptor passing is enabled.
2018-06-27 07:56:39,851 INFO datanode.DataNode (<init>(448)) - Configured hostname is
2018-06-27 07:56:39,896 INFO datanode.DataNode ( - Starting DataNode with maxLockedMemory = 0
2018-06-27 07:56:40,037 INFO datanode.DataNode ( - Shutdown complete.
2018-06-27 07:56:40,039 ERROR datanode.DataNode ( - Exception in secureMain Call From to null:0 failed on socket exception: Permission denied; For more details see:
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
    at java.lang.reflect.Constructor.newInstance(
    at org.apache.hadoop.ipc.Server.bind(
    at org.apache.hadoop.ipc.Server.bind(
    at<init>(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(
Caused by: Permission denied
    at Method)
    at org.apache.hadoop.ipc.Server.bind(
    ... 10 more
2018-06-27 07:56:40,052 INFO util.ExitUtil ( - Exiting with status 1
2018-06-27 07:56:40,082 INFO datanode.DataNode ( - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at
************************************************************/

iptables is turned off on all servers

[root@node1 hdfs]# service iptables status
iptables: Firewall is not running.
[root@node1 hdfs]#

Any idea what is going on here?
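One likely explanation for the "Permission denied" above (an editor's note, not from the thread): ports below 1024 are privileged on Linux, so a datanode running as the hdfs user cannot bind 1019 directly. Port 1019 is normally used only by secure (Kerberized) datanodes that are started as root via jsvc, while 50010 is an ordinary unprivileged port. A minimal sketch of the distinction:

```shell
# A port below 1024 is "privileged" on Linux: only root can bind it.
# 50010 is unprivileged; 1019 is privileged and is normally bound only by
# secure (Kerberized) datanodes started as root via jsvc.
is_privileged() {
    if [ "$1" -lt 1024 ]; then
        echo "privileged"
    else
        echo "unprivileged"
    fi
}

is_privileged 1019    # prints "privileged": the hdfs user cannot bind it directly
is_privileged 50010   # prints "unprivileged": any user can bind it
```

This matches the symptom in the log: the bind fails with `` before the datanode even registers.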



You talked of ONLY one DN listening on 1019, and that's the only DN you should have changed, to 50010, the port the other DataNodes listen on.
Please revert and change as above?


@Geoffrey Shelton Okot

Thanks, I really appreciate your time on our queries.

In prod we have one cluster whose DNs bind to port 50010 and another cluster whose DNs bind to port 1019.

To verify whether I can change the port via Ambari, I tried changing the port from 50010 to 1019 in my own local cluster, and I observed the issue above.

Also, may I know the best alternatives to distcp? At this point I don't have time to ask the networking team to open any ports.

I need to transfer approximately 200 GB of HDFS data from one cluster to another, and I am planning to try copying the data out to the local Linux filesystem, transferring it to the other cluster with scp, and then moving it back into HDFS.
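The fallback described above can be sketched as follows (all paths and hostnames are hypothetical, and it needs roughly 200 GB of free local disk on each side):

```shell
# On an edge node of the source cluster: copy out of HDFS to local disk
hdfs dfs -get /data/source /tmp/staging

# Push to the destination cluster's edge node over SSH
scp -r /tmp/staging user@dest-edge:/tmp/staging

# On the destination edge node: load back into HDFS
hdfs dfs -put /tmp/staging /data/dest
```

Unlike distcp, this is single-stream and unparallelized, so 200 GB will be considerably slower and needs staging space, but it only requires SSH connectivity between the two edge nodes.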

To conclude, below are the two points on which I need your kind help.

a) How easy is it to change the port of the DNs?

DNs are down and do not start when the port is changed from 1019 to 50010 in the prod cluster.

DNs are down and do not start when the port is changed from 50010 to 1019; the error is pasted above.

b) Would you recommend traditional scp to transfer the 200 GB of data to the other cluster?

If distcp seems to be a difficult option, could you recommend any other way?

Thanks a lot for your time.



I was trying to understand your context: do you have 2 separate clusters? You will need to provide the source and destination paths, e.g.:

$ hadoop distcp hdfs://nn1:8020/user/bar hdfs://nn2:8020/user/foo

distcp uses the NameNode metadata service that runs on port 8020. Please can you try that and revert?
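If the NameNode RPC ports (8020) are reachable between the clusters, a run might look like the sketch below. Note that the distcp map tasks still need to reach the datanode data-transfer ports (50010 or 1019) on both clusters for the actual block copies; 8020 alone carries only the metadata. The flags shown are standard distcp options:

```shell
# -update copies only files that are missing or differ at the destination;
# -m caps the number of parallel map tasks
hadoop distcp -update -m 20 hdfs://nn1:8020/user/bar hdfs://nn2:8020/user/foo
```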