Change Data Node port from 1019 to 50010 forcefully!


Contributor

Hi,

We have opened port 50010 on all DNs, but on one DataNode server the DN process is listening on port 1019.

For distcp, we are allowed to transfer data only via port 50010, not via port 1019.

How can I force the DataNode on that server to listen on port 50010 instead of 1019?

Any advice, please?

8 Replies

Re: Change Data Node port from 1019 to 50010 forcefully!

Contributor
@Geoffrey Shelton Okot

Can you help me with this, please?

Re: Change Data Node port from 1019 to 50010 forcefully!

Mentor

@Sriram

Can you explain how you opened port 50010 on the other DataNodes? The port is controlled by the HDFS property

 dfs.datanode.address 

Can you check that value on the DataNode listening on port 1019?
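
On a live node you could query the client configuration directly with `hdfs getconf -confKey dfs.datanode.address`. As an offline illustration, this sketch parses a sample hdfs-site.xml; the file path and its contents are hypothetical, not taken from the cluster in question:

```shell
# Create a sample config so the extraction can be shown end to end
# (on a real node you would point grep at /etc/hadoop/conf/hdfs-site.xml).
cat > /tmp/hdfs-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:1019</value>
  </property>
</configuration>
EOF

# Pull out the configured bind address for the DataNode data-transfer port.
grep -A1 '<name>dfs.datanode.address</name>' /tmp/hdfs-site-sample.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

Running the same extraction on each DataNode host makes it easy to spot the one node whose value differs.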

HTH

Re: Change Data Node port from 1019 to 50010 forcefully!

Contributor

@Geoffrey

Thanks a lot for your time. I am wondering why there are two ports for the DN, and how we can specify which port is to be used:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_reference/content/hdfs-ports.html

dfs.datanode.address is listed there with two ports, 1019 and 50010, and:

a) Is the port decided by the value of the property dfs.datanode.address?

b) If the property has 50010 as its value, can the DN use the other port, 1019, by any chance?

This information is valuable for opening the ports for distcp between the two clusters.

Re: Change Data Node port from 1019 to 50010 forcefully!

Mentor

@Sriram

Ports 50010 and 1019 are both the HDFS data-transfer port, mapped to the dfs.datanode.address parameter. Why is the value different ONLY on the offending DataNode?

Can you change it to 50010, restart the DataNode, and retry the distcp?
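
As background (not stated in the thread itself): on Linux, ports below 1024 are privileged and can only be bound by root, which is why 1019 is the conventional port for a secure (Kerberized) DataNode started through a privileged launcher such as jsvc, while 50010 is the default for a non-secure DataNode running as the hdfs user. A quick sketch of the rule:

```shell
# Ports below 1024 are privileged on Linux: only root (or a privileged
# launcher such as jsvc, used by secure DataNodes) may bind them.
for port in 1019 50010; do
  if [ "$port" -lt 1024 ]; then
    echo "$port: privileged bind (root/jsvc needed)"
  else
    echo "$port: unprivileged bind (hdfs user is enough)"
  fi
done
```

This distinction matters later in the thread: a DataNode started as a regular user cannot bind 1019 at all.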

Re: Change Data Node port from 1019 to 50010 forcefully!

Contributor

@Geoffrey.

I changed the value of the property dfs.datanode.address to 0.0.0.0:1019 and restarted HDFS. None of the DNs started; they failed with this error:

******************************************************

2018-06-27 07:56:36,925 INFO datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-06-27 07:56:39,479 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2018-06-27 07:56:39,794 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2018-06-27 07:56:39,795 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - DataNode metrics system started
2018-06-27 07:56:39,838 INFO datanode.BlockScanner (BlockScanner.java:<init>(172)) - Initialized block scanner with targetBytesPerSec 1048576
2018-06-27 07:56:39,850 INFO datanode.DataNode (DataNode.java:<init>(437)) - File descriptor passing is enabled.
2018-06-27 07:56:39,851 INFO datanode.DataNode (DataNode.java:<init>(448)) - Configured hostname is node1.hortonworks.com
2018-06-27 07:56:39,896 INFO datanode.DataNode (DataNode.java:startDataNode(1211)) - Starting DataNode with maxLockedMemory = 0
2018-06-27 07:56:40,037 INFO datanode.DataNode (DataNode.java:shutdown(1915)) - Shutdown complete.
2018-06-27 07:56:40,039 ERROR datanode.DataNode (DataNode.java:secureMain(2630)) - Exception in secureMain
java.net.SocketException: Call From 0.0.0.0 to null:0 failed on socket exception: java.net.SocketException: Permission denied; For more details see: http://wiki.apache.org/hadoop/SocketException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:774)
    at org.apache.hadoop.ipc.Server.bind(Server.java:541)
    at org.apache.hadoop.ipc.Server.bind(Server.java:513)
    at org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:116)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:996)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1218)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:449)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2508)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2647)
Caused by: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:524)
    ... 10 more
2018-06-27 07:56:40,052 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-06-27 07:56:40,082 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node1.hortonworks.com/192.168.194.201
************************************************************/

iptables is turned off on all servers

[root@node1 hdfs]# service iptables status
iptables: Firewall is not running.
[root@node1 hdfs]#

Any idea what is going on here?

Re: Change Data Node port from 1019 to 50010 forcefully!

Mentor

@Sriram

You said ONLY one DN was listening on 1019, and that one DN is the only one you should have changed, to 50010. (50070 is the NameNode web UI port, not a DataNode port.)
Please revert the other DNs and change only that one as above?

Re: Change Data Node port from 1019 to 50010 forcefully!

Contributor

@Geoffrey Shelton Okot

Thanks for your time; I really appreciate your help with our queries.

In production we have one cluster whose DNs are bound to 50010 and another cluster whose DNs are bound to 1019.

To verify whether I could change the port via Ambari, I tried changing it from 50010 to 1019 on my local, personal cluster, and I observed the issue above.

Also, may I know the best alternatives to distcp? At this point I don't have the luxury of asking the networking team to open any ports.

I need to transfer approximately 200 GB of HDFS data from one cluster to another, and I am planning to try copying the data out to the local Linux filesystem, transferring it to the other cluster with SCP, and then moving it into HDFS.
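
That get/scp/put plan can be sketched as a script. The hostnames and paths below are placeholders, not values from this thread, and the DRY_RUN guard only echoes the commands so nothing runs against a real cluster:

```shell
# Hedged sketch of the HDFS-get / scp / HDFS-put fallback.
# SRC_PATH and DEST_HOST are hypothetical placeholders.
SRC_PATH=/user/data
DEST_HOST=edge2.example.com
DRY_RUN=true   # flip to false on a real edge node with hadoop and ssh access

run() {
  if [ "$DRY_RUN" = true ]; then
    echo "DRY RUN: $*"
  else
    "$@"
  fi
}

run hdfs dfs -get "$SRC_PATH" /tmp/export                     # source HDFS -> local disk
run scp -r /tmp/export "$DEST_HOST:/tmp/export"               # over SSH (port 22 only)
run ssh "$DEST_HOST" hdfs dfs -put /tmp/export "$SRC_PATH"    # local disk -> dest HDFS
```

Note that this stages the 200 GB twice on local disks, so both edge nodes need the space; distcp avoids that by streaming directly between DataNodes.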

To conclude, below are the two points for which I need your kind help:

a) How easy is it to change the port of the DNs?

DNs are down and do not start when the port is changed from 1019 to 50010 in the prod cluster.

DNs are down and do not start when the port is changed from 50010 to 1019 (error pasted above).

b) Would you recommend traditional scp to transfer the 200 GB of data to the other cluster?

If distcp turns out to be a difficult option, could you recommend any other way?

Thanks a lot for your time.

Re: Change Data Node port from 1019 to 50010 forcefully!

Mentor

@Sriram

I was trying to understand your context: do you have two separate clusters? You will need to provide the source and destination paths:

$ hadoop distcp hdfs://nn1:8020/user/bar hdfs://nn2:8020/user/foo

distcp uses the NameNode metadata service, which runs on port 8020. Please can you try that and revert?
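
Before retrying, a plain TCP probe can confirm the remote NameNode RPC port is actually open from the source cluster. The hostname here is a placeholder; substitute the real remote NameNode:

```shell
# Probe a TCP port with bash's /dev/tcp pseudo-device (no extra tools needed).
# NN_HOST is hypothetical; replace it with the destination NameNode.
NN_HOST=nn2.example.com
NN_PORT=8020
if timeout 3 bash -c "exec 3<>/dev/tcp/$NN_HOST/$NN_PORT" 2>/dev/null; then
  echo "$NN_HOST:$NN_PORT reachable"
else
  echo "$NN_HOST:$NN_PORT unreachable"
fi
```

If the port is unreachable, distcp will fail regardless of the DataNode port settings, so this separates a firewall problem from an HDFS configuration problem.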

HTH