Support Questions
Find answers, ask questions, and share your expertise

Failed to start DataNode due to bind exception

New Contributor

I have repeatedly tried to start the DataNode, but it fails with a bind exception saying the address is already in use, even though the port is free.


I used the command below to check:

netstat -a -t --numeric-ports -p | grep 500
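As a cross-check alongside netstat (which with `-t` lists only TCP sockets), you can test whether a port is genuinely free by trying to bind it yourself. A minimal sketch; the port number is the one from this post and is only illustrative:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the TCP port ourselves right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(50081))
```

If this prints True while the DataNode still fails, the conflict is not on that TCP port.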


I have overridden the default port 50070 with 50081, but the issue still persists. Relevant DataNode log output:


Starting DataNode with maxLockedMemory = 0
Opened streaming server at /
Balancing bandwith is 10485760 bytes/s
Number threads for balancing is 5
Waiting for threadgroup to exit, active threads is 0
Shutdown complete.
Exception in secureMain bind(2) error: Address already in use when trying to bind to '/var/run/hdfs-sockets/datanode'
    at Method)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(
Exiting with status 1
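Note that the path in the exception, '/var/run/hdfs-sockets/datanode', is a Unix domain socket (configured via dfs.domain.socket.path for short-circuit reads), not a TCP port, which would explain why checking TCP ports with netstat and changing 50070 made no difference: binding a Unix domain socket fails with "Address already in use" whenever a socket file is already present at that path, for example one left behind by an earlier DataNode process. A minimal sketch reproducing that behavior, using a throwaway temp path rather than the real one:

```python
import errno
import os
import socket
import tempfile

# Throwaway path standing in for the DataNode's domain socket path.
path = os.path.join(tempfile.mkdtemp(), "datanode.sock")

first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(path)  # creates the socket file on disk

second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(path)  # same path: fails like the DataNode did
except OSError as e:
    print("EADDRINUSE:", e.errno == errno.EADDRINUSE)
finally:
    second.close()

first.close()
os.unlink(path)  # removing the stale socket file frees the address

third = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
third.bind(path)  # now succeeds
print("rebind ok")
third.close()
```

If a stale socket file is the cause here, checking whether that path already exists before starting the DataNode should confirm it.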

