
Only 2/3 DataNodes are live

New Contributor

Hi all...

We have 3 nodes in our cluster that were almost out of disk space.

I added a new volume to each of the 3 nodes and mounted it at: /hadoop/hdfs/data2

On each node I now have these directories:

/hadoop/hdfs/data

/hadoop/hdfs/data2

Both are owned by the hdfs user.
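For reference, this is roughly how I checked it (the hadoop group name is an assumption; yours may differ):

    # check ownership and permissions on both data directories
    ls -ld /hadoop/hdfs/data /hadoop/hdfs/data2

    # if the new directory isn't owned by hdfs yet, hand it over;
    # permissions should match dfs.datanode.data.dir.perm (often 750 on HDP)
    chown -R hdfs:hadoop /hadoop/hdfs/data2
    chmod 750 /hadoop/hdfs/data2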

In Ambari, I added the new directory to the HDFS config under DataNode.

DataNode directories now looks like: /hadoop/hdfs/data,/hadoop/hdfs/data2
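For anyone verifying outside Ambari, that field maps to the dfs.datanode.data.dir property in hdfs-site.xml (the config path below assumes a standard HDP layout):

    # the <value> line should show both directories, comma-separated
    grep -A1 '<name>dfs.datanode.data.dir</name>' /etc/hadoop/conf/hdfs-site.xml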

After this change, only 2 of the 3 DataNodes are live; the third one fails with:

Connection failed: [Errno 111] Connection refused to node1-hortonworks:50010

Any suggestions?

Thanks!

3 REPLIES

Guru

@Moti Ben Ivgi, please take a look at the thread below; you may be hitting an issue with the host setup.

https://community.hortonworks.com/questions/26802/data-node-process-not-starting-up.html

Explorer

A few things to check (rough commands are sketched after the list):

1) Are the DataNodes actually running?

2) Are those DataNodes set up with a non-default port for some reason?

3) Double-check your dfs.exclude and dfs.include files too.
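A rough sketch of those checks; the log and config paths assume a standard HDP layout, and the include/exclude file locations come from dfs.hosts / dfs.hosts.exclude, so adjust for your cluster:

    # 1) is the DataNode process up on the failed node?
    ps -ef | grep -i '[d]atanode'

    # and is anything listening on the default data transfer port 50010?
    netstat -tlnp | grep 50010

    # startup errors land in the DataNode log
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log

    # 2) a non-default port would be set via dfs.datanode.address
    grep -A1 '<name>dfs.datanode.address</name>' /etc/hadoop/conf/hdfs-site.xml

    # 3) ask the NameNode which DataNodes it considers live or dead,
    # and find the include/exclude files if they are configured
    hdfs dfsadmin -report
    grep -A1 'dfs.hosts' /etc/hadoop/conf/hdfs-site.xml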


Are you sure the third DataNode is actually running? You may also want to make sure the NameNode isn't rejecting it from joining the cluster. Check the dfs.include and dfs.exclude files.

New Contributor

Thank you guys, I resolved it.

There is a VERSION file under each data directory, in my case:

/hadoop/hdfs/data2/current/VERSION

/hadoop/hdfs/data/current/VERSION

This file contains a layoutVersion property.

I compared this property between the data and data2 directories, and the values did not match.
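A quick way to see the mismatch side by side:

    # grep prints each file name next to its layoutVersion
    grep layoutVersion /hadoop/hdfs/data/current/VERSION /hadoop/hdfs/data2/current/VERSION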

Since I didn't have any data under data2 yet, I removed its contents, restarted HDFS, and everything worked.
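Roughly the steps, for anyone hitting the same thing. This is only safe because data2 held no blocks yet; wiping a directory that has data destroys those replicas:

    # stop HDFS first (I did this from Ambari), then clear only the
    # new, still-empty directory; the DataNode recreates current/VERSION
    # with a matching layoutVersion when it starts
    rm -rf /hadoop/hdfs/data2/*

    # after restarting HDFS from Ambari, confirm all 3 DataNodes report in
    hdfs dfsadmin -report | grep -i datanodes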
