Support Questions

Find answers, ask questions, and share your expertise

Cannot start datanode - did the troubleshooting and found something weird: could you help?

Explorer

I installed Hadoop 2.7.2 (1 master NameNode, 1 secondary NameNode, 3 datanodes) and tried to start my datanodes. Got stuck!

After troubleshooting with the logs (see below), the fatal error is due to a clusterID mismatch... easy! Just change the IDs. WRONG... when I checked the VERSION files on the NameNode and the DataNodes, they were identical (see for yourself below).

So the question is simple: in the log file, where is the NameNode's clusterID coming from?

LOG FILE:


WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: namenode clusterID = **CID-8e09ff25-80fb-4834-878b-f23b3deb62d0**; datanode clusterID = **CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1**
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (DatanodeUuid unassigned) service to master/172.XX.XX.XX:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (DatanodeUuid unassigned) service to master/172.XX.XX.XX:9000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (DatanodeUuid unassigned)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

COPY OF THE VERSION FILES


the master

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

THE DataNode

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56
1 ACCEPTED SOLUTION

@luc tiber

I am guessing that it's a non-Ambari install. Good thread

namenode clusterID =**CID-8e09ff25-80fb-4834-878b-f23b3deb62d0**;

datanode clusterID =**CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1**

You can see this for details on the VERSION file location

Resolution: as it's a new cluster, we can reformat the NameNode as discussed in this thread
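The check behind this resolution can be sketched in shell. The paths below are hypothetical mocks under /tmp (the real locations come from dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml), seeded with the two clusterIDs from the log; on a live, brand-new cluster the fix itself would roughly be stop-dfs.sh, then hdfs namenode -format (all HDFS data is lost), then start-dfs.sh.

```shell
# Mock VERSION files (hypothetical paths), seeded with the clusterIDs
# from the poster's log, to demonstrate the mismatch check.
mkdir -p /tmp/nn/current /tmp/dn/current
printf 'clusterID=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0\n' > /tmp/nn/current/VERSION
printf 'clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1\n' > /tmp/dn/current/VERSION

# Extract the clusterID value from each VERSION file
nn_cid=$(grep '^clusterID=' /tmp/nn/current/VERSION | cut -d= -f2)
dn_cid=$(grep '^clusterID=' /tmp/dn/current/VERSION | cut -d= -f2)

# The datanode refuses to start unless these are identical
if [ "$nn_cid" = "$dn_cid" ]; then
  echo "clusterIDs match"
else
  echo "clusterID mismatch: NN=$nn_cid DN=$dn_cid"
fi
```

Running this prints the mismatch, mirroring the Incompatible clusterIDs error in the log.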


7 REPLIES


@luc tiber I agree with your last comments, and I'm glad that you are trying the latest version. Please submit your feedback once you fix the issue.

Explorer

Just to summarize (and close) this issue, I would like to share how I fixed it.

On the MASTER and the secondary NameNode, the NameNode VERSION file is under ~/.../namenode/current/VERSION.

BUT for DATANODES the path is different. It should look something like this: ~/.../datanode/current/VERSION

The clusterIDs in the two VERSION files should be identical.

Hope it helps!
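When the cluster already holds data you want to keep, a commonly suggested alternative to reformatting (not described in this thread) is to copy the NameNode's clusterID into each datanode's VERSION file while HDFS is stopped. A minimal sketch on mock files under hypothetical /tmp paths (real paths come from hdfs-site.xml):

```shell
# Hypothetical mock paths standing in for the real namenode/datanode dirs.
NN_VERSION=/tmp/demo/namenode/current/VERSION
DN_VERSION=/tmp/demo/datanode/current/VERSION
mkdir -p "$(dirname "$NN_VERSION")" "$(dirname "$DN_VERSION")"

# Seed the mocks with the mismatched clusterIDs from the poster's log.
printf 'clusterID=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0\n' > "$NN_VERSION"
printf 'clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1\n' > "$DN_VERSION"

# Overwrite the datanode's clusterID line with the namenode's value.
nn_cid=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=$nn_cid/" "$DN_VERSION"

grep '^clusterID=' "$DN_VERSION"
```

After this the two files agree, which is exactly the condition the datanode checks at startup; on a real cluster you would then restart the datanode.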

New Contributor

Awesome. It worked for me.

@luc tiber

Thanks for sharing the update! I may reproduce and test this.

Mentor

@luc tiber Hadoop 2.7.2 is not released by Hortonworks yet. Mixing an HWX release with any other, including Apache, will yield unintended results.

Explorer

Hi Artem,

I appreciate the Hortonworks perspective on the issue I shared. This is a clear product-management position and, from that perspective, a very valid point. Point taken!