Moving data gives "Found duplicated storage UUID" error

Expert Contributor

Hi, I tried moving my data to a different directory (/data/hdfs/data) by adding the new directory to the DataNode data directories (dfs.datanode.data.dir) in the HDP configs and then copying the data over, but I get this error:

2017-06-21 15:29:53,432 ERROR impl.FsDatasetImpl (FsDatasetImpl.java:activateVolume(398)) - Found duplicated storage UUID: DS-011fd6ee-105d-4c21-ba03-8f43bc75f0b2 in /data/hdfs/data/current/VERSION.
2017-06-21 15:29:53,432 ERROR datanode.DataNode (BPServiceActor.java:run(772)) - Initialization failed for Block pool <registering> (Datanode Uuid 18224fd5-7fbe-4700-b22b-64352741f4a7) service to master.royble.co.uk/192.168.1.1:8020. Exiting. 
java.io.IOException: Found duplicated storage UUID: DS-011fd6ee-105d-4c21-ba03-8f43bc75f0b2 in /data/hdfs/data/current/VERSION.
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.activateVolume(FsDatasetImpl.java:399)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:425)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:329)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1556)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1504)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:269)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:760)
at java.lang.Thread.run(Thread.java:745)
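
From what I can tell, each DataNode storage directory records its identity in current/VERSION, so copying the old directory wholesale carries the storage UUID along with it. A minimal check, using the old (/hadoop/hdfs/data) and new (/data/hdfs/data) paths:

cat /hadoop/hdfs/data/current/VERSION
cat /data/hdfs/data/current/VERSION
# Presumably both show the same line, e.g.
#   storageID=DS-011fd6ee-105d-4c21-ba03-8f43bc75f0b2
# which is exactly what FsDatasetImpl.activateVolume rejects above.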

Should I just delete the data files in the original directory (/hadoop/hdfs/data)?

TIA!!

1 ACCEPTED SOLUTION

Rising Star

If you are doing this on a single node of the cluster, then yes: delete the data files in the original directory, and the NameNode will take care of re-replicating any missing blocks from the replicas on the other nodes.
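
A minimal sketch of that cleanup on the affected node, assuming the old directory is /hadoop/hdfs/data (as in the question) and that it has already been removed from dfs.datanode.data.dir in the configs:

# Stop the DataNode on this host first (e.g. via Ambari), then:
rm -rf /hadoop/hdfs/data
# Restart the DataNode and check block health from the NameNode's side:
hdfs fsck /
hdfs dfsadmin -report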

Expert Contributor

Yes it did, thanks!