Move data gives "Found duplicated storage UUID" error
Created 06-21-2017 03:09 PM
Hi, I tried moving my data to a different directory (/data/hdfs/data) by adding the new directory to the DataNode data dir in the HDP configs and then copying the data over, but I get this error:
2017-06-21 15:29:53,432 ERROR impl.FsDatasetImpl (FsDatasetImpl.java:activateVolume(398)) - Found duplicated storage UUID: DS-011fd6ee-105d-4c21-ba03-8f43bc75f0b2 in /data/hdfs/data/current/VERSION.
2017-06-21 15:29:53,432 ERROR datanode.DataNode (BPServiceActor.java:run(772)) - Initialization failed for Block pool <registering> (Datanode Uuid 18224fd5-7fbe-4700-b22b-64352741f4a7) service to master.royble.co.uk/192.168.1.1:8020. Exiting.
java.io.IOException: Found duplicated storage UUID: DS-011fd6ee-105d-4c21-ba03-8f43bc75f0b2 in /data/hdfs/data/current/VERSION.
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.activateVolume(FsDatasetImpl.java:399)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:425)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:329)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1556)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1504)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:269)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:760)
    at java.lang.Thread.run(Thread.java:745)
Should I just delete the data files in the original directory (/hadoop/hdfs/data)?
TIA!!
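The ID in the message is the per-volume storage UUID that HDFS writes into each data directory's VERSION file, so after a straight copy both directories carry the same one. A quick check to confirm (a sketch, assuming the default current/VERSION layout of the paths above):

    grep storageID /hadoop/hdfs/data/current/VERSION /data/hdfs/data/current/VERSION
    # Both files showing the same storageID=DS-... line is what makes the
    # DataNode fail with "Found duplicated storage UUID" at startup.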
Created 06-22-2017 05:05 PM
If you are doing this on a single node in the cluster, then yes, delete the copied data files from the original directory and the NameNode will take care of recreating any missing data files.
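A minimal sketch of that cleanup, assuming the old volume is /hadoop/hdfs/data as in the question and the new /data/hdfs/data is already listed in dfs.datanode.data.dir (stop the DataNode first, e.g. from Ambari):

    # With the DataNode stopped, remove the old copy so that only one
    # volume carries the DS-... storage UUID:
    rm -rf /hadoop/hdfs/data

    # Then remove /hadoop/hdfs/data from dfs.datanode.data.dir in the HDFS
    # configs and restart the DataNode; the NameNode re-replicates any
    # blocks that turn out to be missing.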
Created 06-26-2017 01:43 PM
Yes it did, thanks!
