Upgrade from HDP 2.6.2 to 3.0.0 HDFS error Failed to upgrade storage directory

I am trying to upgrade HDP 2.6.2 to 3.0.0 on a single-node standalone cluster. HDP was installed manually; this is not the sandbox. I first upgraded Ambari to 2.7.0.0, registered HDP 3.0.0, and started the upgrade process. At the RESTART HDFS/DATANODE step, the upgrade stalls and eventually times out after 30 retries. The DataNode log contains the following:

Caused by: java.nio.file.FileSystemException: /hadoop/hdfs/data/current/BP-398974976-10.18.50.95-1505159723149/current/finalized/subdir0/subdir0/blk_1073741825 -> /hadoop/hdfs/data/current/BP-398974976-10.18.50.95-1505159723149/previous.tmp/finalized/subdir0/subdir0/blk_1073741825: Operation not permitted
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixFileSystemProvider.createLink(UnixFileSystemProvider.java:476)
        at java.nio.file.Files.createLink(Files.java:1086)
        at org.apache.hadoop.fs.HardLink.createHardLink(HardLink.java:170)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage$3.call(DataStorage.java:1115)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage$3.call(DataStorage.java:1108)
        ... 4 more
2019-02-14 14:58:51,152 ERROR datanode.DataNode (BPServiceActor.java:run(828)) - Initialization failed for Block pool <registering> (Datanode Uuid 4d5fe213-7fcd-404e-8c50-ffe58b9fbc30) service to magdev-v101-hdp02.otxlab.net/10.9.38.75:8020. Exiting.
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:552)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1705)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:745)
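
The failing call in the stack trace is java.nio.file.Files.createLink: during the storage-directory upgrade, the DataNode hard-links each finalized block from current/ into previous.tmp. "Operation not permitted" on a hard link usually points at file ownership or mount restrictions rather than anything HDFS-specific, so it can help to reproduce that single call outside the upgrade, running as the DataNode user (typically hdfs). A minimal sketch; the source path is copied from the log, and the class name and scratch link target are made up for the test:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class HardLinkCheck {
    public static void main(String[] args) {
        // Source block file taken from the log; the link target is a
        // hypothetical scratch path on the same volume as the data dir.
        Path existing = Paths.get("/hadoop/hdfs/data/current/BP-398974976-10.18.50.95-1505159723149/current/finalized/subdir0/subdir0/blk_1073741825");
        Path link = Paths.get("/hadoop/hdfs/data/linktest_blk_1073741825");
        try {
            // Same call HardLink.createHardLink ends up in.
            Files.createLink(link, existing);
            System.out.println("hard link created OK");
            Files.delete(link);  // clean up the test link
        } catch (java.io.IOException e) {
            // "Operation not permitted" here means the OS refuses the
            // hard link for this user/filesystem, independent of HDFS.
            System.out.println("createLink failed: " + e);
        }
    }
}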

I created a checkpoint right before starting the upgrade. /hadoop/hdfs/data/current/BP-398974976-10.18.50.95-1505159723149/current/finalized/subdir0/subdir0/blk_1073741825 does not exist, at least not at this step. How do I correct this, or how do I roll back the upgrade? For the latter I don't see any option, and I can no longer restart HDFS from Ambari.
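
For what it's worth, since the upgrade may already have hard-linked the block over, a quick way to confirm where it actually sits (and who owns it) is to check both the current and previous.tmp locations. Again just a sketch, with the paths hard-coded from the log and a made-up class name:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class BlockFileCheck {
    public static void main(String[] args) throws Exception {
        String bp = "/hadoop/hdfs/data/current/BP-398974976-10.18.50.95-1505159723149";
        // The block may be in either tree mid-upgrade, so check both.
        Path[] candidates = {
            Paths.get(bp, "current/finalized/subdir0/subdir0/blk_1073741825"),
            Paths.get(bp, "previous.tmp/finalized/subdir0/subdir0/blk_1073741825")
        };
        for (Path p : candidates) {
            if (Files.exists(p)) {
                // Owner/group/permissions matter because the DataNode
                // user must be able to hard-link the file.
                PosixFileAttributes a = Files.readAttributes(p, PosixFileAttributes.class);
                System.out.printf("%s owner=%s group=%s perms=%s%n",
                        p, a.owner(), a.group(),
                        PosixFilePermissions.toString(a.permissions()));
            } else {
                System.out.println(p + " does not exist");
            }
        }
    }
}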
