I'm just getting started with cloudera and hadoop.
I've set up a single-node cluster on CentOS 7.5 with a Path A install, and everything ran great for some initial testing.
It's on an Azure VM, so I added a disk to use for data and set up a couple of folders there:
/data/1/dfs/dn and /data/2/dfs/dn, and set both of these as the data directories for the NameNode and DataNode.
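For reference, this is roughly how I created the directories on the added disk (sketched from memory, paths as above):

```shell
# Create the data directories on the attached disk and hand them to HDFS.
mkdir -p /data/{1,2}/dfs/dn
chown -R hdfs:hadoop /data/{1,2}/dfs
```
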
Everything worked fine, I did some queries on Hue etc... all good.
That was yesterday. Today I've come back to my single-node cluster and found that both of these directories have the status "failed directories".
Restarting the cluster has led to this error:
Service did not start successfully; not all of the required roles started: only 2/3 roles started. Reasons : Service has only 0 NameNode roles running instead of minimum required 1.
Some googling pointed me towards permissions, but the folders are owned by hdfs:hadoop and set to 777, and I'm still having the issue. I can't figure out why, as the folders are clearly there and look fine. I also found this error in the logs:
Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /nordata/1/dfs/dn is in an inconsistent state: file VERSION has namespaceID missing.
I can't understand how this has happened overnight, with no changes made to the server. Is there a way to recover?
Is there a chance the data was on an ephemeral device that has since been wiped, or that something external ran a deletion command over its contents?
From the error messages and your description, it appears as if all the metadata (and perhaps the data) has been wiped clean, but nothing within the HDFS software does this unless explicitly asked to (such as a NameNode format request).
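One quick check: the VERSION file the error refers to lives under each data directory's current/ subfolder and should contain a namespaceID line. A sketch of how to inspect it (the path is illustrative; substitute your actual data directory):

```shell
# Illustrative check -- substitute your actual dfs data directory.
DN_DIR=/data/1/dfs/dn
cat "$DN_DIR/current/VERSION"
# A healthy DataNode VERSION file looks roughly like:
#   storageID=DS-...
#   clusterID=CID-...
#   namespaceID=1906635656    <- the field your error says is missing
#   storageType=DATA_NODE
#   layoutVersion=-56
grep -c '^namespaceID=' "$DN_DIR/current/VERSION"   # 0 confirms the field is gone
```

If the file is empty or the field count is 0, the storage metadata really has been lost or truncated rather than it being a permissions problem.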
Perhaps begin with the logs and command histories to see if anything was accidentally invoked by an external user?
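Concretely, a few things worth checking (the paths below are typical CDH defaults on CentOS and are assumptions, so adjust to your layout; also note that an Azure VM's temporary/resource disk is ephemeral and is recreated on deallocation or redeploy):

```shell
# Is the disk you formatted for /data still present and mounted where expected?
df -h /data
mount | grep -w /data

# Any sign of an explicit format or a recursive delete? (Run as root.)
grep -iE 'format|rm -r' /root/.bash_history 2>/dev/null
last -a | head -n 10        # recent logins

# NameNode logs around the failure time (typical CDH log location):
ls -lt /var/log/hadoop-hdfs/ | head
```

If /data turns out to be the VM's resource disk rather than the attached data disk, that alone would explain the overnight wipe.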