
Failed to start namenode - Directory is in an inconsistent state:

New Contributor

Hey all, 

 

I am having trouble with my cluster setup (installing 5.4.7). The 4th step, "Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty", is failing.

 

Reviewing the logs, I get the following error:

 

Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /u01/dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.

 

15/10/01 09:29:02 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /u01/dfs/nn/current

 

So I thought this was a permissions thing, but after changing permissions and also ownership on the directory in question, the setup still fails.
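For reference, a quick way to sanity-check this (assuming the NameNode runs as the default hdfs user; adjust if yours differs):

ls -ld /u01 /u01/dfs /u01/dfs/nn     # ownership and mode of the name dir and each parent
sudo -u hdfs test -w /u01/dfs/nn && echo writable || echo "NOT writable by hdfs"    # can the hdfs user write here?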

 

Has anyone seen this before, or does anyone know how to solve it?

 

Much appreciated, take care.

 

-Salah Ayoubi

7 REPLIES

New Contributor

Figured out my issue... rookie mistake on my part. The parent directory had the permissions problem. Talking to the duck works... 🙂
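For anyone who hits the same thing: every parent directory on the path has to be traversable by the NameNode user, not just the name dir itself. A rough sketch of how to spot and fix it (hdfs:hadoop ownership and mode 700 on the nn dir are the usual CDH defaults, but verify against your own configuration):

namei -l /u01/dfs/nn                    # prints owner/group/mode for every component of the path
sudo chmod o+rx /u01/dfs                # example fix if namei shows a parent the hdfs user cannot enter
sudo chown -R hdfs:hadoop /u01/dfs/nn   # the name dir itself should belong to the HDFS user
sudo chmod 700 /u01/dfs/nn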

Community Manager

Congratulations on solving your problem and thanks for updating the post too. 🙂


Cy Jervis, Manager, Community Program

New Contributor
Thanks!! Much appreciated!

Contributor

Hi

 

I'm getting a similar problem while installing Cloudera Manager on Ubuntu. I'm installing with cloudera-manager-installer.bin.

 

16/10/15 15:48:18 INFO namenode.NNConf: XAttrs enabled? true
16/10/15 15:48:18 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/10/15 15:48:18 INFO namenode.FSImage: Allocated new BlockPoolId: BP-466403930-192.168.0.5-1476568098276
16/10/15 15:48:18 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /dfs/nn/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:343)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1046)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1484)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
16/10/15 15:48:18 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /dfs/nn/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:343)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1046)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1484)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
16/10/15 15:48:18 INFO util.ExitUtil: Exiting with status 1
16/10/15 15:48:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at venu-HP-Pavilion-Sleekbook-14-PC.exam.com/192.168.0.5

Rising Star

Check which user the HDFS service runs as, and then check whether that user has permission to create /dfs/nn/current.
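For example, something along these lines (assuming the role runs as the usual hdfs user; swap in whatever your install actually uses):

ps -ef | grep -i namenode     # shows which user the NameNode (or the failed format command) runs as
id hdfs                       # confirm that user and its groups exist on this host
ls -ld / /dfs /dfs/nn         # /dfs and its parents must be traversable; /dfs/nn must be writable
sudo -u hdfs test -w /dfs/nn && echo writable || echo "NOT writable by hdfs"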

Contributor

Thank you. I just moved to the Cloudera QuickStart VM installation since it was giving me a very hard time on Ubuntu.


Hi,

 

I am seeing a similar kind of issue on my namenode servers.

When I tried to restart my namenode, it failed initially with permission issues on /data1/dfs/nn.

The directory's group ID didn't map to any group that existed on the host.

So I changed the permissions on /data1/dfs/nn and restarted it.

This fixed the issue with /data1/dfs/nn, but it later started throwing similar errors for /data2/dfs/nn.

I repeated the same steps on /data2/dfs/nn hoping that would fix the issue, but no luck.

For the same NameNode, why does the behavior differ between the name directories?

Any pointers?
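For reference, the steps I have been repeating on each directory look roughly like this (assuming hdfs:hadoop is the right owner on this cluster; please correct me if that assumption is wrong):

getent group hadoop                       # make sure the group actually exists on this host
for d in /data1/dfs/nn /data2/dfs/nn; do  # every entry in dfs.namenode.name.dir needs the same treatment
  namei -l "$d"                           # owner/group/mode of each path component
  sudo -u hdfs test -w "$d" && echo "$d writable" || echo "$d NOT writable by hdfs"
done
sudo chown -R hdfs:hadoop /data1/dfs/nn /data2/dfs/nn
sudo chmod 700 /data1/dfs/nn /data2/dfs/nn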