Created on 10-01-2015 09:43 AM - last edited on 11-08-2016 08:29 AM by cjervis
Hey all,
I am having trouble with my cluster setup (installing 5.4.7). The fourth step, "Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty", is failing.
Reviewing the logs, I get the following error:
Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /u01/dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
15/10/01 09:29:02 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /u01/dfs/nn/current
So I thought this was a permissions thing, but after changing permissions and also ownership on the directory in question, the setup still fails.
Has anyone seen this before, or does anyone know how to solve it?
Much appreciated, take care.
-Salah Ayoubi
Created 10-01-2015 11:37 AM
Figured out my issue... rookie mistake on my part. The parent directory had the permissions problem. Talking to the duck works... 🙂
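In case it helps anyone else: the whole path has to be traversable by the HDFS user, not just the final directory. The kind of check and fix involved looks roughly like this (my path; hdfs:hadoop is only an example owner, use whatever user and group your installation runs the NameNode as):

# Check permissions on every component of the path, not just the leaf
ls -ld /u01 /u01/dfs /u01/dfs/nn

# Parents need to be traversable; the name dir itself is normally owned by the HDFS user
sudo chown -R hdfs:hadoop /u01/dfs    # hdfs:hadoop is an assumption, match your install
sudo chmod 755 /u01 /u01/dfs
sudo chmod 700 /u01/dfs/nn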
Created 10-01-2015 12:17 PM
Congratulations on solving your problem and thanks for updating the post too. 🙂
Created 10-15-2016 02:53 PM
Hi,
I'm getting a similar problem while installing Cloudera Manager on Ubuntu. I'm installing with cloudera-manager-installer.bin.
16/10/15 15:48:18 INFO namenode.NNConf: XAttrs enabled? true
16/10/15 15:48:18 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/10/15 15:48:18 INFO namenode.FSImage: Allocated new BlockPoolId: BP-466403930-192.168.0.5-1476568098276
16/10/15 15:48:18 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /dfs/nn/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:343)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1046)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1484)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
16/10/15 15:48:18 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /dfs/nn/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:343)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1046)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1484)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1611)
16/10/15 15:48:18 INFO util.ExitUtil: Exiting with status 1
16/10/15 15:48:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at venu-HP-Pavilion-Sleekbook-14-PC.exam.com/192.168.0.5
Created 10-18-2016 04:52 PM
Check which user the HDFS service runs as, and then check whether that user has permission to create /dfs/nn/current.
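A quick way to verify that, assuming the service runs as the usual hdfs user (adjust the user, group, and path to whatever your installation actually uses):

# Check ownership and permissions on the directory and its parents
ls -ld / /dfs /dfs/nn

# Try to create a file there as the hdfs user; a failure here points at permissions
sudo -u hdfs touch /dfs/nn/perm_test && sudo -u hdfs rm /dfs/nn/perm_test

# If needed, hand the directory over to the HDFS user (hdfs:hadoop assumed)
sudo chown -R hdfs:hadoop /dfs/nn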
Created 10-24-2016 12:51 PM
Thank you. I just moved to the Cloudera QuickStart VM installation, since it was giving me a very hard time on Ubuntu.
Created 07-07-2017 07:40 AM
Hi,
I am seeing a similar issue on my NameNode servers.
When I tried to restart the NameNode, it initially failed with permission issues on /data1/dfs/nn.
The group ID owning that directory didn't exist in the list of groups on the host.
So I changed the permissions on /data1/dfs/nn and restarted it.
That fixed the issue with /data1/dfs/nn, but then it started throwing similar errors for /data2/dfs/nn.
I repeated the same steps on /data2/dfs/nn hoping that would fix the issue, but no luck.
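For reference, what I ran on each directory was roughly along these lines (hdfs:hadoop is what I set; substitute whatever your other name dirs show):

# Confirm the group actually exists on this host
getent group hadoop    # hadoop is an assumption; use the group your install expects

# Check both configured name directories side by side
ls -ld /data1/dfs/nn /data2/dfs/nn

# Re-apply ownership and permissions on both (hdfs:hadoop assumed)
sudo chown -R hdfs:hadoop /data1/dfs/nn /data2/dfs/nn
sudo chmod 700 /data1/dfs/nn /data2/dfs/nn

Both paths come from the same dfs.namenode.name.dir setting, so I expect them to end up with identical ownership and permissions.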
For the same NameNode, why does the behaviour differ between the name directories?
Any pointers?