
Unable to format namenode using the Cloudera Manager UI


Explorer

 

I have CDH 5 installed on CentOS (64-bit). When I try to format the NameNode using Cloudera Manager, I only see a "Failed to format namenode" error.

 

The role log displayed in the popup does not get updated at all, so how can I find out what the issue is? There is just not enough information. The NameNode log referenced in the popup has not been updated since the HDFS service was stopped as the first step of the format process. It's frustrating that the CDH platform still keeps us newcomers busy with issues like this.
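One place worth checking when the role log in the popup stays empty: Cloudera Manager's agent writes each command's own stdout/stderr into a per-command process directory under `/var/run/cloudera-scm-agent/process/`, separate from the role log. The sketch below uses a local stand-in tree (directory names and the message are illustrative, not from this host) to show the pattern of finding the newest command directory and reading its stderr:

```shell
# Stand-in for /var/run/cloudera-scm-agent/process/ on a real CM host,
# where each command gets a directory like <id>-hdfs-NAMENODE-format/logs/.
BASE=cm-agent-demo/process
mkdir -p "$BASE/101-hdfs-NAMENODE-format/logs"
echo "example failure message from the format command" \
  > "$BASE/101-hdfs-NAMENODE-format/logs/stderr.log"

# List command directories newest-first and read the latest stderr:
LATEST=$(ls -t "$BASE" | head -n 1)
cat "$BASE/$LATEST/logs/stderr.log"
```

On a real host, substitute the actual agent process path for `$BASE`; the most recent `*-NAMENODE-format` directory usually contains the real failure message.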

 

Screenshot from 2016-12-31 00-08-40.png


Re: Unable to format namenode using the Cloudera Manager UI

Super Guru

Hello,

 

It appears that some of the log that you showed was cut off at a key point where there was a Permission Denied message.

Based on what I see, this appears to be more of an HDFS-side issue, but it would really help if we could see the last 50 or so lines of your NameNode log to judge what the failure is.
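For what it's worth, capturing those lines is a one-liner. The sketch below uses a generated stand-in file, since the real log path varies by host (on CM-managed nodes it is typically under `/var/log/hadoop-hdfs/`):

```shell
# Stand-in log file; substitute the real NameNode log path, e.g.
# something like /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-<host>.log.out
LOG=namenode-sample.log
printf 'line %d\n' $(seq 1 200) > "$LOG"   # simulate a 200-line log

tail -n 50 "$LOG" > namenode-tail.txt      # last 50 lines, ready to paste
wc -l < namenode-tail.txt                  # → 50
```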

 

When setting up CDH via Cloudera Manager, the formatting is normally done for you when you add the HDFS service, provided the disk locations are clean. I suspect this is not a clean install, but if you could outline more of what was happening on these hosts (Hadoop-wise) before you tried to format (and why you were attempting to format), that would fill in some of the backstory.

 

Thanks,

 

Ben

Re: Unable to format namenode using the Cloudera Manager UI

Explorer

Thank you for your reply. I will itemize my response to make it clearer.

 

  1. The log shown in the popup does not get updated when the NameNode fails to format. The failure message refers to that log file, but the file was last updated when the HDFS service was stopped as a prerequisite to formatting the NameNode.
  2. There are no updates to the NameNode log after the HDFS shutdown. I will paste the last 50 lines at the end of this reply anyway.
  3. I am attempting to format the NameNode because I had a three-node cluster with a replication factor of 1. I downsized the cluster to a single node and want to format the NameNode so that it is a clean cluster after the downsizing.

 

The NameNode log only records the shutdown of the NameNode, as can be seen below.

 

2016-12-30 23:18:45,107 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1074514303_773653 69.30.216.2:50010
2016-12-30 23:18:47,978 INFO BlockStateChange: BLOCK* BlockManager: ask 69.30.216.2:50010 to delete [blk_1074514303_773653]
2016-12-30 23:18:50,561 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000002_0/part-00002. BP-1989333537-69.30.216.2-1454724545643 blk_1074514304_773654{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]}
2016-12-30 23:18:50,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 69.30.216.2:50010 is added to blk_1074514304_773654{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]} size 0
2016-12-30 23:18:50,877 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000002_0/part-00002 is closed by DFSClient_attempt_1483079926406_0009_r_000002_0_-1387170630_1
2016-12-30 23:18:54,213 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:18:54,213 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from 69.30.216.2:53870 Call#14111 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:18:56,581 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: list corrupt file blocks returned: 100
2016-12-30 23:18:58,789 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000003_0/part-00003. BP-1989333537-69.30.216.2-1454724545643 blk_1074514305_773655{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]}
2016-12-30 23:18:58,851 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-12-30 23:18:58,855 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 3 millisecond(s).
2016-12-30 23:18:59,133 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 69.30.216.2:50010 is added to blk_1074514305_773655{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]} size 0
2016-12-30 23:18:59,161 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000003_0/part-00003 is closed by DFSClient_attempt_1483079926406_0009_r_000003_0_885284025_1
2016-12-30 23:19:05,572 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000004_0/part-00004. BP-1989333537-69.30.216.2-1454724545643 blk_1074514306_773656{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]}
2016-12-30 23:19:05,871 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 69.30.216.2:50010 is added to blk_1074514306_773656{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]} size 0
2016-12-30 23:19:05,887 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000004_0/part-00004 is closed by DFSClient_attempt_1483079926406_0009_r_000004_0_1308624312_1
2016-12-30 23:19:09,223 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:19:09,223 INFO org.apache.hadoop.ipc.Server: IPC Server handler 16 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from 69.30.216.2:54336 Call#14127 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:19:12,594 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000005_0/part-00005. BP-1989333537-69.30.216.2-1454724545643 blk_1074514307_773657{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]}
2016-12-30 23:19:12,981 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 69.30.216.2:50010 is added to blk_1074514307_773657{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]} size 0
2016-12-30 23:19:13,013 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000005_0/part-00005 is closed by DFSClient_attempt_1483079926406_0009_r_000005_0_-1994527039_1
2016-12-30 23:19:15,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: list corrupt file blocks returned: 100
2016-12-30 23:19:19,190 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:19:19,190 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from 69.30.216.2:54336 Call#14143 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit setting: user=mapred, inode=job_1454651686226_0001-1454914179431-gd-inject+%2Furls-1454892002623-2-12-SUCCEEDED-root.gd-1454891968703.jhist
2016-12-30 23:19:19,693 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000006_0/part-00006. BP-1989333537-69.30.216.2-1454724545643 blk_1074514308_773658{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]}
2016-12-30 23:19:20,005 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 69.30.216.2:50010 is added to blk_1074514308_773658{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-42f80e69-a127-480d-9a4e-c9ab9a695c32:NORMAL:69.30.216.2:50010|RBW]]} size 0
2016-12-30 23:19:20,156 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/gd/crawldirectory/segments/20161230231747/crawl_generate/_temporary/1/_temporary/attempt_1483079926406_0009_r_000006_0/part-00006 is closed by DFSClient_attempt_1483079926406_0009_r_000006_0_-1284167788_1
2016-12-30 23:19:25,608 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2016-12-30 23:19:25,619 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at server1.xyz.com/269.130.216.312
************************************************************/
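A side note on the `Permission denied by sticky bit setting` warnings in the log above: when a directory has the sticky bit set, only a file's owner (or the superuser) may delete files in it, which is why `mapred` cannot remove those `.jhist` files. The local-filesystem demonstration below shows the bit itself; the HDFS equivalents would be something like `hdfs dfs -ls -d <dir>` to inspect the mode and an `hdfs dfs -chmod` run as the `hdfs` user to change it (exact directory depends on your job-history configuration):

```shell
# Local-filesystem demonstration of sticky-bit semantics.
mkdir -p sticky_demo
chmod 1777 sticky_demo        # world-writable, sticky bit set
ls -ld sticky_demo            # mode ends in 't' (drwxrwxrwt)

chmod a-t sticky_demo         # clear the sticky bit
ls -ld sticky_demo            # mode now ends in 'x'
```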

