<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Name Node is not starting after performing the disk expansion activity on the name node HDFS disk in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/412873#M253741</link>
    <description>&lt;P&gt;&lt;SPAN&gt;While loading the fsimage, why are the permissions nifi:hdfs:rwx in place of hdfs:hadoop?&lt;BR /&gt;Encountered exception on operation MkdirOp&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 19 Nov 2025 09:30:20 GMT</pubDate>
    <dc:creator>Asfahan</dc:creator>
    <dc:date>2025-11-19T09:30:20Z</dc:date>
    <item>
      <title>Name Node is not starting after performing the disk expansion activity on the name node HDFS disk</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/342884#M233823</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have a 3-node HDP cluster in which 2 nodes act as both NameNode and DataNode. We recently extended the HDFS disk from 1 TB to 3 TB. After that, the system was rebooted, and since then my NameNodes are not coming up.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The NameNode logs are below:&lt;/P&gt;
&lt;P&gt;2022-05-01 14:00:49,272 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(979)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis&lt;BR /&gt;2022-05-01 14:00:49,274 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache&lt;BR /&gt;2022-05-01 14:00:49,274 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit&lt;BR /&gt;2022-05-01 14:00:49,275 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 2.0 GB = 621.3 KB&lt;BR /&gt;2022-05-01 14:00:49,275 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536 entries&lt;BR /&gt;2022-05-01 14:00:49,299 INFO common.Storage (Storage.java:tryLock(776)) - Lock on /grid1/hadoop/hdfs/namenode/in_use.lock acquired by nodename 3716@digaudanaqamn2.gt.com&lt;BR /&gt;2022-05-01 14:00:49,354 INFO common.Storage (Storage.java:tryLock(776)) - Lock on /mnt/resource/hadoop/hdfs/namenode/in_use.lock acquired by nodename 3716@digaudanaqamn2.gt.com&lt;BR /&gt;2022-05-01 14:00:49,354 INFO namenode.FSImage (FSImage.java:recoverTransitionRead(277)) - Storage directory /mnt/resource/hadoop/hdfs/namenode is not formatted.&lt;BR /&gt;2022-05-01 14:00:49,354 INFO namenode.FSImage (FSImage.java:recoverTransitionRead(278)) - Formatting ...&lt;BR /&gt;2022-05-01 14:00:49,354 INFO common.Storage (Storage.java:clearDirectory(340)) - Will remove files: []&lt;BR /&gt;2022-05-01 14:00:49,355 WARN namenode.FSImage (NNStorage.java:readAndInspectDirs(1049)) - Storage directory Storage Directory /mnt/resource/hadoop/hdfs/namenode contains no VERSION file. 
Skipping...&lt;BR /&gt;2022-05-01 14:00:49,379 INFO namenode.FSImageTransactionalStorageInspector (FSImageTransactionalStorageInspector.java:inspectDirectory(85)) - No version file in /mnt/resource/hadoop/hdfs/namenode&lt;BR /&gt;2022-05-01 14:00:49,773 INFO namenode.FSImage (FSImage.java:loadFSImageFile(745)) - Planning to load image: FSImageFile(file=/grid1/hadoop/hdfs/namenode/current/fsimage_0000000000029731317, cpktTxId=0000000000029731317)&lt;BR /&gt;2022-05-01 14:00:49,868 INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(257)) - Loading 187768 INodes.&lt;BR /&gt;2022-05-01 14:00:50,935 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(184)) - Loaded FSImage in 1 seconds.&lt;BR /&gt;2022-05-01 14:00:50,935 INFO namenode.FSImage (FSImage.java:loadFSImage(911)) - Loaded image for txid 29731317 from /grid1/hadoop/hdfs/namenode/current/fsimage_0000000000029731317&lt;BR /&gt;2022-05-01 14:00:50,940 INFO namenode.FSImage (FSImage.java:loadEdits(849)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@52eacb4b expecting start txid #29731318&lt;BR /&gt;2022-05-01 14:00:50,941 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file &lt;A href="http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0" target="_blank" rel="noopener"&gt;http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0&lt;/A&gt;, &lt;A href="http://digaudanaqamn3.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0" target="_blank" 
rel="noopener"&gt;http://digaudanaqamn3.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0&lt;/A&gt;&lt;BR /&gt;2022-05-01 14:00:50,946 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '&lt;A href="http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0" target="_blank" rel="noopener"&gt;http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0&lt;/A&gt;, &lt;A href="http://digaudanaqamn3.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0" target="_blank" rel="noopener"&gt;http://digaudanaqamn3.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0&lt;/A&gt;' to transaction ID 29731318&lt;BR /&gt;2022-05-01 14:00:50,946 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '&lt;A href="http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0" target="_blank" rel="noopener"&gt;http://digaudanaqamn2.gt.com:8480/getJournal?jid=gtqa&amp;amp;segmentTxId=29606282&amp;amp;storageInfo=-63%3A897785728%3A0%3ACID-dc49d075-7a70-4ddf-b66b-b39b4b445af0&lt;/A&gt;' to transaction ID 29731318&lt;BR /&gt;2022-05-01 14:00:51,588 ERROR namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(242)) - Encountered exception on operation MkdirOp [length=0, inodeId=11343504, 
path=/tmp/hive/nifi/9de63ed4-db2a-4164-b142-2e331cd008e3/hive_2022-04-06_10-25-40_878_569887095825365790-87, timestamp=1649240741042, permissions=nifi:hdfs:rwx------, aclEntries=null, opCode=OP_MKDIR, txid=29731504, xAttrs=[]]&lt;BR /&gt;java.lang.IllegalStateException&lt;BR /&gt;at com.google.common.base.Preconditions.checkState(Preconditions.java:129)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirForEditLog(FSDirMkdirOp.java:182)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:572)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)&lt;BR /&gt;2022-05-01 14:00:51,588 ERROR namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(242)) - Encountered exception on operation MkdirOp [length=0, inodeId=11343504, path=/tmp/hive/nifi/9de63ed4-db2a-4164-b142-2e331cd008e3/hive_2022-04-06_10-25-40_878_569887095825365790-87, timestamp=1649240741042, permissions=nifi:hdfs:rwx------, aclEntries=null, opCode=OP_MKDIR, txid=29731504, xAttrs=[]]&lt;BR /&gt;java.lang.IllegalStateException&lt;BR /&gt;at com.google.common.base.Preconditions.checkState(Preconditions.java:129)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirForEditLog(FSDirMkdirOp.java:182)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:572)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:852)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:707)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)&lt;BR /&gt;at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:1001)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:985)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)&lt;BR /&gt;2022-05-01 14:00:51,695 INFO namenode.FSNamesystem (FSNamesystem.java:writeUnlock(1689)) - FSNamesystem write lock held for 2416 ms via&lt;BR /&gt;java.lang.Thread.getStackTrace(Thread.java:1556)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1690)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1105)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:1001)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:985)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)&lt;BR /&gt;Number of suppressed write-lock reports: 0&lt;BR /&gt;Longest write-lock 
held interval: 2416&lt;BR /&gt;2022-05-01 14:00:51,695 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(726)) - Encountered exception loading fsimage&lt;BR /&gt;java.io.IOException: java.lang.IllegalStateException&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:244)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:852)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:707)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:707)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:1001)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:985)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)&lt;BR /&gt;Caused by: java.lang.IllegalStateException&lt;BR /&gt;at 
com.google.common.base.Preconditions.checkState(Preconditions.java:129)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirForEditLog(FSDirMkdirOp.java:182)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:572)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)&lt;BR /&gt;... 12 more&lt;BR /&gt;2022-05-01 14:00:51,719 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@digaudanaqamn2.gt.com:50070&lt;BR /&gt;2022-05-01 14:00:51,820 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...&lt;BR /&gt;2022-05-01 14:00:51,821 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.&lt;BR /&gt;2022-05-01 14:00:51,824 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.&lt;BR /&gt;2022-05-01 14:00:51,824 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.&lt;BR /&gt;2022-05-01 14:00:51,824 ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode.&lt;BR /&gt;java.io.IOException: java.lang.IllegalStateException&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:244)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:852)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:707)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)&lt;BR /&gt;at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:1001)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.&amp;lt;init&amp;gt;(NameNode.java:985)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1778)&lt;BR /&gt;Caused by: java.lang.IllegalStateException&lt;BR /&gt;at com.google.common.base.Preconditions.checkState(Preconditions.java:129)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirForEditLog(FSDirMkdirOp.java:182)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:572)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234)&lt;BR /&gt;... 12 more&lt;BR /&gt;2022-05-01 14:00:51,826 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any leads?&lt;/P&gt;</description>
      <pubDate>Mon, 02 May 2022 14:34:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/342884#M233823</guid>
      <dc:creator>Maddy2</dc:creator>
      <dc:date>2022-05-02T14:34:24Z</dc:date>
    </item>
    <item>
      <title>Re: Name Node is not starting after performing the disk expansion activity on the name node HDFS disk</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/412873#M253741</link>
      <description>&lt;P&gt;&lt;SPAN&gt;While loading the fsimage, why are the permissions nifi:hdfs:rwx in place of hdfs:hadoop?&lt;BR /&gt;Encountered exception on operation MkdirOp&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Nov 2025 09:30:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/412873#M253741</guid>
      <dc:creator>Asfahan</dc:creator>
      <dc:date>2025-11-19T09:30:20Z</dc:date>
    </item>
    <item>
      <title>Re: Name Node is not starting after performing the disk expansion activity on the name node HDFS disk</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/413311#M253996</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/97666"&gt;@Maddy2&lt;/a&gt;&amp;nbsp;FYI&lt;BR /&gt;&lt;BR /&gt;➤ Based on the logs you provided, your NameNode is failing to start because it hit a metadata inconsistency while replaying the edit logs. This is a critical issue: the NameNode's current state (loaded from the FSImage) contradicts an operation in the edit logs it is trying to replay.&lt;/P&gt;&lt;P&gt;➤ The root cause&lt;BR /&gt;The specific error is a java.lang.IllegalStateException during an OP_MKDIR operation (transaction ID 29731504).&lt;BR /&gt;The NameNode is trying to create a directory (/tmp/hive/nifi/...), but the checkState precondition fails because the parent directory for that path does not exist in the namespace it just loaded from the FSImage.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;This likely happened because:&lt;BR /&gt;Disk expansion/reboot out of sync: when you expanded the disk and rebooted, one of the storage directories (/mnt/resource/hadoop/hdfs/namenode) was flagged as unformatted or empty.&lt;BR /&gt;Metadata corruption: there is a mismatch between your last successful checkpoint (fsimage_0000000000029731317) and the subsequent edits stored on your JournalNodes.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;➤ Recommended solution: metadata recovery&lt;BR /&gt;Since this is an HDP (Hortonworks Data Platform) cluster with High Availability (HA), you should attempt to recover by syncing from the "good" copy of the metadata or by forcing a metadata skip.&lt;/P&gt;&lt;P&gt;=&amp;gt; Step 1: Identify the healthy NameNode&lt;BR /&gt;Make sure you are working on the NameNode that has the most recent and intact metadata. Check the other NameNode's logs to see whether it also fails at the same transaction ID.&lt;/P&gt;&lt;P&gt;=&amp;gt; Step 2: On the standby (failing) NameNode, check the ownership and permissions of the edit logs and fsimage files under dfs.namenode.name.dir, and verify that they match those on the active NameNode.&lt;/P&gt;&lt;P&gt;=&amp;gt; Step 3: Bootstrap from the healthy NameNode (if HA is otherwise healthy)&lt;BR /&gt;If one NameNode can start or has better metadata, you can re-sync the failing node:&lt;/P&gt;&lt;P&gt;1. Stop the failing NameNode.&lt;BR /&gt;2. On the failing node, back up and then clear the NameNode storage directories (as defined in dfs.namenode.name.dir).&lt;BR /&gt;3. Run the bootstrap command to pull metadata from the active/healthy NameNode:&lt;/P&gt;&lt;P&gt;$ hdfs namenode -bootstrapStandby&lt;/P&gt;&lt;P&gt;4. Start the NameNode.&lt;/P&gt;</description>
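The re-sync in Step 3 above can be sketched as a shell session. This is a minimal, hedged sketch, not a definitive runbook: it assumes dfs.namenode.name.dir is /grid1/hadoop/hdfs/namenode (as seen in the lock messages in the logs), and the daemon start/stop paths vary by HDP version, so prefer Ambari for those and treat the commented paths as placeholders.

```shell
# Run on the FAILING NameNode host only, after confirming the other
# NameNode is healthy. Assumed metadata directory (from the logs above);
# adjust to your actual dfs.namenode.name.dir value.
NN_DIR=/grid1/hadoop/hdfs/namenode

# 1. Stop the failing NameNode (via Ambari, or a daemon script such as):
#    su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop namenode"

# 2. Back up, then clear, the local NameNode storage directory so the
#    bootstrap starts from a clean slate (never delete without a backup):
mv "$NN_DIR" "${NN_DIR}.bak.$(date +%Y%m%d)"

# 3. Pull a fresh copy of the namespace from the active NameNode:
su -l hdfs -c "hdfs namenode -bootstrapStandby"

# 4. Start the NameNode again (via Ambari or the daemon script) and tail
#    its log to confirm the fsimage loads past txid 29731504 cleanly.
```

If bootstrapStandby refuses to run because the directory is non-empty, that is usually a sign the clear in step 2 missed one of the configured dfs.namenode.name.dir entries (the logs show two: /grid1/... and /mnt/resource/...).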
      <pubDate>Sat, 10 Jan 2026 05:41:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Name-Node-is-not-starting-after-performing-the-disk/m-p/413311#M253996</guid>
      <dc:creator>9een</dc:creator>
      <dc:date>2026-01-10T05:41:23Z</dc:date>
    </item>
  </channel>
</rss>

