<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDFS Slow start in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413567#M254143</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11754"&gt;@ganzuoni&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Thanks for reaching out; here is your answer:&lt;/P&gt;&lt;P&gt;With your current configuration you have reached the limit: Cloudera does not recommend more than 300M blocks per cluster or more than 10M blocks per DataNode, yet your DataNodes hold up to 40M blocks each.&lt;/P&gt;&lt;P&gt;Setting&amp;nbsp;&lt;SPAN&gt;dfs.blockreport.split.threshold to 0&amp;nbsp;is a reasonable plan, but first confirm that block reports are actually causing the slowness by checking for "&lt;STRONG&gt;Block report queue is full" &lt;/STRONG&gt;messages in the NameNode logs.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Moreover, please check the NN logs for read locks or write locks held for more than 10 seconds; if you find any, read the thread dump to see where it is stuck. Also check:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;1. Whether any snapshot policy is running&lt;BR /&gt;2. Whether the Balancer is running&lt;BR /&gt;3. Which user is doing which operations (the NN audit logs will show this)&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;4.&amp;nbsp;Whether there are GC pauses in the NN and DN logs; we recommend 1 GB of heap per 1M blocks&lt;BR /&gt;&lt;BR /&gt;For example, if in the audit logs you find that XXX@user is issuing thousands more getfile RPCs than any other user, try stopping that job for a while to confirm whether the others speed up.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 23 Feb 2026 06:09:50 GMT</pubDate>
    <dc:creator>Asfahan</dc:creator>
    <dc:date>2026-02-23T06:09:50Z</dc:date>
    <item>
      <title>HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413566#M254142</link>
      <description>&lt;P&gt;I'm experiencing a very slow HDFS start in CDP 7.1.7 SP1 on a cluster with a huge number of blocks (over 300 million, with each server holding up to 40 million).&lt;/P&gt;&lt;P&gt;I've checked this:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Community-Articles/Scaling-the-HDFS-NameNode-part-5/ta-p/327450" target="_blank"&gt;https://community.cloudera.com/t5/Community-Articles/Scaling-the-HDFS-NameNode-part-5/ta-p/327450&lt;/A&gt;&lt;/P&gt;&lt;P&gt;and I wonder if setting&amp;nbsp;dfs.blockreport.split.threshold to 0 might somehow speed up the process.&lt;/P&gt;&lt;P&gt;I've seen that the setting should go in&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;NameNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Is this setting service-wide, so that a full restart is necessary?&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 06:10:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413566#M254142</guid>
      <dc:creator>ganzuoni</dc:creator>
      <dc:date>2026-04-21T06:10:34Z</dc:date>
    </item>
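    <!-- Editor's note: a minimal sketch of the safety-valve entry discussed above. The property name dfs.blockreport.split.threshold is a real HDFS setting (reports from DataNodes with more blocks than the threshold are sent as one message per storage; 0 means always split), but the value and its exact placement in Cloudera Manager are assumptions to verify against your CDP version.

    ```xml
    <property>
      <name>dfs.blockreport.split.threshold</name>
      <value>0</value>
    </property>
    ```

    Since block reports originate on the DataNodes, the snippet typically needs to reach the DataNode role's hdfs-site.xml as well, followed by a restart of the affected roles.
    -->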
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413567#M254143</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11754"&gt;@ganzuoni&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Thanks for reaching out; here is your answer:&lt;/P&gt;&lt;P&gt;With your current configuration you have reached the limit: Cloudera does not recommend more than 300M blocks per cluster or more than 10M blocks per DataNode, yet your DataNodes hold up to 40M blocks each.&lt;/P&gt;&lt;P&gt;Setting&amp;nbsp;&lt;SPAN&gt;dfs.blockreport.split.threshold to 0&amp;nbsp;is a reasonable plan, but first confirm that block reports are actually causing the slowness by checking for "&lt;STRONG&gt;Block report queue is full" &lt;/STRONG&gt;messages in the NameNode logs.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Moreover, please check the NN logs for read locks or write locks held for more than 10 seconds; if you find any, read the thread dump to see where it is stuck. Also check:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;1. Whether any snapshot policy is running&lt;BR /&gt;2. Whether the Balancer is running&lt;BR /&gt;3. Which user is doing which operations (the NN audit logs will show this)&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;4.&amp;nbsp;Whether there are GC pauses in the NN and DN logs; we recommend 1 GB of heap per 1M blocks&lt;BR /&gt;&lt;BR /&gt;For example, if in the audit logs you find that XXX@user is issuing thousands more getfile RPCs than any other user, try stopping that job for a while to confirm whether the others speed up.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 06:09:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413567#M254143</guid>
      <dc:creator>Asfahan</dc:creator>
      <dc:date>2026-02-23T06:09:50Z</dc:date>
    </item>
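    <!-- Editor's note: the "lock held" messages referred to above are logged by the NameNode when the namesystem lock is held past a reporting threshold. The two property names below exist in Apache Hadoop; the 5000 ms values shown are believed to be the upstream defaults but should be verified for your CDP release before relying on them.

    ```xml
    <property>
      <name>dfs.namenode.write-lock-reporting-threshold-ms</name>
      <value>5000</value>
    </property>
    <property>
      <name>dfs.namenode.read-lock-reporting-threshold-ms</name>
      <value>5000</value>
    </property>
    ```
    -->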
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413569#M254145</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/79963"&gt;@Asfahan&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for the answer. Yes, I understand that the cluster is a little oversized.&lt;/P&gt;&lt;P&gt;On the topic: I don't find any "Block report queue is full" message, but I do see several write locks held for a long duration, although, strangely enough, not during the HDFS service startup.&lt;/P&gt;&lt;P&gt;What I do find is a number of requests coming in via the NFS Gateway (around 3000/minute), and several GC (Allocation Failure) entries in the GC log during the first 20 minutes of startup, with several more towards the end, when all the DataNodes had reported their blocks.&lt;/P&gt;&lt;P&gt;The NN has 160 GB of heap and the DNs 30 GB.&lt;/P&gt;&lt;P&gt;What I found strange is&amp;nbsp;dfs_datanode_handler_count set to 3; that might be the cause of the original issue that forced me to restart the service.&lt;/P&gt;&lt;P&gt;In fact, I was decommissioning one node, and when it started I suddenly experienced a huge performance degradation, even though network, HDFS and disk I/O were not that critical&amp;nbsp;&lt;/P&gt;&lt;P&gt;(cluster net I/O peaked at 280 MB/s, HDFS I/O at 190 MB/s, and disk write I/O at 300 MB/s).&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 08:49:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413569#M254145</guid>
      <dc:creator>ganzuoni</dc:creator>
      <dc:date>2026-02-23T08:49:34Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413570#M254146</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/11754"&gt;@ganzuoni&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Given the current sizing, 160 GB of NN heap for 300M blocks is far too little, and that is why you see this type of GC allocation failure in the cluster. Please increase the NN heap to 300-320 GB and the DN heap to at least 40 GB.&lt;/P&gt;&lt;P&gt;The handler count has its own calculation; you can review the post below for that:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Support-Questions/Can-we-check-how-many-namenode-handler-count-are-used-in-a/td-p/281142" target="_blank"&gt;https://community.cloudera.com/t5/Support-Questions/Can-we-check-how-many-namenode-handler-count-are-used-in-a/td-p/281142&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Your cluster is already over-utilised and does not have enough resources; the moment you decommission a DN, it will start re-replicating that node's blocks, which is itself a bandwidth-heavy job and causes further performance issues.&lt;BR /&gt;First we need to fix the cluster with enough resources.&lt;BR /&gt;Could you give us the complete thread for a write-lock held? Also, did you find anything in the audit logs?&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 10:04:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413570#M254146</guid>
      <dc:creator>Asfahan</dc:creator>
      <dc:date>2026-02-23T10:04:51Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413571#M254147</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/79963"&gt;@Asfahan&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yes, the heap should be around 300 GB, but this is what the NN says on the web UI:&lt;/P&gt;&lt;P&gt;Heap Memory used 111.53 GB of 169.41 GB Heap Memory. Max Heap Memory is 169.41 GB.&lt;/P&gt;&lt;P&gt;As for the handlers,&amp;nbsp;dfs_namenode_handler_count is 70 (it should be 80 with 17 DataNodes), while&amp;nbsp;dfs_datanode_handler_count is at its default value of 3.&lt;/P&gt;&lt;P&gt;On a different cluster I had this set to 24.&lt;/P&gt;&lt;P&gt;This is the stack trace for a write-lock held in the active NN:&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;2026-02-20 11:01:44,596 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of suppressed write-lock reports: 0&lt;BR /&gt;Longest write-lock held at 1972-02-11 21:18:16,333+0100 for 6157ms via java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:262)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:226)&lt;BR /&gt;org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1696)&lt;BR /&gt;org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.processBlocksInternal(DatanodeAdminManager.java:703)&lt;BR /&gt;org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.pruneReliableBlocks(DatanodeAdminManager.java:644)&lt;BR /&gt;org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:572)&lt;BR /&gt;org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:506)&lt;BR /&gt;java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;BR 
/&gt;java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)&lt;BR /&gt;java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)&lt;BR /&gt;java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)&lt;BR /&gt;java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;java.lang.Thread.run(Thread.java:748)&lt;/P&gt;&lt;P&gt;Total suppressed write-lock held time: 0.0&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 10:31:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413571#M254147</guid>
      <dc:creator>ganzuoni</dc:creator>
      <dc:date>2026-02-23T10:31:46Z</dc:date>
    </item>
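    <!-- Editor's note: dfs_datanode_handler_count in Cloudera Manager corresponds to the HDFS property below (the number of server threads for the DataNode's RPC server). The value 24 simply mirrors what the poster reports using on another cluster; treat it as an illustration, not a sizing recommendation.

    ```xml
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>24</value>
    </property>
    ```
    -->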
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413572#M254148</link>
      <description>&lt;P&gt;On the DataNodes, the typical stack traces were these:&lt;/P&gt;&lt;P&gt;2026-02-20 12:01:41,486 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Lock held time above threshold: lock identifier: FsDatasetRWLock lockHeldTimeMs=8582 ms. Suppressed 0 lock warnings. Longest suppressed LockHeldTimeMs=0. The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.logWarning(InstrumentedLock.java:160)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:220)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedReadLock.unlock(InstrumentedReadLock.java:78)&lt;BR /&gt;org.apache.hadoop.util.AutoCloseableLock.release(AutoCloseableLock.java:84)&lt;BR /&gt;org.apache.hadoop.util.AutoCloseableLock.close(AutoCloseableLock.java:96)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1920)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:376)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:719)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:872)&lt;BR /&gt;java.lang.Thread.run(Thread.java:748)&lt;/P&gt;&lt;P&gt;2026-02-20 12:01:41,486 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Waited above threshold to acquire lock: lock identifier: FsDatasetRWLock waitTimeMs=7442 ms.&lt;BR /&gt;Suppressed 3 lock wait warnings. Longest suppressed WaitTimeMs=414. 
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)&lt;BR /&gt;org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1646)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BlockReceiver.&amp;lt;init&amp;gt;(BlockReceiver.java:212)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1303)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:762)&lt;BR /&gt;org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:178)&lt;BR /&gt;org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:112)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)&lt;BR /&gt;java.lang.Thread.run(Thread.java:748)&lt;/P&gt;&lt;P&gt;2026-02-20 11:06:02,845 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Waited above threshold to acquire lock: lock identifier: FsDatasetRWLock waitTimeMs=688 ms. S&lt;BR /&gt;uppressed 5 lock wait warnings. Longest suppressed WaitTimeMs=397. 
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)&lt;BR /&gt;org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1750)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:997)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:899)&lt;BR /&gt;org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:178)&lt;BR /&gt;org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:112)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)&lt;BR /&gt;java.lang.Thread.run(Thread.java:748)&lt;/P&gt;&lt;P&gt;and this&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2026-02-20 11:11:44,500 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Waited above threshold to acquire lock: lock identifier: FsDatasetRWLock waitTimeMs=443 ms. Suppressed 1 lock wait warnings. Longest suppressed WaitTimeMs=412. 
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559)&lt;BR /&gt;org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)&lt;BR /&gt;org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)&lt;BR /&gt;org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap.get(ReplicaMap.java:115)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.validateBlockFile(FsDatasetImpl.java:2036)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReplica(FsDatasetImpl.java:808)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReplica(FsDatasetImpl.java:801)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getLength(FsDatasetImpl.java:794)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkBlock(FsDatasetImpl.java:1988)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlock(DataNode.java:2315)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlocks(DataNode.java:2372)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:726)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:684)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processCommand(BPServiceActor.java:1334)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.lambda$enqueue$2(BPServiceActor.java:1380)&lt;BR /&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1307)&lt;BR 
/&gt;org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1290)&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 10:40:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413572#M254148</guid>
      <dc:creator>ganzuoni</dc:creator>
      <dc:date>2026-02-23T10:40:50Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Slow start</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413573#M254149</link>
      <description>&lt;P&gt;The lock held above is only 6-8 seconds, which will not cause the slowness; it also comes from the service RPC while blocks are being reported to the NN.&lt;/P&gt;&lt;P&gt;Check for any lock held for more than 10-15 seconds.&lt;/P&gt;&lt;P&gt;Heap utilisation and heap requirement are completely different things: to keep 300M blocks you need around 300 GB of heap, while the utilisation figure only reflects the currently running jobs. Please review the doc below:&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade/topics/cdpdc-hdfs.html" target="_blank"&gt;https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade/topics/cdpdc-hdfs.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Feb 2026 11:55:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Slow-start/m-p/413573#M254149</guid>
      <dc:creator>Asfahan</dc:creator>
      <dc:date>2026-02-23T11:55:45Z</dc:date>
    </item>
  </channel>
</rss>

