<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization. in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/378947#M243729</link>
    <description>Support Questions thread: the JVM heap memory on the NiFi primary node gradually grows until it reaches 100% utilization.</description>
    <pubDate>Mon, 13 Nov 2023 13:44:09 GMT</pubDate>
    <dc:creator>edim2525</dc:creator>
    <dc:date>2023-11-13T13:44:09Z</dc:date>
    <item>
      <title>NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/377175#M243165</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have a NiFi cluster with three nodes.&lt;/P&gt;&lt;P&gt;Each node has 16 CPUs and 100 GB of RAM.&lt;/P&gt;&lt;P&gt;Currently, the&amp;nbsp;JVM heap memory on the primary node gradually grows until it reaches full RAM capacity and the node crashes with an OOM error.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please help me address this issue?&lt;/P&gt;&lt;P&gt;Why does the memory on the primary node keep growing? Isn't the GC supposed to clean up the heap?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;bootstrap.conf&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# Java command to use when running NiFi
java=java

# Username to use when running NiFi. This value will be ignored on Windows.
run.as=nifi

# Preserve shell environment while running as "run.as" user
preserve.environment=false

# Configure where NiFi's lib and conf directories live
lib.dir=/opt/nifi/lib
conf.dir=/opt/nifi/conf

# How long to wait after telling NiFi to shutdown before explicitly killing the Process
graceful.shutdown.seconds=20

# Disable JSR 199 so that we can use JSP's without running a JDK
java.arg.1=-Dorg.apache.jasper.compiler.disablejsr199=true

# JVM memory settings
java.arg.2=-Xms40g
java.arg.3=-Xmx40g

# Prefer the IPv4 stack for network communications
java.arg.4=-Djava.net.preferIPv4Stack=true

# allowRestrictedHeaders is required for Cluster/Node communications to work properly
java.arg.5=-Dsun.net.http.allowRestrictedHeaders=true
java.arg.6=-Djava.protocol.handler.pkgs=sun.net.www.protocol

# The G1GC is known to cause some problems in Java 8 and earlier, but the issues were addressed in Java 9. If using Java 8 or earlier,
# it is recommended that G1GC not be used, especially in conjunction with the Write Ahead Provenance Repository. However, if using a newer
# version of Java, it can result in better performance without significant "stop-the-world" delays.
#java.arg.13=-XX:+UseG1GC
java.arg.7=-XX:+UseG1GC
java.arg.9=-XX:+UseLargePages
java.arg.10=-XX:+AlwaysPreTouch
java.arg.11=-XX:MetaspaceSize=256m
java.arg.12=-XX:MaxGCPauseMillis=100
java.arg.13=-XX:G1HeapRegionSize=32M

#Set headless mode by default
java.arg.14=-Djava.awt.headless=true&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The green line is the primary node .&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2023-10-03 at 14.05.15.png" style="width: 771px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/38594i828E6312C15D6E36/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2023-10-03 at 14.05.15.png" alt="Screenshot 2023-10-03 at 14.05.15.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2023-10-03 at 14.14.08.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/38595iA3984574ED3C7CA5/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2023-10-03 at 14.14.08.png" alt="Screenshot 2023-10-03 at 14.14.08.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Oct 2023 11:17:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/377175#M243165</guid>
      <dc:creator>edim2525</dc:creator>
      <dc:date>2023-10-03T11:17:52Z</dc:date>
    </item>
    <item>
      <title>Re: NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/377179#M243168</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/106502"&gt;@edim2525&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;GC kicks in around 80% of heap memory usage.&amp;nbsp; You could certainly enable GC debug logging to verify that GC is executing.&amp;nbsp; GC can only clean up unused memory (memory no longer being held by a process).&amp;nbsp; I see you have three NiFi nodes. Are you only having heap memory usage issues on the one node?&amp;nbsp;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;I see the node with growing heap usage is the elected primary node.&lt;BR /&gt;What processors do you have running with "primary node" execution?&lt;BR /&gt;Does your primary node have a lot more queued FlowFiles than the other nodes?&lt;BR /&gt;If you disconnect the primary node from your cluster, which will force a new primary node to be elected, does the heap then start to grow on the newly elected primary node?&lt;BR /&gt;&lt;BR /&gt;What version of NiFi are you running?&lt;BR /&gt;What version of Java is your NiFi running with?&lt;BR /&gt;&lt;BR /&gt;Have you collected heap dumps and analyzed them to see where the heap is being used?&lt;BR /&gt;Do you have any custom processors added to your NiFi?&lt;BR /&gt;Are you using any scripting-based processors where you have written your own code that is being executed within NiFi?&lt;BR /&gt;&lt;BR /&gt;Matt&lt;/P&gt;</description>
      <pubDate>Tue, 03 Oct 2023 14:36:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/377179#M243168</guid>
      <dc:creator>MattWho</dc:creator>
      <dc:date>2023-10-03T14:36:40Z</dc:date>
    </item>
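Matt's first suggestion above (enabling GC debug logging) can be done directly in bootstrap.conf. A minimal sketch using the JDK 9+ unified logging flag; the argument index and log path are assumptions, so pick an unused java.arg number and a path that exists on your nodes:

```properties
# Enable GC debug logging (JDK 9+ unified logging syntax);
# keeps 10 rotated log files of 10 MB each
java.arg.15=-Xlog:gc*:file=/opt/nifi/logs/gc.log:time,uptime,level,tags:filecount=10,filesize=10M
```

With this in place, steadily rising post-GC heap occupancy in gc.log (rather than the sawtooth of a healthy heap) confirms a genuine leak rather than a GC problem.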
    <item>
      <title>Re: NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/378713#M243638</link>
      <description>&lt;P&gt;Thank you for your reply, I'll check and update.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Nov 2023 13:27:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/378713#M243638</guid>
      <dc:creator>edim2525</dc:creator>
      <dc:date>2023-11-07T13:27:29Z</dc:date>
    </item>
    <item>
      <title>Re: NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/378947#M243729</link>
      <description>&lt;P&gt;It looks like the high heap usage on the primary was caused by too many open files. Is there a way to identify the processor that opens many files?&lt;BR /&gt;&lt;BR /&gt;We are using OpenJDK 17.0.7, NiFi v&amp;nbsp;&lt;SPAN&gt;1.21.0, and custom processors.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/35454"&gt;@MattWho&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/106502"&gt;@edim2525&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;GC kicks in around 80% of heap memory usage.&amp;nbsp; You could certainly enable GC debug logging to verify that GC is executing.&amp;nbsp; GC can only clean up unused memory (memory no longer being held by a process).&amp;nbsp; I see you have three NiFi nodes. Are you only having heap memory usage issues on the one node?&amp;nbsp;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;I see the node with growing heap usage is the elected primary node.&lt;BR /&gt;What processors do you have running with "primary node" execution?&lt;BR /&gt;Does your primary node have a lot more queued FlowFiles than the other nodes?&lt;BR /&gt;If you disconnect the primary node from your cluster, which will force a new primary node to be elected, does the heap then start to grow on the newly elected primary node?&lt;BR /&gt;&lt;BR /&gt;What version of NiFi are you running?&lt;BR /&gt;What version of Java is your NiFi running with?&lt;BR /&gt;&lt;BR /&gt;Have you collected heap dumps and analyzed them to see where the heap is being used?&lt;BR /&gt;Do you have any custom processors added to your NiFi?&lt;BR /&gt;Are you using any scripting-based processors where you have written your own code that is being executed within NiFi?&lt;BR /&gt;&lt;BR /&gt;Matt&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN&gt;2023-11-11 04:13:18,240 ERROR&amp;nbsp;&lt;SPAN class="error"&gt;[Timer-Driven Process 
Thread-30]&lt;/SPAN&gt;&amp;nbsp;o.a.n.p.kafka.pubsub.PublishKafka_2_6 PublishKafka_2_6&lt;SPAN class="error"&gt;[id=c8d6883a-8b07-3d38-b28f-fc8263c83a39]&lt;/SPAN&gt;&amp;nbsp;Processing halted: yielding&amp;nbsp;&lt;SPAN class="error"&gt;[1 sec]&lt;/SPAN&gt;&lt;BR /&gt;java.lang.IllegalStateException: Cannot complete publishing to Kafka because Publisher Lease was already closed&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processors.kafka.pubsub.PublisherLease.complete(PublisherLease.java:476)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6.onTrigger(PublishKafka_2_6.java:490)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1360)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.lang.Thread.run(Thread.java:833)&lt;BR /&gt;2023-11-11 04:13:18,241 WARN&amp;nbsp;&lt;SPAN class="error"&gt;[Timer-Driven Process Thread-30]&lt;/SPAN&gt;&amp;nbsp;o.a.n.controller.tasks.ConnectableTask Processing halted: uncaught exception in Component [PublishKafka_2_6&lt;SPAN class="error"&gt;[id=c8d6883a-8b07-3d38-b28f-fc8263c83a39]&lt;/SPAN&gt;]&lt;BR /&gt;java.lang.IllegalStateException: Cannot complete publishing to Kafka because Publisher Lease was already closed&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processors.kafka.pubsub.PublisherLease.complete(PublisherLease.java:476)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6.onTrigger(PublishKafka_2_6.java:490)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1360)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.lang.Thread.run(Thread.java:833)&lt;BR /&gt;2023-11-11 04:13:18,247 ERROR&amp;nbsp;&lt;SPAN class="error"&gt;[Listen to Bootstrap]&lt;/SPAN&gt;&amp;nbsp;org.apache.nifi.BootstrapListener Failed to process request from Bootstrap due to java.io.IOException: Too many open files&lt;BR /&gt;java.io.IOException: Too many open files&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/sun.nio.ch.Net.accept(Native Method)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/sun.nio.ch.NioSocketImpl.timedAccept(NioSocketImpl.java:711)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/sun.nio.ch.NioSocketImpl.accept(NioSocketImpl.java:752)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:675)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.base/java.net.ServerSocket.platformImplAccept(ServerSocket.java:641)&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/38922iD7336120603CA18A/image-size/large?v=v2&amp;amp;px=999" role="button" title="image.png" alt="image.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 13 Nov 2023 13:44:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/378947#M243729</guid>
      <dc:creator>edim2525</dc:creator>
      <dc:date>2023-11-13T13:44:09Z</dc:date>
    </item>
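As a starting point for the follow-up question above (which processor opens many files), you can first confirm how many descriptors the NiFi JVM holds on Linux. A sketch, with the caveats that the `pgrep` pattern is an assumption about how NiFi appears in the process list and that `lsof` may not be installed by default:

```shell
#!/bin/sh
# Count open file descriptors for a PID via /proc (Linux).
# Defaults to the current shell so the sketch runs standalone;
# for NiFi, try: pid=$(pgrep -f org.apache.nifi.NiFi | head -n 1)
pid="${1:-$$}"
n=$(ls "/proc/$pid/fd" | wc -l)   # one /proc entry per open descriptor
echo "PID $pid has $n open file descriptors"
# To see which paths dominate, group lsof output by file name:
# lsof -p "$pid" | awk '{print $NF}' | sort | uniq -c | sort -rn | head
```

Watching the per-path counts over time (content/provenance repository files, sockets, custom-processor files) should point at the component leaking handles.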
    <item>
      <title>Re: NIFI - The JVM heap memory on the primary node gradually grows until it reaches full capacity at 100% memory utilization.</title>
      <link>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/395766#M249022</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/106502"&gt;@edim2525&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;NiFi needs access to a lot of file handles, since your dataflow can consist of many components, each with multiple concurrent tasks, plus a lot of individual FlowFiles traversing your dataflows.&amp;nbsp; The typical default open file limit is 10,000.&amp;nbsp; I'd recommend setting a much larger open file limit of 100,000 to 999,999.&amp;nbsp; This will solve your "Too many open files" error.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Please help our community thrive. If you found that&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;any&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "&lt;SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;Accept as Solution&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/EM&gt;" on&amp;nbsp;&lt;STRONG&gt;one or more&lt;/STRONG&gt;&amp;nbsp;of them that helped.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thank you,&lt;BR /&gt;Matt&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 22 Oct 2024 20:36:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/NIFI-The-JVM-heap-memory-on-the-primary-node-gradually-grows/m-p/395766#M249022</guid>
      <dc:creator>MattWho</dc:creator>
      <dc:date>2024-10-22T20:36:33Z</dc:date>
    </item>
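The larger open-file limit recommended above is usually set at the OS level. A sketch for /etc/security/limits.conf, assuming NiFi runs as the `nifi` user (matching `run.as=nifi` in the bootstrap.conf earlier in the thread); if NiFi is started by systemd, set `LimitNOFILE=` in the service unit instead, since limits.conf only applies to PAM login sessions:

```properties
# /etc/security/limits.conf: raise the open file limit for the nifi user
# (upper end of the 100,000 to 999,999 range suggested above)
nifi  soft  nofile  999999
nifi  hard  nofile  999999
```

After changing the limit, restart NiFi and verify the running process actually picked it up with `cat /proc/&lt;nifi-pid&gt;/limits`.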
  </channel>
</rss>

