
NiFi not responding


My NiFi flow is unresponsive. Running nifi.sh status gives:

2017-10-17 12:09:15,040 INFO [main] org.apache.nifi.bootstrap.Command Apache NiFi is running at PID 24007 but is not responding to ping requests

1 ACCEPTED SOLUTION


We have resolved the issue. As usual, in hindsight it seems obvious. One of the processors, a TransformXML, had its concurrent task count set to 20. This seemed fine, as most files going through were about 3 MB each and our Java heap was set to 64 GB. However, new files were introduced that were in the 3 GB range. Simple math: 3 GB × 20 = 60 GB used up by that one processor. Since there were usually about 200k files flowing through at any one time, memory quickly got exhausted. We added a route on an attribute to send these large files to a processor instance with only a couple of concurrent tasks, while the smaller files still go through the processor with 20. Thanks for all the help and suggestions. @Hans Feldmann
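The arithmetic above can be sketched as a quick back-of-the-envelope check. The model here (one FlowFile's content held in heap per concurrent task) is an assumption for illustration, not a NiFi guarantee:

```python
# Back-of-the-envelope worst-case heap use for one NiFi processor, using the
# numbers from this thread. Assumes each concurrent task holds one file's
# content in memory at once.

def worst_case_heap_gb(file_size_gb: float, concurrent_tasks: int) -> float:
    """Worst case: every concurrent task buffers one file simultaneously."""
    return file_size_gb * concurrent_tasks

# Original traffic: ~3 MB files, 20 concurrent tasks -> roughly 0.06 GB. Fine.
print(worst_case_heap_gb(0.003, 20))

# New traffic: ~3 GB files, 20 concurrent tasks -> 60 GB of a 64 GB heap.
print(worst_case_heap_gb(3, 20))  # 60
```

The routing fix described above could be done with a RouteOnAttribute processor matching on the standard fileSize FlowFile attribute, for example the expression ${fileSize:gt(1073741824)} (the 1 GiB threshold is an assumed value), sending matches to a TransformXML instance with only a couple of concurrent tasks.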


5 REPLIES

Super Collaborator

Hi @Jonathan Bell,

Can you please have a look at the /var/log/nifi/nifi-app.log file (tail it from the bottom)?

I presume this is because of insufficient heap; the default heap size is about 512 MB. It can be raised to something bigger (4 GB or 8 GB), depending on the memory available on the host.

More on configuring the heap for HDF can be found here, under the section Bootstrap Configuration.

Hope this helps!
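For reference, the heap settings live in NiFi's conf/bootstrap.conf. A minimal sketch, assuming an 8 GB heap (pick a value that fits the host):

```
# conf/bootstrap.conf
# JVM memory settings
java.arg.2=-Xms8g
java.arg.3=-Xmx8g
```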


Thanks, but we had the heap set to 64 GB. I tried downsizing to 32 GB; memory on the machine is 80 GB. I will see how that works. We usually put through about 50,000 files every half hour.

# JVM memory settings
java.arg.2=-Xms32g
java.arg.3=-Xmx32g

Expert Contributor

Try increasing the polling intervals (Run Schedule) on some of the processors. This can reduce CPU load and help keep the UI responsive.


Thanks, but I see that polling mostly applies to input processors, and that doesn't seem to be the issue. Most of the time there is nothing in the input directory, but NiFi is still unresponsive.
