Created on 10-17-2017 04:13 PM - edited 09-16-2022 05:24 AM
Our NiFi flow is unresponsive. I ran nifi.sh status and got:
2017-10-17 12:09:15,040 INFO [main] org.apache.nifi.bootstrap.Command Apache NiFi is running at PID 24007 but is not responding to ping requests
Created 10-17-2017 10:42 PM
Hi @Jonathan Bell,
Can you please have a look at the /var/log/nifi/nifi-app.log file (tail it from the bottom)?
I presume this is caused by insufficient heap; the default heap size is only about 512 MB. It can be adjusted to something bigger (4 GB or 8 GB), depending on how much memory the host has available.
More on configuring the heap for HDF can be found in the documentation, under the Bootstrap Configuration section.
Hope this helps!
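For reference, the heap settings mentioned above live in conf/bootstrap.conf. As a sketch, raising the heap to 4 GB would look like this (the java.arg index numbers shown are the usual defaults; keep whatever indices already exist in your file):

```
# JVM memory settings in conf/bootstrap.conf
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```

Restart NiFi (bin/nifi.sh restart) for the change to take effect.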
Created 10-18-2017 12:51 AM
Thanks, but we already had the heap set to 64 GB. I tried downsizing to 32 GB; memory on the machine is 80 GB, so I will see how that works. We are usually putting through about 50,000 files every half hour.
# JVM memory settings
java.arg.2=-Xms32g
java.arg.3=-Xmx32g
Created 10-20-2017 03:49 PM
Try increasing the polling intervals (Run Schedule) on some of the processors. This can ease the load on both the CPU and the UI.
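The setting lives on each processor's Scheduling tab. A sketch of the change (the values here are illustrative, not a recommendation):

```
Scheduling tab -> Run Schedule
  before: 0 sec   (run as fast as possible; default for timer-driven)
  after:  1 sec   (poll at most once per second)
```

Source-type processors (e.g. anything listing an input directory) benefit most, since they otherwise spin constantly even when there is nothing to pick up.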
Created 10-23-2017 09:17 PM
Thanks, but as I understand it, polling mostly applies to input files, and that doesn't seem to be an issue; most of the time there is nothing in the input directory. The flow is still unresponsive.
Created 10-25-2017 10:13 PM
We have resolved the issue. As usual, in hindsight it seems obvious. One of the processors, a TransformXML, had its thread count set to 20. This seemed fine because most files going through were about 3 MB each, and our Java heap was set to 64 GB. However, new files were introduced that were in the 3 GB range. Simple math: 3 GB x 20 threads = 60 GB used up by that one processor. Since there were usually about 200,000 files flowing through at any one time, memory quickly got exhausted. We added a route-on-attribute step to send the large files to a processor with only a couple of threads, while the smaller files could still filter through the processor with 20 threads. Thanks for all the help and suggestions. @Hans Feldmann
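The arithmetic above can be sketched quickly; the thread counts and file sizes are the ones from this thread, and the helper function is just for illustration:

```python
# Worst-case heap demand when a processor loads whole file contents into
# memory: each concurrent task can hold one file's content at a time.

def worst_case_heap_gb(threads: int, file_size_gb: float) -> float:
    """Upper bound on heap consumed by one processor's concurrent tasks."""
    return threads * file_size_gb

HEAP_GB = 64.0

# Old workload: 20 threads x ~3 MB files -> negligible heap pressure.
small = worst_case_heap_gb(20, 3 / 1024)

# New workload: 20 threads x ~3 GB files -> 60 GB of a 64 GB heap.
large = worst_case_heap_gb(20, 3.0)

print(f"small files: {small:.2f} GB; large files: {large:.0f} GB of {HEAP_GB:.0f} GB heap")
```

In NiFi, the routing itself can be done with a RouteOnAttribute processor using an Expression Language predicate on the standard fileSize attribute, e.g. ${fileSize:gt(1073741824)} to catch files over 1 GB (the threshold here is illustrative).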