Created 02-24-2019 11:20 PM
Hi All,
We have just productionized our NiFi flows, which form a real-time streaming application. We have a cluster of 2 nodes, and each node has 6 processors and 28 GB of memory, of which 24 GB is allocated to the NiFi JVM heap. We regularly see queue sizes reach around 90,000 FlowFiles or more, so I have set the property nifi.queue.swap.threshold to 90000. We also have 200 GB of disk space.
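For reference, here is roughly where those settings live: the swap threshold sits in nifi.properties and the heap size in conf/bootstrap.conf. The java.arg numbering below is the usual default, but it can differ between installs, so treat this as a sketch of our setup rather than exact file contents:

    # nifi.properties -- per-connection swap threshold (NiFi default is 20000)
    nifi.queue.swap.threshold=90000

    # conf/bootstrap.conf -- JVM heap for NiFi, matching the 24 GB described above
    java.arg.2=-Xms24g
    java.arg.3=-Xmx24g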
Despite this configuration, we still get "FlowFile Repository failed to update" errors. My understanding is that FlowFiles are initially tracked in JVM memory, and once a queue grows past the swap threshold the excess is swapped out to disk. Our disk space is also fairly generous at 200 GB, yet we keep seeing this error.
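In case it helps anyone checking the same thing: where the repositories and swap files are actually written is controlled by these properties in nifi.properties. The paths shown are the stock defaults, so adjust them to your install:

    nifi.flowfile.repository.directory=./flowfile_repository
    nifi.content.repository.directory.default=./content_repository
    nifi.provenance.repository.directory.default=./provenance_repository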
Any pointers or suggestion in this space will be of great help.
Created 02-25-2019 12:39 AM
For now we have set nifi.queue.swap.threshold to 150000, and this seems to have done the trick. Loads are performing well and there is no build-up of the queue. Thanks all.
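For anyone hitting the same issue, the change itself is a one-line edit in nifi.properties on each node, followed by a restart:

    # nifi.properties -- raised so queues of this size stay in heap instead of swapping to disk
    nifi.queue.swap.threshold=150000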