Hi All,
We have just moved our NiFi flows, a real-time streaming application, into production. We have a cluster of 2 nodes; each node has 6 processors and 28 GB of memory, and the NiFi JVM heap has been set to 24 GB. We also tend to see queue sizes reaching around 90,000 flow files or more, so I have set the property nifi.queue.swap.threshold to 90000. Each node also has 200 GB of disk space.
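For reference, this is roughly how the heap and swap threshold are configured on our nodes; the heap lives in conf/bootstrap.conf and the swap threshold in conf/nifi.properties (the java.arg indices may differ slightly depending on the NiFi version):

    # conf/bootstrap.conf -- JVM heap size
    java.arg.2=-Xms24g
    java.arg.3=-Xmx24g

    # conf/nifi.properties -- swap flow files to disk beyond this queue depth
    nifi.queue.swap.threshold=90000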
Despite this fairly powerful configuration, we still get "FlowFile Repository failed to update" errors. My understanding is that JVM memory is used for flow file processing initially, and if that gets overused, NiFi spills over to disk. Our disk space is also fairly large at 200 GB, yet we keep seeing this error.
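For completeness, these are the nifi.properties entries that control where the repositories live, shown here with their default values; if they are left at the defaults, the flow file, content, and provenance repositories all end up on the same disk:

    # conf/nifi.properties -- repository locations (defaults shown)
    nifi.flowfile.repository.directory=./flowfile_repository
    nifi.content.repository.directory.default=./content_repository
    nifi.provenance.repository.directory.default=./provenance_repository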
Any pointers or suggestions in this space would be of great help.