Created 05-29-2025 10:15 PM
Hello experts,
We are running NiFi 1.25.0 in Kubernetes as a single pod in cluster mode.
The memory limit for the pod is 40G and the JVM heap for NiFi is configured at 28G.
We are now facing an OOM error once in a while.
When we check System Diagnostics in the NiFi GUI, it shows only 3G usage (heap + non-heap),
but when we check memory usage for the NiFi process at the pod level, it shows very high usage.
What else could be consuming so much memory?
This pod runs only the NiFi container.
Thanks in advance,
Mahendra
Created 06-04-2025 11:36 PM
This is a classic case of off-heap memory consumption in NiFi. The 3G you see in the GUI represents only JVM heap + non-heap memory, but NiFi uses significant additional memory outside the JVM that doesn't appear in those metrics. Next time, could you share your deployment YAML files? That would help with working out a solution.
Root Causes of Off-Heap Memory Usage:
Direct (NIO) buffers used for network and file I/O operations
Memory-mapped files in the content and provenance repositories
Possible Solutions:
1. Reduce JVM Heap Size
# Instead of 28G JVM heap, try:
NIFI_JVM_HEAP_INIT: "16g"
NIFI_JVM_HEAP_MAX: "16g"
This leaves more room (24G) for off-heap usage.
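For reference, a minimal sketch of where these values could go in the Kubernetes manifest, assuming you use the official apache/nifi image's NIFI_JVM_HEAP_* environment variables (adjust the names if your chart or operator differs):
# Sketch only: container spec in the Deployment/StatefulSet
containers:
  - name: nifi
    image: apache/nifi:1.25.0
    env:
      - name: NIFI_JVM_HEAP_INIT
        value: "16g"
      - name: NIFI_JVM_HEAP_MAX
        value: "16g"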
2. Cap Direct Memory
Add a JVM argument:
-XX:MaxDirectMemorySize=8g
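If you pass JVM arguments through conf/bootstrap.conf, a sketch would be (java.arg.20 is a placeholder index; use any number not already taken in your file):
# bootstrap.conf - placeholder index, pick an unused java.arg.N slot
java.arg.20=-XX:MaxDirectMemorySize=8g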
3. Content Repository Tuning
In nifi.properties:
# Limit content repository size
nifi.content.repository.archive.max.retention.period=1 hour
nifi.content.repository.archive.max.usage.percentage=50%
# Keep the standard file-based repository (the default) rather than an in-memory implementation
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
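To confirm how much the content repository and its archive actually hold on disk, a quick check (the path assumes the default layout of the apache/nifi image; adjust to your volume mount):
# Path is an assumption based on the default image layout
kubectl exec -it <nifi-pod> -- du -sh /opt/nifi/nifi-current/content_repository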
4. Provenance Repository Tuning
# Reduce provenance retention
nifi.provenance.repository.max.storage.time=6 hours
nifi.provenance.repository.max.storage.size=10 GB
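If your nifi.properties has been overridden to the older PersistentProvenanceRepository, it is also worth confirming you are on the write-ahead implementation, which is the default in recent 1.x releases and generally lighter on resources (only relevant if you changed the default):
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository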
Long-term Solutions:
Increase the pod memory limit:
resources:
  limits:
    memory: "60Gi"   # Increase from 40Gi
  requests:
    memory: "50Gi"
Enable JVM flags for better monitoring:
-XX:NativeMemoryTracking=summary
-XX:+UnlockDiagnosticVMOptions
-XX:+PrintNMTStatistics
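In bootstrap.conf these would look roughly like this (the indexes are placeholders; use whatever java.arg.N slots are free in your file):
# bootstrap.conf - placeholder indexes
java.arg.21=-XX:NativeMemoryTracking=summary
java.arg.22=-XX:+UnlockDiagnosticVMOptions
java.arg.23=-XX:+PrintNMTStatistics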
Instead of a single large pod, use multiple smaller pods:
# 3 pods with 20Gi each instead of 1 pod with 40Gi
replicas: 3
resources:
  limits:
    memory: "20Gi"
Monitoring Commands:
# Check native memory tracking
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory summary
# Monitor process memory
kubectl top pod <nifi-pod>
# Check memory breakdown
kubectl exec -it <nifi-pod> -- cat /proc/<pid>/status | grep -i mem
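With NativeMemoryTracking enabled, you can also take a baseline and diff it later to see which native memory category is growing (a sketch; <pid> is the NiFi Java process id inside the container):
# Take a baseline, let the flow run for a while, then diff against it
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory baseline
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory summary.diff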
Start with reducing the JVM heap to 16G and implementing the content repository limits. This should immediately reduce OOM occurrences while you plan the longer-term changes. And always remember to mask or scramble any sensitive data before sharing your configuration files.
Happy hadooping
Created 09-26-2025 04:06 AM
@Shelton Thank you for the detailed answer, much appreciated !