Created 04-28-2023 05:20 AM
We upgraded NiFi from 1.18.0 to 1.19.1 and also upgraded Java from 8 to 11. Since the upgrade, we observe NiFi using more memory than allocated, with caching building up inside NiFi. We allocate heap to NiFi with java.arg.2=-Xms1g and java.arg.3=-Xmx50g. The machine has 64 GB RAM, 8 cores, and a 2 TB high-speed SSD. The OS is CentOS 7.9 on ARM64 hardware. We uncommented the garbage collector settings in bootstrap.conf, but that did not help. The database repository, content repository, provenance repository, and flowfile repository are kept on separate disks.
I need help resolving this issue.
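Since the process can grow well beyond -Xmx through off-heap usage (metaspace, thread stacks, direct buffers, GC bookkeeping), one way to see where the memory actually goes is to enable the JVM's Native Memory Tracking and compare its accounting against the process RSS. A minimal sketch, assuming the standard bootstrap.conf argument convention (the arg number 20 is an arbitrary unused slot, not an official one):

```properties
# bootstrap.conf — enable JVM Native Memory Tracking (adds a small overhead)
# "20" is just an unused java.arg slot chosen for illustration
java.arg.20=-XX:NativeMemoryTracking=summary
```

After restarting NiFi, `jcmd <nifi-pid> VM.native_memory summary` prints reserved/committed memory per category (Java Heap, Class, Thread, GC, ...), which can be compared with the resident size from `ps -o rss= -p <nifi-pid>` to tell heap growth apart from off-heap growth.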
nifi-app.log excerpt for reference:
id=1682607799237-15502, container=contS1R1, section=142], offset=605303, length=7664],offset=0,name=47ffd206-8189-4fbb-9d19-2b82113da956,size=7664], StandardFlowFileRecord[uuid=a06c6ecd-167f-41d6-9ec5-1bcadc37fcf6,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1682607797216-15364, container=contS1R1, section=4], offset=874085, length=11888],offset=0,name=675ad88e-c9e1-4b2f-85a0-c2b842e6c35b,size=11888], StandardFlowFileRecord[uuid=6b21da95-95d9-46f5-8510-4710ef197819,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1682607797216-15364, container=contS1R1, section=4], offset=864517, length=9568],offset=0,name=3b608669-9dee-4286-8865-695ca9f60105,size=9568], StandardFlowFileRecord[uuid=dadb08dd-ee7d-4c70-ade8-14e6b42f89e5,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1682607799314-15505, container=contS1R2, section=145], offset=0, length=12576],offset=0,name=52a965fe-4d18-499f-9f88-3ab1c84f836d,size=12576]] (40.72 KB) from Peer[url=nifi://212.76.88.137:8445] in 2625 milliseconds at a rate of 15.51 KB/sec
2023-04-27 15:03:22,539 INFO [FileSystemRepository Workers Thread-2] o.a.n.c.repository.FileSystemRepository Successfully archived 23 Resource Claims for Container contS1R2 in 2 millis
2023-04-27 15:03:22,663 INFO [FileSystemRepository Workers Thread-2] o.a.n.c.repository.FileSystemRepository Successfully archived 38 Resource Claims for Container contS1R1 in 124 millis
2023-04-27 15:03:22,740 INFO [Timer-Driven Process Thread-3586] o.a.n.c.r.WriteAheadFlowFileRepository Successfully swapped out 10000 FlowFiles from FlowFileQueue[id=ad7834f7-38a1-16f4-4961-0e68a6faca29] to Swap File /data1/flowfile_repository/swap/1682607800425-ad7834f7-38a1-16f4-4961-0e68a6faca29-2ca046bb-321a-43f0-8ffc-12f6eebb77e3.swap
2023-04-27 15:03:22,788 INFO [Timer-Driven Process Thread-2522] o.a.n.r.p.s.SocketFlowFileServerProtocol SocketFlowFileServerProtocol[CommsID=ce5942ca-30bc-44c3-80bd-07ee6391e5cf] Successfully received 22 FlowFiles (455.78 KB) from Peer[url=nifi://212.76.88.137:25294] in 4740 milliseconds at a rate of 96.14 KB/sec
2023-04-27 15:03:22,838 INFO [Timer-Driven Process Thread-1165] o.a.n.r.p.s.SocketFlowFileServerProtocol SocketFlowFileServerProtocol[CommsID=eac02335-3ac7-42bd-8779-4253e20fd36f] Successfully received 25 FlowFiles (168.09 KB) from Peer[url=nifi://212.76.88.137:9047] in 1217 milliseconds at a rate of 138.06 KB/sec
Created 04-28-2023 06:56 AM
hi @Amit_barnwal,
First of all, why are you using java.arg.2 and java.arg.3 with such a big difference between them? Remember that Xms is the initial memory allocation and Xmx is the maximum memory allocation, and both refer to the HEAP memory. In addition, the minimum recommended size for your heap is 4 GB.
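A common convention is to set Xms and Xmx to the same value, so the heap is fully committed at startup and the JVM never resizes it under load. A sketch for a 64 GB machine, reusing the java.arg numbering from the question (the 16g figure is an illustrative choice that leaves RAM for the repositories and OS page cache, not a measured recommendation):

```properties
# bootstrap.conf — equal initial and maximum heap size
java.arg.2=-Xms16g
java.arg.3=-Xmx16g
```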
Have a look at the following article, which covers everything you need in terms of configuring a NiFi cluster: https://community.cloudera.com/t5/Community-Articles/HDF-CFM-NIFI-Best-practices-for-setting-up-a-hi...
Now, regarding the fact that your NiFi instance is eating a lot of RAM: you need to know that most processors fall into one of two categories, RAM-intensive or CPU-intensive. If your workflow contains many RAM-intensive processors, it is normal to consume a lot of the available RAM.
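The log excerpt above also shows FlowFiles being swapped out to disk, which happens when a connection's queue exceeds the swap threshold; FlowFile attributes for queued FlowFiles are held in heap, so deep queues are a common source of memory pressure. The relevant nifi.properties setting, shown at its default value purely for illustration:

```properties
# nifi.properties — FlowFiles beyond this per-queue count are swapped to disk,
# trading heap usage for disk I/O on the flowfile repository
nifi.queue.swap.threshold=20000
```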
PS: it is not really recommended to assign so much memory to your heap 🙂
Created 05-02-2023 10:06 AM
@Amit_barnwal Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
Regards,
Diana Torres,