NiFi Nodes getting disconnected frequently

New Contributor

I have a 5-node NiFi cluster running about 3,000 processors across 150+ pipelines. Most of them cover simple use cases such as reading data from an RDBMS or a file directory, applying some transformations, and writing the results to a Kudu table or HDFS. In some use cases we also make HTTP calls using the InvokeHTTP and Jolt transformation processors.

 

We are using the PutKudu processor in many pipelines, and we have also used some custom processors (like ExecuteGroovyScript and GetSMPP).

Please find below the cluster details:

Each node has 24 GB of RAM.

Two nodes have 16 cores, another two nodes have 32 cores, and the remaining node has 48 cores.

 

 

Most of the time I don't see many files in the queues, and the active threads are fewer than 50 on each node, yet we still find the NiFi UI slow. Each node is regularly consuming more than 80% of its JVM heap space, and CPU utilization also goes beyond 80%.

Can someone review my configuration or guide me here to resolve this issue?
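
To see whether the UI slowness and disconnections line up with heap and CPU pressure, one thing that may help is polling NiFi's system-diagnostics REST endpoint on each node and watching heap utilization over time. A minimal sketch, assuming an unsecured cluster with hypothetical node addresses (field names follow recent NiFi versions and may differ in yours):

```python
import requests

# Hypothetical node addresses; adjust to your cluster (and add auth on a secured cluster).
NODES = ["http://nifi-node1:8080", "http://nifi-node2:8080"]

for node in NODES:
    # /nifi-api/system-diagnostics reports JVM heap and system load for the node answering the call.
    resp = requests.get(f"{node}/nifi-api/system-diagnostics", timeout=10)
    resp.raise_for_status()
    snap = resp.json().get("systemDiagnostics", {}).get("aggregateSnapshot", {})
    print(node,
          "heap:", snap.get("heapUtilization"),
          "used/max:", snap.get("usedHeap"), "/", snap.get("maxHeap"),
          "load avg:", snap.get("processorLoadAverage"))
```

If heap utilization stays above 80% even when the queues are nearly empty, the slowness is more likely garbage-collection pressure than flow backlog.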

 

3 REPLIES

Super Guru

@Vinay91 ,

 

Do you see any errors in the nifi-app.log file?

Do you have a subscription with Cloudera Support? This is something the support team could help you quickly resolve.

 

Cheers,

André

--
Was your question answered? Please take some time to click on "Accept as Solution" below this post.
If you find a reply useful, say thanks by clicking on the thumbs up button.

New Contributor

There is no error in nifi-app.log. The only thing I can see is the warning message "Memory Pool 'PS Old Gen' has exceeded the configured Threshold of 80%, having used 15.49 GB / 16 GB (96.84%)" from the reporting task.
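
That warning typically comes from a memory-monitoring reporting task, and 15.49 GB used out of a 16 GB heap means the old generation is essentially full, so the JVM will spend long stretches in garbage collection; those pauses alone can make the UI sluggish and can cause nodes to miss cluster heartbeats and disconnect. (The heap ceiling itself is set by the -Xms/-Xmx java.arg entries in conf/bootstrap.conf.) A small sketch to gauge how often the threshold is breached, assuming a typical log location and the message format quoted above:

```python
import re
from collections import Counter

# Assumed path; point this at your node's logs directory.
LOG_FILE = "/opt/nifi/logs/nifi-app.log"

# Matches the warning quoted above; exact wording can vary between NiFi versions.
pattern = re.compile(
    r"Memory Pool '(?P<pool>[^']+)' has exceeded the configured Threshold"
    r".*\((?P<pct>[\d.]+)%\)"
)

breaches = Counter()
worst = 0.0
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            breaches[m.group("pool")] += 1
            worst = max(worst, float(m.group("pct")))

print("threshold breaches per pool:", dict(breaches))
print(f"worst utilization seen: {worst:.2f}%")
```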

 

New Contributor

Yes, we have a Cloudera subscription. We contacted the support team, and they asked us to set 'Maximum Timer Driven Thread Count' to 64. Earlier we were using 280. We tried setting it to 160, but it didn't help, so we reverted it back to 280.
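
For context, the commonly cited guidance is to size Maximum Timer Driven Thread Count at roughly 2-4 times the core count of a single node, and since the setting applies to every node it is usually sized against the smallest one; 64 matches the 16-core nodes, whereas 280 threads on a 16-core machine mostly adds context switching on top of the GC pressure. The value is normally changed in the UI under Controller Settings, but as a sketch it can also be set through the REST API (assuming an unsecured node and the ControllerConfigurationEntity shape used by recent NiFi versions):

```python
import requests

# Hypothetical node address; on a secured cluster you would also need a token or client certificate.
BASE = "http://nifi-node1:8080/nifi-api"

# Read the current controller configuration so its revision can be echoed back.
current = requests.get(f"{BASE}/controller/config", timeout=10).json()

payload = {
    "revision": current["revision"],                 # optimistic-locking revision from the GET
    "component": {"maxTimerDrivenThreadCount": 64},  # roughly 2-4x the cores of the smallest node
}
resp = requests.put(f"{BASE}/controller/config", json=payload, timeout=10)
resp.raise_for_status()
print("maxTimerDrivenThreadCount is now:",
      resp.json()["component"]["maxTimerDrivenThreadCount"])
```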