HandleHttpRequest processor stops receiving requests for 10-15 seconds, then starts again

Master Collaborator

Hello Experts,

 

I have HandleHttpRequest processors that work fine when I start my load test, but after 100K+ records they suddenly stop taking any more requests (and the processor does not emit any FlowFiles either) for 10-15 seconds, and then resume again!

 

Any idea what the reason could be? It does not show anything in the logs either.

 

Thanks

Mahendra

3 REPLIES

Super Guru

@hegdemahendra ,

 

Have you checked the NiFi logs? Are there any errors or warnings in there?

 

Cheers,

André

 

--
Was your question answered? Please take some time to click on "Accept as Solution" below this post.
If you find a reply useful, say thanks by clicking on the thumbs up button.

Master Collaborator

Unfortunately, I don't find anything.
I also enabled the DEBUG logger for the HandleHttpRequest processors, but there is nothing there either.

Super Mentor

@hegdemahendra 
1. Do you see any logging related to the content repository? Perhaps something about NiFi not allowing writes to the content repository while waiting on archive clean-up? (A quick way to scan for this is sketched after this list.)

2. Is any outbound connection from the HandleHttpRequest processor red at the time of the pause? That indicates backpressure is being applied, which stops the source processor from being scheduled until the backpressure clears (see the corresponding sketch after this list).

3. How large is your Timer Driven thread pool? This is the pool of threads that scheduled components draw from. If it is set to 10 and all 10 threads are currently in use by other components, the HandleHttpRequest processor, while scheduled, will be waiting for a free thread from that pool before it can execute. Adjusting the "Maximum Timer Driven Thread Count" requires careful consideration of the average CPU load on every node in your NiFi cluster, since the same value is applied to each node separately. A general starting pool size is 2 to 4 times the number of cores on a single node (see the sizing sketch after this list). From there, monitor the CPU load average across all nodes and use the node with the highest load average to determine whether you can add more threads to the pool. If a single node consistently has a much higher CPU load average, take a closer look at that server: does it have other services running on it that are not running on the other nodes? Does it consistently hold disproportionately more FlowFiles than the other nodes? (That is typically a result of dataflow design not handling FlowFile load-balancing/redistribution optimally.)

4. How many concurrent tasks are set on your HandleHttpRequest processor? The concurrent tasks are responsible for obtaining threads (one per concurrent task, if available) to read requests from the container queue and create the FlowFiles. Perhaps the requests come in so fast that there are not enough available threads to keep the container queue from filling up, which blocks new requests (a sketch after this list illustrates this).
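
For point 1, here is a minimal sketch (plain Python) for scanning nifi-app.log for content-repository/archive related messages around the time of the pause. The log path and the exact message wording are assumptions and vary by install and NiFi version, so adjust both:

```python
#!/usr/bin/env python3
# Minimal sketch: scan nifi-app.log for content-repository / archive
# clean-up related messages. Log path and message wording are assumptions.
import re
import sys

LOG_FILE = sys.argv[1] if len(sys.argv) > 1 else "logs/nifi-app.log"

# Keywords that typically appear when the content repository is waiting on
# archive clean-up; the exact text differs between NiFi versions.
PATTERN = re.compile(r"content repository|archive clean|FileSystemRepository", re.IGNORECASE)

with open(LOG_FILE, errors="replace") as log:
    for line in log:
        if PATTERN.search(line):
            print(line.rstrip())
```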
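
For point 2, one way to catch a backpressured (red) connection at the moment of the pause is to poll the flow status over the REST API. This sketch assumes an unsecured NiFi reachable at http://localhost:8080; on a secured cluster you would also need an Authorization header, and the field names are based on the standard process-group status response, so they may differ slightly between versions:

```python
#!/usr/bin/env python3
# Minimal sketch: list connections whose FlowFile queues are close to their
# back pressure object threshold. URL, port, and field names are assumptions.
import json
import urllib.request

NIFI_URL = "http://localhost:8080"  # adjust to your environment

def walk(group_snapshot):
    """Yield every connection status snapshot in this group and its children."""
    for conn in group_snapshot.get("connectionStatusSnapshots", []):
        yield conn.get("connectionStatusSnapshot", {})
    for child in group_snapshot.get("processGroupStatusSnapshots", []):
        yield from walk(child.get("processGroupStatusSnapshot", {}))

url = f"{NIFI_URL}/nifi-api/flow/process-groups/root/status?recursive=true"
with urllib.request.urlopen(url) as resp:
    status = json.load(resp)

root = status["processGroupStatus"]["aggregateSnapshot"]
for snap in walk(root):
    pct = snap.get("percentUseCount") or 0
    if pct >= 80:  # close to (or at) the back pressure object threshold
        print(f'{snap.get("name")}: {snap.get("queuedCount")} queued ({pct}% of threshold)')
```

Running this once a second during the load test and seeing a connection sit at 100% while HandleHttpRequest is paused would point to backpressure as the cause.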
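
For point 3, the sizing rule above as a quick calculation. This is only a starting point; the pool size is applied to each node separately, so size it against your busiest node:

```python
#!/usr/bin/env python3
# Minimal sketch of the sizing rule: start the Maximum Timer Driven Thread
# Count at 2-4x the cores of a single node, then watch CPU load average
# before growing the pool any further.
import os

cores = os.cpu_count() or 1
print(f"Cores on this node          : {cores}")
print(f"Suggested starting pool size: {2 * cores} - {4 * cores}")

# Unix-only: compare load average against the core count. A sustained load
# average well above `cores` means the node is already CPU-bound and the
# pool should not be grown further.
one, five, fifteen = os.getloadavg()
print(f"Load average (1/5/15 min)   : {one:.2f} / {five:.2f} / {fifteen:.2f}")
```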
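
For point 4, a toy illustration (plain Python, not NiFi code) of why clients stall when requests arrive faster than the concurrent tasks can drain the container queue: once the bounded queue is full, every new request has to wait for space, which is what a client sees as a pause. The queue size, task count, and delays below are made up purely for demonstration:

```python
#!/usr/bin/env python3
# Toy illustration: a bounded queue (standing in for HandleHttpRequest's
# container queue) fed faster than a single consumer (a concurrent task)
# can drain it. put() starts blocking once the queue is full.
import queue
import threading
import time

container_queue = queue.Queue(maxsize=50)   # stands in for the container queue

def concurrent_task():
    while True:
        container_queue.get()               # one concurrent task pulling requests
        time.sleep(0.01)                    # pretend FlowFile creation takes 10 ms
        container_queue.task_done()

threading.Thread(target=concurrent_task, daemon=True).start()

blocked, worst = 0, 0.0
start = time.monotonic()
for i in range(200):                        # requests arriving as fast as possible
    t0 = time.monotonic()
    container_queue.put(i)                  # blocks once the container queue is full
    wait = time.monotonic() - t0
    if wait > 0.001:
        blocked += 1
        worst = max(worst, wait)
print(f"{blocked} of 200 requests waited for queue space (worst wait: {worst * 1000:.0f} ms)")
print(f"total time: {time.monotonic() - start:.1f} s")
```

Adding more consumers (concurrent tasks) or a larger queue changes how quickly the blocking starts, which is the knob this point is describing.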

Hope the above helps you get to the root of your issue.

If this response assisted with your query, please take a moment to log in and click on "Accept as Solution" below this post.

Thank you,

Matt