Support Questions




I am using the HandleHttpRequest processor to ingest my data. In this processor, I configured the Container Queue Size property as 30K. Can it cause a heap memory issue? The default value is 50.

Also, there is one known issue in the HandleHTTPRequest processor reported here: https://issues.apache.org/jira/browse/NIFI-4959, and the fix is available in HDF 3.2. Currently, we are not in a position to upgrade our HDF version. Is there any alternative to the HandleHTTPRequest processor?

3 REPLIES

Master Mentor

@Shailendra Lohia

-

The internal queue, whose size is determined by what is configured for "Container Queue Size", does exist within NiFi's Java heap space. Each incoming HTTP request is placed there, and NiFi generates a FlowFile for each request based on the run schedule of the HandleHTTPRequest processor. If NiFi is not creating these FlowFiles fast enough, that queue can grow all the way to 30K. The amount of impact that will have on your Java heap really depends on the size of those individual requests.
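As a rough illustration of that last point, here is a back-of-envelope calculation (plain Python, with made-up average request sizes) showing how the worst-case heap footprint of a completely full container queue scales with request size:

```python
# Rough worst-case heap estimate for a full HandleHTTPRequest container queue.
# The request sizes below are made-up examples; substitute your own averages.

QUEUE_SIZE = 30_000  # configured "Container Queue Size"

for avg_request_bytes in (10 * 1024, 100 * 1024, 1024 * 1024):  # 10 KB, 100 KB, 1 MB
    worst_case_bytes = QUEUE_SIZE * avg_request_bytes
    print(f"avg request {avg_request_bytes / 1024:>5.0f} KB "
          f"-> ~{worst_case_bytes / (1024 ** 3):.1f} GB of heap if the queue fills")
```

With small requests a 30K queue is fairly harmless; with large requests it can dwarf a typical NiFi heap, which is why the request size matters more than the queue size itself.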

-

The purpose of this queue is to provide a way to pass requests from the embedded Jetty server, which is started when you start the HandleHTTPRequest processor, into NiFi. So 10K, 20K, or 30K will not matter much if the HandleHTTPRequest processor can't create FlowFiles fast enough from that internal queue. The Concurrent Tasks setting on the HandleHTTPRequest processor equates to the max number of possible concurrent executions/threads reading from that internal queue and creating FlowFiles.
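As a loose analogy only (this is not NiFi code), the relationship between the container queue and Concurrent Tasks behaves like a bounded queue drained by a fixed pool of consumer threads; when the consumers fall behind, the queue fills and new arrivals are rejected:

```python
# Loose analogy: a bounded queue fed faster than its consumers can drain it.
# CONTAINER_QUEUE_SIZE and CONCURRENT_TASKS stand in for the two processor settings.
import queue
import threading
import time

CONTAINER_QUEUE_SIZE = 5   # stand-in for "Container Queue Size"
CONCURRENT_TASKS = 2       # stand-in for "Concurrent Tasks"

container_queue = queue.Queue(maxsize=CONTAINER_QUEUE_SIZE)

def consumer():
    while True:
        container_queue.get()
        time.sleep(0.1)    # simulate FlowFile creation being the slow step
        container_queue.task_done()

for _ in range(CONCURRENT_TASKS):
    threading.Thread(target=consumer, daemon=True).start()

rejected = 0
for i in range(50):        # requests arriving much faster than the drain rate
    try:
        container_queue.put_nowait(f"request-{i}")
    except queue.Full:
        rejected += 1      # NiFi would answer these with "service unavailable"

container_queue.join()
print(f"rejected while the queue was full: {rejected}")
```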

-

I tend to look at that internal container queue as a way to buffer incoming surges of requests. If the rate of incoming requests is steady, you need to make sure NiFi can keep up, or requests will eventually start getting service unavailable responses once the internal queue fills.
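On the sending side, clients can treat that service unavailable response as a signal to back off and retry. A minimal sketch, assuming a hypothetical HandleHTTPRequest listener at http://nifi-host:8081/ and using the third-party requests library:

```python
# Minimal client-side backoff sketch for a HandleHTTPRequest endpoint.
# The URL below is a made-up example; point it at your own listener.
import time

import requests  # pip install requests

NIFI_ENDPOINT = "http://nifi-host:8081/"  # hypothetical HandleHTTPRequest listener

def send_with_backoff(payload: bytes, max_attempts: int = 5) -> bool:
    delay = 1.0
    for _ in range(max_attempts):
        resp = requests.post(NIFI_ENDPOINT, data=payload, timeout=10)
        if resp.status_code == 503:       # container queue full: back off and retry
            time.sleep(delay)
            delay *= 2
            continue
        resp.raise_for_status()
        return True
    return False

if __name__ == "__main__":
    send_with_backoff(b'{"example": "record"}')
```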

-

The bug you mentioned, while fixed in Apache NiFi 1.6.0, was also included as a fix in HDF 3.1.2, which has been released.

The only alternative to the HandleHTTPRequest processor is the ListenHTTP processor. It is not going to provide you the same robust set of capabilities.
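For reference, sending data to a ListenHTTP processor is just an HTTP POST to its listening port and Base Path; a rough sketch with a made-up host and port, and the values matched to whatever your processor is configured with:

```python
# Minimal sketch of posting to a ListenHTTP processor.
# Host, port, and base path below are examples; match them to your processor config.
import requests  # pip install requests

LISTEN_HTTP_URL = "http://nifi-host:9090/contentListener"  # "Listening Port" + "Base Path"

resp = requests.post(
    LISTEN_HTTP_URL,
    data=b'{"example": "record"}',
    headers={"Content-Type": "application/json"},
    timeout=10,
)
resp.raise_for_status()  # a 2xx response indicates the FlowFile was created
```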

Refer to https://nifi.apache.org/docs/nifi-docs/ for the latest processor documentation, or look at the embedded docs in the release you are currently running for more details.

-

Thank you,

Matt

-

If you found this answer addressed your original question, please take a moment to log in and click "Accept" below the answer.

Master Mentor

@Shailendra Lohia

-

*** Forum tip: Please try to avoid responding to an answer by starting a new answer. Instead, use "add comment" to respond to an existing answer. There is no guaranteed order to different answers, which can make following a response thread difficult, especially when multiple people are trying to assist you.

-

When you start the HandleHTTPRequest processor, it spins up an embedded Jetty web server. It is less a matter of determining how many stacked-up requests you have and more a question of "Am I reading off that internal queue fast enough to keep from hitting the max container size?"

-

How many unique source systems may be hitting this HandleHTTPRequest end-point concurrently?
I would suggest setting the "Container Queue Size" to at least that value, or double that value, to start with.

Then monitor for issues and adjust the processor settings from there.
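One way to watch the heap side of this is NiFi's system-diagnostics REST endpoint. A rough sketch, assuming an unsecured single-node NiFi at a made-up host/port; the exact JSON field names can differ between releases, so verify against the REST API docs for your version:

```python
# Rough heap-monitoring sketch against NiFi's system-diagnostics REST endpoint.
# Assumes an unsecured, single-node NiFi at the URL below; adjust for your environment
# and verify the field names against the REST API docs for your release.
import requests  # pip install requests

NIFI_API = "http://nifi-host:8080/nifi-api"  # made-up host/port

resp = requests.get(f"{NIFI_API}/system-diagnostics", timeout=10)
resp.raise_for_status()
snapshot = resp.json()["systemDiagnostics"]["aggregateSnapshot"]

print("used heap:       ", snapshot.get("usedHeap"))
print("max heap:        ", snapshot.get("maxHeap"))
print("heap utilization:", snapshot.get("heapUtilization"))
```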

-

I am not familiar with a way to get a value for the current queue size. Even if that were possible, it is highly likely the value would have changed by the time you got the result.

-

Thank you,

Matt


Thank you @Matt Clarke for the detailed answer! It really helped me to understand the use of the "Container Queue Size" configuration. Though I have one more question: is there a way I can find how many requests are currently in the queue? This would help me determine the queue size needed for my application.

Thanks for your help!

Regards

Shailendra