
Apache NiFi ListenSyslog drops incoming data when InvokeHTTP tries to resend failed data

New Contributor

Hello Community,
I have a configuration where ListenSyslog is listening and passes the received data to InvokeHTTP. I also use a funnel to route HTTP failures back to InvokeHTTP for another attempt. The problem occurs when InvokeHTTP tries to resend data: it consumes so many threads that ListenSyslog starts dropping incoming data. I do not want to lose any incoming data, and at the same time I want to be sure that the data will be sent. Have you seen a similar problem? Is there any way to limit the processors?

Thanks

1 ACCEPTED SOLUTION

Super Guru

@Curry5103 ,

 

ListenSyslog can receive a very large amount of data, and it may be hard for an InvokeHTTP processor to match that throughput.

 

To avoid dropping syslog messages, especially over UDP, you will probably need to provide enough buffer for those messages to decouple the receipt of the messages from the InvokeHTTP execution. You can do that in NiFi by increasing the maximum queue size limits, or by using something like Kafka, where you can temporarily store the syslog messages and have another flow read from Kafka and call InvokeHTTP for each of them.
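The decoupling idea above can be sketched outside NiFi (this is an illustrative Python analogy, not NiFi code): a bounded buffer sits between a receiver and a slower, retry-prone sender, so receipt continues even while sends lag, and data is lost only if the buffer itself fills up — which is exactly what a larger queue or an intermediate Kafka topic guards against.

```python
import queue
import threading

# Illustrative sketch only: a bounded buffer decouples fast message
# receipt from a slower sender, analogous to ListenSyslog's internal
# message queue (or a Kafka topic between two flows).

buf = queue.Queue(maxsize=10_000)  # analogous to "Max Size of Message Queue"
dropped = 0
sent = []

def receive(messages):
    """Receiver: never blocks on the sender; drops only when the buffer is full."""
    global dropped
    for msg in messages:
        try:
            buf.put_nowait(msg)
        except queue.Full:
            dropped += 1  # this is the data loss a bigger buffer prevents

def drain():
    """Sender: may be slow or retry, but it only drains the buffer."""
    while True:
        msg = buf.get()
        if msg is None:
            break
        sent.append(msg)  # stand-in for a (possibly retried) HTTP POST

t = threading.Thread(target=drain)
t.start()
receive(range(100))
buf.put(None)  # signal the sender to stop
t.join()
print(len(sent), dropped)  # 100 delivered, 0 dropped
```

The key property is that `receive` never waits on the sender; it either buffers the message or counts a drop. Sizing the buffer (or offloading to Kafka) trades memory/storage for resilience to slow deliveries.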

 

Cheers,

André

 

--
Was your question answered? Please take some time to click on "Accept as Solution" below this post.
If you find a reply useful, say thanks by clicking on the thumbs up button.



Master Collaborator

Regarding this line: "The problem occurs when InvokeHttp tries to resend data and it consumes so many threads that ListenSyslog starts dropping incoming data"

My understanding is that InvokeHTTP will run with at most the number of threads set by its Concurrent Tasks property; beyond that it will not use any more threads. At the same time, ListenSyslog runs with its own Concurrent Tasks setting and thread limit, and it can be further tuned to hold more requests via its Max Size of Message Queue, Max Size of Socket Buffer, and Max Number of TCP Connections properties.
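The cap described above can be illustrated with a fixed-size worker pool (a Python analogy, not NiFi internals): however many items are queued, no more than the configured number of tasks run at once, just as a processor's Concurrent Tasks setting bounds its thread usage.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch: a processor's "Concurrent Tasks" behaves like a fixed-size pool.
# Even with many queued items, at most max_workers run concurrently.

active = 0
peak = 0
lock = threading.Lock()

def task(_):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # stand-in for a slow HTTP call
    with lock:
        active -= 1

with ThreadPoolExecutor(max_workers=4) as pool:  # Concurrent Tasks = 4
    list(pool.map(task, range(20)))

print(peak)  # peak concurrency never exceeds the pool size of 4
```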

 

To help with your initial analysis of the interaction between InvokeHTTP and ListenSyslog, could you provide more details about your flow design when the issue occurs? Screenshots of the flow and the configuration settings for ListenSyslog and InvokeHTTP would help us understand.

 

Thank You


New Contributor

The issue was not related to the size of the queue. Instead, when the destination we send those logs to had connectivity issues (which caused quite a lot of timeout errors), NiFi used a significant number of threads to retry sending the logs to the InvokeHTTP destination. This high thread usage caused ListenSyslog to stop functioning properly: it dropped the logs it was supposed to receive.
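As a general pattern (not something stated in this thread), backing off between retries keeps an unreachable destination from monopolizing threads in a tight retry loop. A minimal sketch, using a hypothetical `flaky` sender that fails twice before succeeding:

```python
import time

def send_with_backoff(send, msg, max_attempts=5, base_delay=0.1):
    """Retry a failing send with exponential backoff instead of
    retrying immediately, which is what ties up threads."""
    for attempt in range(max_attempts):
        try:
            return send(msg)
        except IOError:
            # Delay grows 0.1s, 0.2s, 0.4s, ... so a dead destination
            # doesn't keep every worker busy spinning on retries.
            time.sleep(base_delay * (2 ** attempt))
    raise IOError("destination unreachable after retries")

# Hypothetical flaky sender: times out twice, then succeeds.
calls = {"n": 0}
def flaky(msg):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("timeout")
    return "ok"

result = send_with_backoff(flaky, "log line", base_delay=0.01)
print(result, calls["n"])  # ok 3
```

In a NiFi flow, a similar effect can come from the processor's Penalty Duration setting, so FlowFiles looped back on failure wait before being retried rather than being reattempted immediately.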

Community Manager

@Curry5103, Has any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. 



Regards,

Vidya Sargur,
Community Manager


Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.