Member since
07-30-2019
3467
Posts
1641
Kudos Received
1016
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 191 | 05-04-2026 05:20 AM |
|  | 450 | 03-23-2026 05:44 AM |
|  | 341 | 02-18-2026 09:59 AM |
|  | 590 | 01-27-2026 12:46 PM |
|  | 1024 | 01-20-2026 05:42 AM |
05-08-2026
11:46 AM
Thank you. Setting the load balancer timeout to 25 seconds worked for me with NiFi 2.8.
05-07-2026
08:48 PM
@AlokKumar Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
05-07-2026
08:47 PM
@fnimi Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
05-04-2026
05:41 AM
@oka Perhaps others in the community may have additional suggestions here, but since "-" is not a valid character in JMS property names, you would need to use an AMQP processor to support these headers. As mentioned before, there is an open, unassigned jira (https://issues.apache.org/jira/browse/NIFI-14670) for adding AMQP 1.0 support to the ConsumeAMQP processor. That jira points to using the Qpid JMS Client in ConsumeJMS, and as you experienced, it works but still has limitations. Those limitations impact exactly these properties with "-" in the name.

I would suggest adding your experience with the Qpid JMS client, and the impact its limitations have on the two headers you require, to the above jira to help push the Apache community toward adding AMQP 1.0 support to the AMQP-specific processors.

Additionally, there is this jira (https://issues.apache.org/jira/browse/QPID-4992), where an individual reported some success preserving the content-type header by using the ActiveMQ JMS API instead of the Qpid AMQP JMS API. You may want to give that jira a read and try this for yourself.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
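As an aside, the naming restriction can be illustrated with a short sketch. This is a simplified ASCII approximation of the JMS rule that property names must be valid Java identifiers (the actual spec also admits some non-ASCII identifier characters); the function name is made up for this illustration:

```python
import re

# Simplified ASCII approximation of the JMS rule: a property name must start
# with a letter, underscore, or '$', followed by letters, digits, '_', or '$'.
# A hyphen is never allowed, which is why headers like "content-type" fail.
_JMS_NAME = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$]*$")

def is_valid_jms_property_name(name: str) -> bool:
    """Return True if `name` could legally be set as a JMS message property."""
    return bool(_JMS_NAME.match(name))

print(is_valid_jms_property_name("contentType"))   # True
print(is_valid_jms_property_name("content-type"))  # False: '-' is not allowed
```

This is why AMQP headers containing "-" cannot survive a round trip through a JMS-based processor without being renamed or dropped.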
05-04-2026
05:20 AM
@nisaar I expect that the retry set on the Success relationship out of ListSMB is impacting your scheduling. I suspect that retry is blocking successful executions until both retry attempts have been made, which aligns with the two skipped runs you are seeing. As I mentioned before, you should not be setting "retry" on any success relationship. This is an anti-pattern that will delay processing of every successful execution for the duration of the retries (each retry occurs at one of the processor's scheduled execution times).

Note: Scheduling of a NiFi component does not mean immediate execution. Execution depends on the availability of threads to service it. NiFi has a "max timer driven thread count" configuration that establishes the thread pool from which all scheduled component threads come. So things like the number of running components, the number of concurrent tasks set on a given component, CPU-intensive components, etc. can impact when a "scheduled" component is actually given a thread from that pool to execute. However, the thread pool would only explain your symptoms if the processor had been scheduled and then never received a thread for the full 1.5 hours since it was initially scheduled, which I doubt is the case here. The more logical cause, given the consistent behavior, is the "retry" you have set on Success from ListSMB.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
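To see why two runs get skipped, here is a toy arithmetic model (not NiFi code) of a processor scheduled every 30 minutes with retry (Number of Attempts: 2) on its success relationship, assuming each retry attempt consumes one scheduled execution slot as described above:

```python
# Toy model: a 30-minute schedule where every successful execution is followed
# by 2 retry attempts, each occurring at the next scheduled execution time.
SCHEDULE_MINUTES = 30
RETRY_ATTEMPTS = 2

def next_effective_run(last_run_minute: int) -> int:
    """Minute offset of the next run that actually does new work."""
    slots_consumed = 1 + RETRY_ATTEMPTS  # the run itself plus its retries
    return last_run_minute + slots_consumed * SCHEDULE_MINUTES

runs = [0]
for _ in range(3):
    runs.append(next_effective_run(runs[-1]))
print(runs)  # [0, 90, 180, 270]: the 30-minute schedule behaves like 90 minutes
```

Under this model, exactly two scheduled slots are "skipped" between productive runs, matching the observed behavior.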
04-28-2026
06:18 AM
1 Kudo
The issue is resolved after the vendor whitelisted the IPs. Thanks!
04-20-2026
04:54 AM
1 Kudo
@RohanBajaj This is a very old thread about a very old version of Apache NiFi, from the early days of NiFi's load-balanced connections capability. I recommend you start a new community question with the specifics of the issue you are having to get the best possible assistance from the community members. Thank you, Matt
04-10-2026
07:00 AM
@donaldo71 I have not been able to identify a known issue that aligns with the description you have shared. That is an interesting sequence of events on a single FlowFile (SEND followed by clones).

Can you share the "Relationships" configuration of your PutSQL processor? Make sure you have not checked the "retry" box on the "success" relationship. Something you might want to try, to see if the same issue persists, is to check the "retry" box on the "retry" relationship. This allows the original FlowFile to remain in the inbound connection for up to the configured number of retry attempts (default 10) before being routed to the retry relationship. I'd be curious about your observations after the above configuration change.

Would you be willing to download the flow definition JSON for this dataflow and share it?

Is this your full "SQL Statement" set in your PutSQL processor?

UPDATE tbl SET status = 'proceed', startDate = GETDATE() WHERE messageId = ${messageId}

Where are you utilizing the two attributes that go missing? Can you share your UpdateAttribute processor configuration?

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
04-06-2026
01:34 PM
@AlokKumar You are correct that the ConsumePOP3 processor does not support an inbound connection. Even if it did, the username and password fields do not support NiFi Expression Language, which would be needed to pass either of those values in from a source FlowFile. There aren't any other native processors that support this dynamic-credentials use case. You would need to create a custom script that could be called by one of the scripting processors, or create your own custom processor:

ExecuteScript
ExecuteProcess
ExecuteGroovyScript

The reason processors like ConsumePOP3 do not support inbound connections is that they are designed to execute continuously on a run schedule and produce an individual FlowFile for each new email message consumed. Supporting an inbound connection raises the question: what do you do with the source FlowFile that served as the trigger? Then you also have the challenge of continuous consumption: you would need to keep producing an input FlowFile for each email account to make sure you keep consuming from every source account. Plus, this processor does not write any attributes to the outbound FlowFile to distinguish which account a message came from.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
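For illustration, the "custom script" approach could look something like the following plain-Python sketch using the standard-library poplib module. This is not a drop-in ExecuteScript body, and the attribute names (pop3.host, pop3.user, pop3.password) are made up for this example; in a real flow you would adapt it to the scripting processor's session/FlowFile API:

```python
import poplib

def parse_account(attributes: dict) -> tuple:
    """Extract (host, user, password) from FlowFile-style attributes.
    The attribute keys here are hypothetical, chosen for this sketch."""
    return (attributes["pop3.host"],
            attributes["pop3.user"],
            attributes["pop3.password"])

def fetch_messages(attributes: dict) -> list:
    """Connect with per-FlowFile credentials and return raw messages as bytes."""
    host, user, password = parse_account(attributes)
    conn = poplib.POP3_SSL(host)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _total_size = conn.stat()
        # retr() returns (response, lines, octets); rejoin the lines per message
        return [b"\r\n".join(conn.retr(i + 1)[1]) for i in range(count)]
    finally:
        conn.quit()

# Example attributes as they might arrive on a trigger FlowFile:
attrs = {"pop3.host": "mail.example.com",
         "pop3.user": "alice",
         "pop3.password": "secret"}
print(parse_account(attrs))
```

Note that a script like this still inherits the drawbacks described above: something upstream must keep generating a trigger FlowFile per account, and you would need to add your own attributes to each output message to record which account it came from.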
03-31-2026
07:18 AM
Sorry for the delayed response. We were able to mostly resolve the issue by adding a retry on the ListSMB and FetchSMB processors:

Number of Attempts: 2
Retry Back Off Policy: Penalize
Retry Maximum Backoff Period: 1 minute

To test this, we have scheduled the flow to run every 30 minutes. However, we are observing that whenever a retry happens, the processor does not run at its next scheduled time. Not sure how retry is affecting the scheduler. Thanks!