Member since
07-30-2019
3467
Posts
1641
Kudos Received
1015
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 416 | 03-23-2026 05:44 AM |
| | 323 | 02-18-2026 09:59 AM |
| | 568 | 01-27-2026 12:46 PM |
| | 1001 | 01-20-2026 05:42 AM |
| | 1326 | 01-13-2026 11:14 AM |
04-24-2026
08:15 AM
@nisaar What version of Apache NiFi is being used? How is your PutSFTP processor configured? Can you share the complete stack trace from the nifi-app.log?

Matt
04-24-2026
08:09 AM
@nisaar Just so I am clear on your dataflow setup:

- ListSMB processor configured to use cron scheduling (gets scheduled to run every ~30 mins)
- ListSMB "success" relationship routed via a connection to the FetchSMB processor
- FetchSMB configured to use Timer Driven scheduling

The "retry" setting on a relationship controls whether a FlowFile remains on the inbound connection to the processor or gets routed immediately to the destination relationship. The number of retries is how many attempts will be made to reprocess the source FlowFile before it is finally routed to the destination relationship. You have 2, so that FlowFile will get terminated if it is not successfully fetched after 2 failed attempts.

I am not clear on this statement you made: "On first run it fails to either list/fetch the file and the retry kicks in and the file is listed and fetched successfully." There is no "retry" on ListSMB. It simply gets scheduled to run at the configured run schedule and lists based upon previously stored state and processor property configuration.

1. How are files being added to the source SMB directory?
2. As files are added, how is the last modified timestamp being updated? (If they are being moved to the SMB share as an atomic move, the timestamp on the file may not change, which can result in ignored files because another file already listed resulted in state holding a more recent timestamp.)
3. What "Listing Strategy" are you using? I recommend using "Tracking Entities" to avoid the issue from question 2 above.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
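The per-relationship "retry" behavior described above can be sketched in plain Python. This is an illustrative model only, not NiFi code; the function names, the use of `OSError` for a failed fetch, and the tuple return shape are all assumptions for the sketch:

```python
def fetch_with_retries(flowfile, attempt_fetch, max_retries=2):
    """Model of NiFi's per-relationship 'retry' setting: the FlowFile
    stays on the inbound connection and is re-attempted up to
    max_retries times before being routed to the failure relationship."""
    for attempt in range(1, max_retries + 1):
        try:
            return ("success", attempt_fetch(flowfile))
        except OSError:
            if attempt == max_retries:
                return ("failure", flowfile)
    # unreachable: every path returns inside the loop

# Usage: a fetch that fails on the first attempt, then succeeds.
attempts = {"n": 0}
def flaky_fetch(ff):
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise OSError("connection reset")
    return ff.upper()

print(fetch_with_retries("file-a", flaky_fetch))  # ('success', 'FILE-A')
```

With `max_retries=2`, a FlowFile whose fetch fails twice in a row ends up on the failure relationship, matching the "terminated after 2 failed attempts" behavior above.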
04-23-2026
06:56 AM
@nisaar When you execute the SFTP CLI, are you doing so as the same user that owns the running NiFi process? If not, switch to the NiFi service user and then try your SFTP CLI command again to see if it is successful or if it asks you to accept the host key for the SFTP target. If you are asked to accept the host key, try running your NiFi PutSFTP processor again after you accept the host key on the NiFi service user account.

It does not matter which user is authenticated in NiFi or which user created the dataflow on the NiFi canvas. All components are executed as the NiFi service user.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
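The check described above might look like this on a typical Linux install. The service account name `nifi`, the host `sftp.example.com`, and the user `user` are placeholders; substitute your own values:

```shell
# Become the NiFi service user (account name is a placeholder).
sudo su - nifi

# Check whether the SFTP target's host key is already trusted:
ssh-keygen -F sftp.example.com

# If it is not, either connect once interactively and answer "yes"
# to the host-key prompt...
sftp user@sftp.example.com

# ...or pre-populate known_hosts non-interactively:
ssh-keyscan -H sftp.example.com >> ~/.ssh/known_hosts
```

Once the host key is in the service user's `~/.ssh/known_hosts`, PutSFTP should no longer be blocked by host-key verification.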
04-20-2026
04:54 AM
1 Kudo
@RohanBajaj This is a very old thread about a very old version of Apache NiFi, from the very early days of NiFi's introduction of the load-balancing capability on connections. I recommend you start a new community question with the specifics of any issue you are having, to get the best possible assistance from the community members. Thank you, Matt
04-10-2026
07:00 AM
@donaldo71 I have not been able to identify a known issue that aligns with the description you have shared. That is an interesting sequence of events on a single FlowFile (SEND followed by clones).

Can you share the "Relationships" configuration of your PutSQL processor? Make sure you have not checked the "retry" box on the "success" relationship.

Something you might want to try, to see if the same issue persists, is to check the "retry" box on the "retry" relationship. This allows the original FlowFile to remain in the inbound connection for up to the configured number of retry attempts (default 10) before being routed to the "retry" relationship. I'd be curious about your observations after the above configuration change.

Would you be willing to download the flow definition JSON for this dataflow and share it?

Is this the full "SQL Statement" set in your PutSQL processor?

UPDATE tbl SET status = 'proceed', startDate = GETDATE() WHERE messageId = ${messageId}

Where are you utilizing those two attributes that go missing? Can you share your UpdateAttribute processor configuration?

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
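To see why a missing attribute matters here, a heavily simplified model of how a `${attr}` reference in a statement like the one above gets resolved from FlowFile attributes. This is a sketch only, not NiFi's implementation; real NiFi Expression Language also supports functions, quoting, and nested expressions:

```python
import re

def resolve_el(statement, attributes):
    """Substitute NiFi-style ${attr} references in a statement with
    FlowFile attribute values, failing loudly when one is missing."""
    def repl(match):
        name = match.group(1)
        if name not in attributes:
            raise KeyError(f"missing FlowFile attribute: {name}")
        return attributes[name]
    return re.sub(r"\$\{([\w.]+)\}", repl, statement)

sql = ("UPDATE tbl SET status = 'proceed', startDate = GETDATE() "
       "WHERE messageId = ${messageId}")
print(resolve_el(sql, {"messageId": "42"}))
# If the messageId attribute has gone missing (as described above),
# resolve_el(sql, {}) raises KeyError instead of producing valid SQL.
```

In the sketch a missing attribute raises immediately; in NiFi the symptom would instead be a malformed or empty substitution in the prepared statement, which is why the attribute loss matters.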
04-08-2026
11:23 AM
@donaldo71 It would be helpful to know the full Apache NiFi version being used so I can check whether any known issues align here.

I assume you have a prepared SQL statement configured in your PutSQL processor that uses NiFi Expression Language to insert values from the source FlowFile's attributes?

Are you saying that on the very first failed execution of a "retry"-routed FlowFile (the attempt that still had the "formatDateFrom" attribute set), the exception was no different from all subsequent retry attempts made with the FlowFile now missing that attribute?

Thank you, Matt
04-08-2026
05:56 AM
@donaldo71 I am trying to follow your flow description here and am not very clear on it. Can you share your flow definition and indicate where you are seeing the failure? What version of Apache NiFi are you using?

You have an UpdateAttribute processor with its success relationship routed via a connection to the PutSQL processor. How is PutSQL configured? When "something is wrong", what exception is being thrown in the nifi-app.log?

The more detail you can provide, the better the chance community members may be able to provide guidance and suggestions.

Thank you, Matt
04-06-2026
01:34 PM
@AlokKumar You are correct that the ConsumePOP3 processor does not support an inbound connection. Even if it did, the username and password fields do not support NiFi Expression Language, so you could not pass either of those values in from a source FlowFile. There aren't any other native processors that support this dynamic-credentials use case. You would need to create a custom script that could be called by one of the scripting processors, or create your own custom processor:

- ExecuteScript
- ExecuteProcess
- ExecuteGroovyScript

The reason processors like ConsumePOP3 do not support inbound connections is that they are designed to execute continuously on a run schedule and produce an individual FlowFile for each new email message consumed. So supporting an inbound connection raises the question: what do you do with the source FlowFile that you would use as the trigger? Then you also have the challenge of continuous consumption: you would need to keep producing an input FlowFile for each email account to make sure you keep consuming from each source account. Plus, this processor does not write any attributes to the outbound FlowFiles to distinguish which account a message came from.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
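If you go the scripting route, a minimal sketch of the idea in Python using the standard-library `poplib`. The attribute names (`pop3.host`, `pop3.user`, `pop3.password`) are hypothetical; inside an ExecuteScript processor you would read them from the incoming FlowFile rather than from a plain dict:

```python
import poplib

REQUIRED = ("pop3.host", "pop3.user", "pop3.password")

def credentials_from_flowfile(attrs):
    """Pull POP3 credentials out of FlowFile attributes, failing fast
    if any are missing so the FlowFile can be routed to failure."""
    missing = [k for k in REQUIRED if k not in attrs]
    if missing:
        raise ValueError(f"missing attributes: {missing}")
    return tuple(attrs[k] for k in REQUIRED)

def consume(attrs):
    """Connect to one account and return its message count.
    Network call shown for shape only."""
    host, user, password = credentials_from_flowfile(attrs)
    conn = poplib.POP3_SSL(host)
    conn.user(user)
    conn.pass_(password)
    count, _ = conn.stat()
    conn.quit()
    return count
```

Note this sketch inherits the challenges mentioned above: you would still need to re-trigger it per account to keep consuming, and you would want to write the account identifier onto each outbound FlowFile yourself.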
03-23-2026
05:44 AM
1 Kudo
@nisaar The exception indicates an initial connection issue resulting in a failure to complete the connection. This would be a network or server-side issue and not a client (ListSMB/FetchSMB) issue.

"Usually the files listed and fetched are done by Primary node itself"

This statement is not clear. What does "Usually" mean? The ListSMB processor should be configured to execute on the "Primary node" only, to prevent multiple nodes in your NiFi cluster from listing the same files multiple times. If the ListSMB processor is configured for "Primary node" execution and you are seeing FlowFiles specific to this flow being listed on different nodes, then the node that was elected as primary node is changing. I'd suggest taking a closer look at the logs or node events via the NiFi UI to see why the primary node role is changing nodes. Maybe you are experiencing some long stop-the-world garbage collection pauses (which could lead to timed-out connections). Maybe your primary node's core load average is exceptionally high as well, since you are not distributing the workload across all your nodes, or you have concurrent tasks set too high.

- How many concurrent tasks do you have configured on the FetchSMB processor?
- Have you inspected the SMB server logs at the times of these failed connections for any errors or events during the connection attempts?
- How many nodes are in your NiFi cluster?
- Is there a reason you are not using load balancing on the connection between ListSMB and FetchSMB, so that all your NiFi cluster nodes share the workload of fetching the content and processing it?
- Since it is an intermittent failure, have you built retry into your design? You can set "retry" on the failure relationship, which triggers NiFi to re-queue the failed FlowFile so it is retried a configurable number of times before finally being routed to the connection containing the "failure" relationship.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
03-20-2026
06:05 AM
@nisaar The ListSMB processor only fetches metadata about the files in the target SMB location. For each file found it creates a 0-byte NiFi FlowFile that includes a bunch of metadata that can be used later by the FetchSMB processor to fetch the content. The List<type> and Fetch<type> processors are used to make sure one node in a multi-node NiFi cluster is not doing all the heavy work. The List<type> processor would be configured to run on "Primary Node" only. The success relationship would be connected to FetchSMB via a connection. That connection would then need to be configured to load-balance the 0-byte FlowFiles across all your NiFi nodes so that each could fetch a fair share of the content and process a fair share of the workload of this dataflow.

- What are the differences between the files that fail on content fetch versus those that are successful? Are the failing files larger, resulting in a timeout exception?
- Are those that are timing out always being fetched by one specific node in your NiFi cluster?
- Have you verified that all nodes can successfully connect to the SMB server?
- Have you tried increasing the timeout set in the SmbjClientProviderService used by the SMB processors? Try setting it to 60 seconds or higher to see if the failed files can successfully fetch the content from SMB.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt