Created 09-29-2021 09:32 AM
I am working off of a 3-node NiFi cluster running a dataflow that is kicked off by a GenerateFlowFile processor on the primary node, performs some NiFi processing, and then writes the files to the server where I will run an ExecuteStreamCommand Python script on them. The problem I'm running into is that I can't figure out a way to ensure that the processors picking up the first output run on the same node as the processors that produced it.
Created 09-30-2021 05:48 AM
@TRSS_Cloudera
Your use case is not completely clear to me.
Each node in a NiFi cluster executes its own copy of the dataflow against its own set of FlowFiles (FlowFiles are what the NiFi components execute upon). NiFi components can be processors, controller services, reporting tasks, input/output ports, RPGs, etc. Each node maintains its own set of repositories. Two of those repositories (flowfile_repository and content_repository) hold the parts that make up a FlowFile.
In a NiFi cluster, one node will always be elected as the Cluster Coordinator and one as the Primary Node (sometimes the same node is elected to both roles). Which node is elected to either role can change at any time.
Your GenerateFlowFile processor, which you have configured to execute on "Primary Node" only, will produce FlowFile(s) only on the currently elected primary node. Your description did not cover how your dataflow writes the files to the server on which you will then run an ExecuteStreamCommand Python script.
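As an aside, since ExecuteStreamCommand pipes the incoming FlowFile's content to the command's stdin and captures stdout as the outgoing FlowFile's content (when Output Destination Attribute is not set), a Python script for that processor can often read stdin and write stdout rather than touching files on disk at all, which sidesteps some node-affinity concerns. A minimal sketch, where the transformation shown is only a placeholder for your real logic:

```python
import sys

def transform(text: str) -> str:
    # Placeholder transformation: upper-case the content.
    # Replace with whatever processing your script performs.
    return text.upper()

if __name__ == "__main__":
    # NiFi's ExecuteStreamCommand streams the FlowFile content to
    # this script's stdin; whatever is written to stdout becomes
    # the content of the outgoing FlowFile.
    sys.stdout.write(transform(sys.stdin.read()))
```

Whether this applies depends on whether your script truly needs the files on the server's filesystem, which your description does not cover.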
If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.
Thank you,
Matt
Created 10-05-2021 11:33 PM
@TRSS_Cloudera Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
Regards,
Vidya Sargur,