NiFi Load balancing, internal or external

Explorer

We have a three-node NiFi cluster with an F5 load balancer in front of it. The main flow starts with a ListenHTTP processor that waits for files from AWS. The F5 has a single pool that sends files round robin to the NiFi nodes. How would I configure the flow to use NiFi load balancing instead of the F5, or can they be used together? What confuses me is that I think I would have to send everything to a single NiFi node, and the connection after ListenHTTP would then load-balance across the cluster. That single node is a single point of failure, so I cannot see the benefit of NiFi load balancing. Can someone explain, please? Cheers.

1 ACCEPTED SOLUTION

Super Mentor

@Siddo 

The strategy you are currently using is the best option for a use case where the client is sending/pushing data to listeners across your NiFi cluster nodes. Whenever a client is pushing data to NiFi, this setup avoids, as you mentioned, having a single point of failure.
If a load balancer can't be used, it becomes the client's responsibility to detect delivery problems and switch to delivering to a different node.
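If you ever did have to point the client at the nodes directly (no F5), that failover logic would live in the client. Here is a minimal sketch of the idea, assuming ListenHTTP is running on port 8081 with its default "contentListener" base path; the hostnames, port, and payload are hypothetical:

import requests

# Hypothetical node addresses; ListenHTTP's default Base Path is "contentListener".
NIFI_NODES = [
    "https://nifi-node1.example.com:8081",
    "https://nifi-node2.example.com:8081",
    "https://nifi-node3.example.com:8081",
]

def deliver(payload: bytes) -> bool:
    """Try each node in turn until one accepts the data."""
    for node in NIFI_NODES:
        try:
            resp = requests.post(f"{node}/contentListener", data=payload, timeout=10)
            if resp.status_code == 200:
                return True  # delivery succeeded on this node
        except requests.RequestException:
            continue  # node down or unreachable; try the next one
    return False  # every node refused or was unreachable

An external LB like your F5 does exactly this health-checking and rerouting for you, which is why it remains the better fit for push-style ingest.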

Load balancing within NiFi's dataflows is the best option when the dataflow is consuming from a source system. Some data-consumption methods are not cluster friendly (for example, FTP). This is because every node in a NiFi cluster executes the same flow.xml.gz: if you had, for example, the ListSFTP/GetSFTP processors running on every node, you would have data duplication and potential issues as every node tried to consume the same data. In this scenario you would configure the processor to execute on the primary node only, and then use load-balanced (LB) connections to immediately redistribute those FlowFiles across your cluster before doing further processing.

This is why the List and Fetch processor pairs were created for these typically non-cluster-friendly sources. A ListSFTP produces FlowFiles with zero content and only attributes describing where to fetch each FlowFile's content. Those zero-byte FlowFiles quickly load-balance across the cluster, where the FetchSFTP processor fetches the actual content for each FlowFile's specific data file and inserts it into the FlowFile. This type of setup also avoids a single point of failure, since loss of the currently elected primary node (where the data lister/consumer is running) results in a new node being elected as the primary node. That new primary node reads state from the cluster state provider and begins listing where the previously elected node's list processor stopped.
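As background, the node-to-node transfers that LB connections perform are governed by the cluster load-balance properties in nifi.properties. The values below are the usual shipped defaults in recent NiFi releases; verify them against your version's admin guide:

nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec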

So you can see that each approach has very specific benefits and use cases.

Another scenario: even with an external F5 LB, you may find that one node in your cluster ends up with a larger share of the workload (maybe one node ends up with the bulk of the larger data files). That data can be redistributed on connections where such single-node bottlenecks occur, to re-balance the load at that point in the dataflow. So at times a combination may make sense as well, but I would not apply this strategy unless needed, since it adds to network usage.

NiFi's internal LB connections can also be used to move all data to a single node for some use cases. Let's say a batch of data is spread out across multiple NiFi nodes and you want to merge it into a single FlowFile. Each NiFi node works only on the FlowFiles held on that node, but using an LB connection at specific spots in your flow allows you to move all like data to the same node before a merge-type processor.
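For reference, a connection's Load Balance Strategy setting offers "Do not load balance", "Round robin", "Single node", and "Partition by attribute". For the merge scenario above, "Single node" (everything to one node) or "Partition by attribute" (like data, keyed on a common attribute, to the same node) are the ones that apply.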

Hope this helps,

Matt


2 REPLIES

Explorer

Thanks Matt,

Stunningly detailed reply, and very much appreciated.

Dave