
MiNiFi to NiFi S2S load balancing does not work

Explorer

I've read the question below:
https://community.cloudera.com/t5/Support-Questions/MiniFi-to-NiFi-connection-through-load-balancer/...
but in my implementation, Site-to-Site does not balance the incoming flows from MiNiFi.

Here is my scenario:
I have a NiFi cluster with 4 nodes. In the MiNiFi setup, I've set the remote process group URL to the nifi.remote.input.host value that I've already configured in nifi.properties. Although port 1026 is open for receiving data on all nodes, when I run tcpdump I can see that only the node I mentioned above is getting data; the other nodes in the cluster do not receive anything.

NiFi version: 1.8.0
MiNiFi version: 0.5.0 (Java)
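
A minimal sketch of the tcpdump check mentioned above (run on each of the 4 nodes; 1026 is the S2S socket port from this setup):

    # Only the node whose hostname is in the RPG URL shows incoming traffic
    tcpdump -nn -i any tcp port 1026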

1 ACCEPTED SOLUTION

Super Mentor

@Arash

In your 4 node NiFi cluster, what value do you have set in the "nifi.remote.input.host" property in the nifi.properties file for each of the 4 nodes? It should be the FQDN of each individual node, not the same value on all 4 nodes.
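
For illustration, a hedged sketch of per-node settings (hostnames are hypothetical; the port and secure flag mirror this thread's setup):

    # nifi.properties on node 1
    nifi.remote.input.host=nifi-node1.example.com
    nifi.remote.input.socket.port=1026
    nifi.remote.input.secure=true

    # nifi.properties on node 2 (nodes 3 and 4 likewise, each with its own FQDN)
    nifi.remote.input.host=nifi-node2.example.com
    nifi.remote.input.socket.port=1026
    nifi.remote.input.secure=true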

From the host where MiNiFi is running, can all 4 of those FQDNs be resolved and reached over the network? If not, the MiNiFi RPG is only going to be able to send successfully to the one FQDN it can reach.

When the RPG is started, it reaches out to the URL configured in the RPG to obtain S2S details from the target host. That target host collects the host details for all currently connected nodes in the cluster and communicates them back to the client (MiNiFi). If all 4 nodes report the same configured FQDN in the "nifi.remote.input.host" property, then the client only knows of one FQDN to which it can send FlowFiles over Site-To-Site (S2S).
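
One way to see what the cluster reports, sketched here under the assumption that the HTTP S2S endpoint is reachable (for the RAW protocol the same exchange happens over the socket; the hostname is hypothetical):

    # Ask one node for the S2S peer list; with a misconfigured cluster every
    # peer entry comes back with the same hostname
    curl -k -H 'x-nifi-site-to-site-protocol-version: 1' \
      https://nifi-node1.example.com:8443/nifi-api/site-to-site/peers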

To improve redundancy in the RPG, you can provide a comma-separated list of URLs in the RPG configuration, so that if any one node is down the RPG can try to fetch S2S details from the next host in the list (see the sketch below).
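
A hedged sketch of what that might look like in the MiNiFi config.yml (hostnames, port id, and port name are hypothetical; key names follow the MiNiFi Java config format, so verify them against your version):

    Remote Process Groups:
      - name: NiFi S2S
        # Comma-separated list: peer discovery falls back to the next URL if one node is down
        url: https://nifi-node1.example.com:8443/nifi,https://nifi-node2.example.com:8443/nifi,https://nifi-node3.example.com:8443/nifi,https://nifi-node4.example.com:8443/nifi
        transport protocol: RAW
        Input Ports:
          - id: 0a1b2c3d-0000-1000-8000-000000000000
            name: from-minifi
            max concurrent tasks: 1
            use compression: false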

Hope this helps,

Matt


3 REPLIES


What you describe does not (yet) appear to conflict with the explanation in the linked thread. It seems that NiFi attempts to load balance only when needed. Perhaps try routing your incoming messages through some heavy processors, so that the node which receives them consumes its resources quickly, and see whether it starts to load balance once you are hitting the limits.


- Dennis Jaheruddin

If this answer helped, please mark it as 'solved' and/or if it is valuable for future readers please apply 'kudos'.

Explorer

Thanks, @DennisJaheruddin.

I have a 4-node cluster. On the NiFi side, I stopped the node whose hostname (shown as "Hostname" below) is set as nifi.remote.input.host, but on the MiNiFi side I get the error below:

2021-02-02 10:03:18,105 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector Could not communicate with "Hostname":1026 to determine which nodes exist in the remote NiFi cluster, due to java.net.ConnectException: Connection refused
2021-02-02 10:03:18,105 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@649b3ed4 Unable to refresh Remote Group's peers due to Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster
2021-02-02 10:03:23,155 ERROR [NiFi Site-to-Site Connection Pool Maintenance] o.a.n.r.io.socket.ssl.SSLSocketChannel org.apache.nifi.remote.io.socket.ssl.SSLSocketChannel@4f8aa02f Failed to connect due to {} java.net.ConnectException: Connection refused

---------

I also checked the heavy-load scenario, but even when the load on "Hostname" is high, load balancing still does not occur. I use SSL for communication, S2S with the RAW transport protocol, and "Hostname" is set as the URL in the remote process group on the MiNiFi side.
