
NiFi cluster changing primary node very often

Expert Contributor

We have a 2-node HDF cluster. A couple of projects recently migrated to Production use process groups that must run on the primary node only. Lately the primary node has been changing constantly, which is causing a lot of issues for those projects. I can see the server is busy with GC. I want to understand how processors configured to run on the "Primary node" only behave when the primary node keeps changing, and whether there is a workaround to manually select the primary node.

1 ACCEPTED SOLUTION

Master Mentor

@Bharadwaj Bhimavarapu

Processors within the body of a dataflow should NOT be configured to use the "Primary node" only execution strategy. The only processors that should be scheduled to run on the "Primary node" only are data-ingest processors that do not use cluster-friendly protocols. The most common non-cluster-friendly ingest processors have "List<type>" names (ListSFTP, ListHDFS, ListFTP, ListFile, ...).
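For illustration, a cluster-friendly layout (just a sketch of the common List/Fetch pattern; the processor names are an example) keeps only the listing on the primary node and lets every node share the transfer work:

ListSFTP (Execution: Primary node)  -->  FetchSFTP (Execution: All nodes)

ListSFTP emits zero-byte FlowFiles that describe the remote files, and the connection between the two processors can be distributed across the cluster (via a load-balanced connection or a Remote Process Group, depending on your NiFi version) so the fetching is not pinned to a single node.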

-

When a node loses the primary node role, it stops scheduling only those processors set for "Primary node" only execution; all of its other processors continue to run. The newly elected primary node begins executing its "Primary node" only scheduled processors. These processors are generally designed to record cluster-wide state about where the previous primary node left off, so the same processor executing on the new primary node picks up where the other stopped.
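For reference, in a cluster that shared state is held by the ZooKeeper state provider configured in state-management.xml; a sketch with placeholder values:

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- placeholder connect string; point at your own ZK quorum -->
    <property name="Connect String">zk1:2181,zk2:2181,zk3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>

Because that state lives in ZooKeeper rather than on any single node, the newly elected primary node can read the offsets the previous one recorded.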

-

This is why it is important that any processor that takes an incoming connection from another processor is not scheduled for "Primary node" only execution. If the primary node changes, you still want the original primary node to continue processing the data already queued downstream of the "Primary node" only ingest processor.

-

There is no way to designate a specific node in a NiFi cluster as the primary node. It is important to make sure that any one of your nodes is capable of executing the "Primary node" only processors at any time.
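If you want to confirm which node currently holds the role, the cluster endpoint of the NiFi REST API reports it; a quick check against an unsecured node (host and port are placeholders):

# replace nifi-host:8080 with one of your nodes
curl -s http://nifi-host:8080/nifi-api/controller/cluster

Each node entry in the response carries a "roles" list that shows which node is currently the Primary Node and which is the Cluster Coordinator.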

-

ZooKeeper is responsible for electing both the primary node and the cluster coordinator in a NiFi cluster. If your GC cycles are affecting the ability of your nodes to communicate with ZK in a timely manner, that would explain the constant elections by ZK in your cluster. My suggestion would be to increase the ZK timeouts in NiFi (the defaults are only 3 secs, which is far from ideal in a production environment). The following properties can be found in the nifi.properties file:

nifi.zookeeper.session.timeout=60 secs

nifi.zookeeper.connect.timeout=60 secs

*** If using Ambari to manage your HDF cluster, make the above changes via the NiFi configs in Ambari.
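Since long GC pauses are the root cause here, it may also be worth reviewing the JVM heap given to NiFi in bootstrap.conf; a sketch only, the 8g values below are placeholders to size against your own hardware and flow:

# bootstrap.conf -- example heap sizing, not a recommendation
java.arg.2=-Xms8g
java.arg.3=-Xmx8g

Raising the ZK timeouts only masks the symptom; if the heap is undersized, the node will still stall during full GC and miss its heartbeats.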

-

Thanks,

Matt

-

If you found this answer addressed your initial question, please take a moment to login and click "accept" on the answer.


2 REPLIES

Expert Contributor
@Matt Clarke

Please let me know your thoughts ...
