
NiFi Cluster


I have NiFi 1.9.2 and I'm trying to run it on 3 different ports on the same machine (localhost). I can't seem to find anything on the internet that can help me with that.

Can we do this?

I need your help, guys.

Thanks in advance


7 REPLIES

Super Guru

I believe it is possible.  You will need to create 3 separate installs of NiFi on the same node, each with different ports.  You will also need to do some custom configuration to make sure they communicate together as a cluster.  This would be very advanced, and running them all on the same node is not recommended.

 

I would highly recommend building a 3 node cluster using Ambari and installing NiFi in that manner, as it will make things much easier.

Super Mentor

@ChampagneM12 

 

Running multiple NiFi nodes within the same NiFi cluster on the same system is not recommended, but it can be done.

 

This is possible by editing the nifi.properties file for each NiFi node so that it binds to its own unique ports for the following settings:

nifi.remote.input.socket.port=
nifi.web.http(s).port=
nifi.cluster.node.protocol.port=
nifi.cluster.node.load.balance.port=

On startup, NiFi will bind to these ports, and multiple nodes on the same server cannot bind to the same port.
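For example, a minimal sketch across three co-located nodes might look like this (the port numbers are arbitrary choices for illustration, not values you must use):

# node 1 (nifi-1/conf/nifi.properties)
nifi.web.http.port=8080
nifi.remote.input.socket.port=10000
nifi.cluster.node.protocol.port=11000
nifi.cluster.node.load.balance.port=6342

# node 2 (nifi-2/conf/nifi.properties)
nifi.web.http.port=8081
nifi.remote.input.socket.port=10001
nifi.cluster.node.protocol.port=11001
nifi.cluster.node.load.balance.port=6343

# node 3 (nifi-3/conf/nifi.properties)
nifi.web.http.port=8082
nifi.remote.input.socket.port=10002
nifi.cluster.node.protocol.port=11002
nifi.cluster.node.load.balance.port=6344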

Also keep in mind that multiple NiFi instances can NOT share resources such as the repositories (database, flowfile, content, or provenance) or the local state directories, so make sure those are all set to unique paths per node in the NiFi configuration files (nifi.properties, state-management.xml, authorizers.xml).
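For example (the /opt/nifi-N install paths below are illustrative), each node's nifi.properties should point at its own directories:

# node 1
nifi.database.directory=/opt/nifi-1/database_repository
nifi.flowfile.repository.directory=/opt/nifi-1/flowfile_repository
nifi.content.repository.directory.default=/opt/nifi-1/content_repository
nifi.provenance.repository.directory.default=/opt/nifi-1/provenance_repository

# node 2
nifi.database.directory=/opt/nifi-2/database_repository
nifi.flowfile.repository.directory=/opt/nifi-2/flowfile_repository
nifi.content.repository.directory.default=/opt/nifi-2/content_repository
nifi.provenance.repository.directory.default=/opt/nifi-2/provenance_repository

# node 3 follows the same pattern; the local state directory configured in
# state-management.xml must likewise be unique per node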

This will allow you to have multiple nodes loaded on the same server in the same NiFi cluster. You will, however, potentially run into issues when you start building your dataflows...

Each instance will run its own copy of the dataflows you construct.  So any processor or controller service you add that sets up a listener will not work correctly, as only one node in your cluster will successfully bind to the configured port (there is no workaround for this).
 
So overall success here will depend in part on what kind of dataflows you will be building.

 

Hope this helps,

Matt


@MattWho @stevenmatison, hey guys, thank you for responding on such short notice.

You are both not recommending this; I hear you.

The goal of my architecture is to have three NiFi instances as a starting point, all communicating with each other, but if one goes down (the leader), one of the two that are still running ensures the safety and ingestion of data as if nothing had happened.

I used the same architecture for Kafka. In Kafka we have brokers; ZooKeeper elects one as the leader, and if the leader fails and is no longer available, ZooKeeper elects a new one to take the lead and ingest data, knowing that we don't lose any data at any point. In Kafka I created three brokers, one each on localhost:9092, localhost:9093, and localhost:9094.

Can this logic not work in NiFi? If not, are you saying I should work with three different machines, and will that guarantee the safety and ingestion of data?

Thank you in advance, guys!

PS: the use of both NiFi and Kafka is for a big company; the goal is to manage failover.

Super Mentor

@ChampagneM12 

 

When you install a NiFi cluster, you start with a blank canvas, so there is no data ingestion at first.  The user must construct dataflow(s) to meet their individual use cases, as I am sure you know.  How data ingestion behaves through an outage depends on how you implement those flows.

Let's assume you are ingesting from Kafka into NiFi, since you mentioned you use Kafka.  You would likely start that dataflow with a ConsumeKafka processor.  Let's also assume you have a 3 node NiFi cluster and the Kafka topic you are consuming from has 12 partitions.

Since all nodes in your cluster will be executing the ConsumeKafka processor, each will be a consumer of that topic.  With a single concurrent task (the default) configured on ConsumeKafka, each of the 3 NiFi nodes' consumers will be assigned 4 partitions.  If you set the Concurrent Tasks to 4, you now have a total of 12 consumers (one for each Kafka partition).  Now let's assume one of your NiFi nodes goes down: Kafka will see the number of consumers drop from 12 to 8 and rebalance, so consumption will continue, with some of those consumers now assigned multiple partitions until the down NiFi node comes back online.
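As a rough sketch of that rebalance (the partition numbering is illustrative):

# 3-node cluster, ConsumeKafka with 4 concurrent tasks, 12-partition topic
# Normal operation: 12 consumers, one partition each
node1: partitions 0,1,2,3     node2: partitions 4,5,6,7     node3: partitions 8,9,10,11

# node3 goes down: Kafka rebalances the consumer group across the remaining
# 8 consumers, so some consumers are now assigned two partitions
node1: partitions 0,1,2,3,8,9     node2: partitions 4,5,6,7,10,11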

That is just one scenario.

In the case of a NiFi Listen-type processor (for example, ListenTCP), a TCP socket listener is started on each node in the NiFi cluster on the same port.  In this case, it would be the client or some external mechanism that would need to handle failover to a different node in the event a NiFi node goes down.  This is typically handled with an external load balancer that distributes data across all the NiFi nodes, or switches to a different node when one goes down.
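A minimal sketch of that load balancer pattern, assuming HAProxy and assuming each node's ListenTCP is configured on port 9000 (both are illustrative choices, not details from this thread):

# haproxy.cfg (hypothetical) - fan TCP data out across the NiFi nodes,
# skipping any node whose listener is down (the "check" health check)
frontend nifi_listen_tcp
    mode tcp
    bind *:9000
    default_backend nifi_nodes

backend nifi_nodes
    mode tcp
    balance roundrobin
    server nifi1 nifi-node1:9000 check
    server nifi2 nifi-node2:9000 check
    server nifi3 nifi-node3:9000 check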

In the use case of something like ListSFTP, the processor would be configured to run on the "primary node" only.  ZooKeeper is responsible for electing a primary node and a cluster coordinator in a NiFi cluster.  NiFi processor components like ListSFTP are designed for primary node execution only and record what has been listed in cluster state (within ZooKeeper).  If the currently elected primary node goes down, another node in the NiFi cluster is elected the new primary node, and the "primary node only" processors are started on that new node.  The last state reported to ZooKeeper by the previous primary node is pulled from ZooKeeper by the processor on the new primary node, and it picks up the listing from there.  Again, you have redundancy.
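For reference, that cluster state lives under the ZooKeeper state provider configured in state-management.xml; a minimal sketch, where the connect string hosts are placeholders for your own ZooKeeper ensemble:

<!-- state-management.xml: cluster-wide state provider backed by ZooKeeper.
     nifi.properties must reference it via nifi.state.management.provider.cluster=zk-provider -->
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">zk1:2181,zk2:2181,zk3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>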

The only place in NiFi where you can have a data delay is when a NiFi node goes down while it still has active data in its connection queue(s).  Other nodes will not have access to the data on the down node to take over work on it.  It will remain in that node's flowfile and content repositories until the node has been restored and can continue processing those queued FlowFiles.  So it is important to protect those two NiFi repositories using RAID-configured drives.  You can minimize the impact in such cases through good flow design and the use of back pressure to limit the number of FlowFiles that can queue on a NiFi node.
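For example, NiFi 1.9.2 lets you set the default back pressure thresholds applied to newly created connections in nifi.properties (the values below are the shipped defaults; existing connections keep whatever thresholds were configured on them in the UI):

# default back pressure thresholds applied to newly created connections
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB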
Also keep in mind that while the flowfile and content repositories are tightly coupled to the flow.xml.gz, these items are not tightly coupled to a specific piece of hardware.  You can stand up an entirely new node for your cluster and move the flow.xml.gz, content repository, and flowfile repository onto that node before starting it, and that new node will continue processing the queued FlowFiles.

Hope this helps,

Matt


Okay @MattWho, understood!

Can you help me, if possible, configure communication between 2 NiFi instances?

I have 3 NiFi installs and I want them to communicate with each other, but I'm trying to configure just two for now so I can see if it works.

I have configured one on 8080:

# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8080
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=localhost
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=

and one on 8081:

# Site to Site properties
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8081
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=localhost
nifi.web.http.port=8081
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=

The problem is that it shuts down just after I start it. Am I doing something wrong?

Super Mentor

@ChampagneM12 

 

Please start a new thread for your new issue.  We try to keep one question per thread to avoid confusion and to help users who may have the same question find it more easily.  Ping me in your new thread and I would be happy to help as much as I can.

 

Matt


Alright!

Thank you @MattWho