
CDF - configuring and managing isolated groups of NiFi nodes

New Contributor

Hello,

I have an HDF 3.2 cluster shared by several development groups. It is running correctly, but it is growing quickly, and we have decided to set up isolated groups of NiFi nodes so that a failure in one project does not affect all the others.

Our first idea was to install new HDF clusters, but we want to manage all the resources from a single Ambari console and also share other resources (Kafka, NiFi Registry) between the groups. However, I've read that older versions of Ambari don't support managing multiple clusters (I don't know whether this is still true).

What would be the recommended approach? Is it possible to manage isolated groups within a single cluster, or is running multiple clusters, each managed by its own Ambari, the only solution?

P.S.: If upgrading to the latest CDF version is necessary, we're open to that (we're already planning to do it in the near future).

Many thanks for your help.

 

1 ACCEPTED SOLUTION

Expert Contributor

In Ambari you can create a host config group and define the configs for those hosts instead of having them use the default configuration. You can refer to the document below for more details and steps:

https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_man...
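
For example, in a config group you only override the properties that need to differ for the hosts in that group; everything else is inherited from the default configuration. A rough sketch (the group name and values below are only an example, not from your cluster):

# Default NiFi config group - applies to all hosts unless overridden
nifi.web.http.port=9090
nifi.cluster.is.node=true

# Config group "nifi-project-b" - assigned only to project B's hosts,
# overriding just the properties that must differ
nifi.web.http.port=9091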


New Contributor

Hi!

I have registered a new host to the HDF cluster via the Ambari agent, then added the NiFi service and created a new config group for this host, where I changed the Node Identities properties to its hostname. But when I restart the NiFi service, the NiFi UI shows 1/2 nodes. It looks like it is trying to form a cluster, and I need the two nodes to be separate from each other. Is this possible?

Expert Contributor

@MaurizioMR 

It seems that even though this NiFi instance is meant to be standalone, you have set it to run in cluster mode with the property below:

nifi.cluster.is.node=true

With NiFi set to run in cluster mode, the node will go through leader election and send heartbeats to the other nodes for processing.

Hence, set this property to 'false' to run the NiFi node in standalone mode:

nifi.cluster.is.node=false
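
For reference, the relevant nifi.properties entries in each mode look roughly like this (the port and the placeholders in angle brackets are examples, not values from your cluster):

# Standalone instance: no cluster coordination
nifi.cluster.is.node=false

# Clustered node: participates in leader election and heartbeats
nifi.cluster.is.node=true
nifi.cluster.node.address=<this node's hostname>
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=<ZooKeeper quorum>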

New Contributor

Hi,

Thanks for the quick response. However, when I set the property to false, the UI still showed 1/2 nodes. In the new config group I also changed nifi.cluster.node.address to the node's own address, hoping to keep it from connecting to the other cluster, but it made no difference.

 

Maybe I didn't express myself clearly: we do not need separate standalone instances of NiFi. What we would like is to configure two or more NiFi clusters, each working with its own config group, independent of the other clusters, and if possible managed from the same Ambari interface.

I don't think we are the only ones who have ever run into this situation, but it is very difficult to find documentation on how to manage and configure more than one cluster from Ambari.

 

Regards,

Maurizio

New Contributor

Just to close this thread: we found that we were missing a setting inside the new configuration group (nifi.zookeeper.root.node). This parameter should be different in each configuration group so that each group of nodes stores its cluster state under its own base path in ZooKeeper (we didn't need to create the path ourselves; it was created as soon as we changed the config).
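
In practice each config group keeps the same ZooKeeper connect string but points at its own root node, roughly like this (the hosts and paths below are just an example, not our exact values):

# Config group for the first group of NiFi nodes
nifi.zookeeper.connect.string=zk1:2181,zk2:2181,zk3:2181
nifi.zookeeper.root.node=/nifi

# Config group for the second, isolated group of NiFi nodes
nifi.zookeeper.connect.string=zk1:2181,zk2:2181,zk3:2181
nifi.zookeeper.root.node=/nifi-group2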

Now we have both groups of NiFi instances up and running.

Thanks for the help!