Created 09-10-2025 07:06 AM
Hello everyone,
I need help understanding whether the NiFi Registry is appropriate for the kind of management I need to do in my case.
I have a main NiFi cluster and the usual environments used in software development, but in the prod environment the cluster is going to be replaced by several different, independent clusters.
I will also have a packaging logic, meaning that, for example, one of the production clusters might have a certain process group while another might not.
Versioning will also be managed through Git; does that conflict with or cause any issues for the Registry?
Does the Registry allow updating only certain process groups depending on the cluster I am working on, even if the shared dev environment contains all the packages? Moreover, how does the Registry work in the backend? For example, do ALL the PGs get downloaded, even if only a few are going to be installed? Security is the main concern here, because I can't download onto a cluster a PG that is not supposed to be there, even if I am not actually going to install it on the NiFi client.
I would attach an image to better explain the kind of architecture I am going to have, but it seems I'm lacking permissions here, so I'll post a link instead: https://imgur.com/78H6og2.jpg
Created 09-10-2025 07:16 AM
@Alexm__ Welcome to the Cloudera Community!
To help you get the best possible solution, I have tagged our NiFi experts @MattWho @mburgess who may be able to assist you further.
Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
Regards,
Diana Torres
Created 09-10-2025 08:00 AM
Welcome to the Cloudera Community.
NiFi-Registry provides a mechanism for version controlling NiFi Process Groups (PG).
NiFi-Registry can be configured to persist version-controlled PGs in Git rather than locally within NiFi-Registry.
Authorization policies set within NiFi-Registry control who can start version control and into which Registry bucket the version-controlled flow is stored.
Authorization policies also control who can deploy a flow from NiFi-Registry onto a NiFi instance/cluster.
A typical setup would have one NiFi-Registry that is accessible by all your Dev and Prod NiFi deployments. When your Dev NiFi version controls a PG, that version-controlled PG flow definition is uploaded to NiFi-Registry within a defined bucket. The PG on your Dev NiFi is now tracking against that version-controlled flow. If changes are made to the flow on your Dev NiFi, that NiFi will report "local changes" on the PG, which can then be committed as another version of that already version-controlled flow.
Flows that have been version controlled to a NiFi-Registry are NOT automatically deployed to other NiFi instances/clusters that have access to this same NiFi-Registry. A NiFi-Registry authorized user on one of those other clusters would need to initiate the loading of that version-controlled flow on each of the prod NiFis. So controlling who has access to specific NiFi-Registry buckets is important. This allows you to selectively deploy specific PGs to different prod environments.

Once these flows are deployed, they will also be tracked against what is in NiFi-Registry. This means that if someone commits a newer version of a flow to NiFi-Registry, any prod env tracking against that flow will show an indicator on the PG that a newer version is available. An authorized user would be required to initiate the change to that newer version (it is not automatically deployed).
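To make the "authorized user initiates the load" step above concrete: importing a specific version of a flow from NiFi-Registry onto a prod NiFi can also be done through the NiFi REST API. A minimal sketch in Python that only builds the request body (the IDs are placeholders, and the endpoint path in the docstring assumes a NiFi 1.x deployment; it is not taken from this thread):

```python
import json


def import_flow_payload(registry_id, bucket_id, flow_id, version, x=0.0, y=0.0):
    """Build the JSON body for importing a version-controlled flow as a new
    process group, i.e. POST /nifi-api/process-groups/{parentId}/process-groups
    on NiFi 1.x. All IDs here are hypothetical placeholders."""
    return {
        "revision": {"version": 0},  # new component, so revision starts at 0
        "component": {
            "position": {"x": x, "y": y},  # canvas coordinates for the new PG
            "versionControlInformation": {
                "registryId": registry_id,  # the Registry Client id in NiFi
                "bucketId": bucket_id,      # bucket the user is authorized for
                "flowId": flow_id,
                "version": version,         # pin the exact flow version
            },
        },
    }


payload = import_flow_payload("reg-1", "bucket-prod", "flow-abc", 3)
print(json.dumps(payload, indent=2))
```

A pipeline or script would POST this body (with appropriate TLS client credentials) to the target cluster; because bucket authorization is enforced by NiFi-Registry, a cluster's user can only pull flows from buckets it is allowed to see.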
Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
Thank you,
Matt
Created 09-12-2025 03:17 AM
Hi @MattWho, thank you for the quick answer, the information you gave me is useful!
Forgive me if I ask another question. Does all of this support an Azure DevOps integration, in which deployment to prod would be handled through the DevOps pipelines? Or do you see any issues that are specific to a DevOps-NiFi setup?
Created 09-12-2025 05:59 AM
@Alexm__
In order for NiFi to communicate with NiFi-Registry, NiFi needs to have a "NiFiFlowRegistryClient" added to the "Registry Clients" section in NiFi under Controller Settings.
An SSL Context Service (in which you can define a specific keystore and truststore, which may or may not be the same keystore and truststore your NiFi uses) will be needed, since a mutual TLS handshake MUST be successful between NiFi and NiFi-Registry.
So for your question: as long as there is network connectivity between your NiFi(s) and the NiFi-Registry, this can work. The user identities in NiFi that will be authorized to perform version control will also need to be authorized in your NiFi-Registry for specific buckets.
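The Registry Client setup described above can itself be automated. As a hedged sketch, here is a Python helper that builds the request body for creating a Registry Client; the /nifi-api/controller/registry-clients endpoint and this payload shape assume NiFi 1.x (the API changed in later versions), and the name/URI values are placeholders:

```python
def registry_client_payload(name, uri):
    """JSON body for creating a Registry Client, e.g.
    POST /nifi-api/controller/registry-clients on NiFi 1.x.
    Name and URI are illustrative, not values from this thread."""
    return {
        "revision": {"version": 0},  # new component, revision starts at 0
        "component": {
            "name": name,  # label shown under Controller Settings
            "uri": uri,    # base URL of the NiFi-Registry instance
        },
    }


payload = registry_client_payload(
    "prod-registry", "https://registry.example.com:18443"
)
```

The actual POST would be made with the mutual-TLS client certificate mentioned above, since NiFi authenticates the caller before applying its authorization policies.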
Thank you,
Matt
Created 09-12-2025 11:41 AM
While I have never done anything myself with Azure DevOps pipelines, I don't see why this would not be possible.

Dev, test, and prod environments will likely have slight variations in NiFi configuration (source and target service URLs, usernames/passwords, etc.). So when designing your Process Group dataflows, you'll want to take that into account and utilize NiFi's Parameter Contexts to define such variable configuration properties. Sensitive properties (passwords) are never passed to NiFi-Registry, so any version-controlled PG imported into another NiFi will not have the passwords set.

Once you version control that PG, you can deploy it through rest-api calls to other NiFi deployments. The first time it is deployed, it will simply import the Parameter Context used in the source (dev) environment. You would need to modify that Parameter Context in the test and prod environments to set passwords and alter any other parameters as needed by each unique env. Once the modified Parameter Context of the same name exists in the other environments, promoting new versions of dataflows that use that Parameter Context becomes very easy: the updated dataflows will continue to use the local env Parameter Context values rather than those used in dev. If a new parameter is introduced to the Parameter Context, it simply gets added to the existing Parameter Context of the same name in the test and prod envs.
So there are some considerations to account for in your automated promotion of version-controlled dataflows between environments.
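One way a DevOps pipeline could handle the per-environment Parameter Context values described above is to overlay env-specific overrides (including the sensitive values that NiFi-Registry never carries) onto the parameters imported from dev, before submitting the merged list in a parameter-context update request. A minimal, hypothetical sketch; the parameter names and values are invented for illustration:

```python
def overlay_parameters(base_params, env_overrides):
    """Merge environment-specific values over the parameter list imported
    from dev. `base_params` is a list of {"name": ..., "value": ...} dicts
    as they would appear in a Parameter Context; `env_overrides` maps
    parameter names to the values this environment should use (e.g. prod
    URLs and passwords that NiFi-Registry never stores)."""
    merged = {p["name"]: dict(p) for p in base_params}
    for name, value in env_overrides.items():
        # Override an existing parameter, or add a new one if dev
        # introduced a parameter this env has not seen yet.
        merged.setdefault(name, {"name": name})["value"] = value
    return list(merged.values())


# Hypothetical usage: dev placeholder gets replaced, prod-only secret added.
params = overlay_parameters(
    [{"name": "db.url", "value": "dev-db.internal"}],
    {"db.url": "prod-db.internal", "db.password": "s3cret"},
)
```

The pipeline would run this once per environment, then push the result to that environment's Parameter Context of the same name, so subsequent flow-version promotions pick up local values automatically.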
Thank you,
Matt