Member since: 07-30-2019
Posts: 3396
Kudos Received: 1619
Solutions: 1001

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 419 | 11-05-2025 11:01 AM |
| | 317 | 11-05-2025 08:01 AM |
| | 452 | 11-04-2025 10:16 AM |
| | 672 | 10-20-2025 06:29 AM |
| | 812 | 10-10-2025 08:03 AM |
09-15-2025
06:36 AM
@asand3r With your ConsumeKafka processor configured with 5 concurrent tasks and a NiFi cluster with 3 nodes, you will have 15 (3 nodes X 5 concurrent tasks) consumers in your consumer group, so Kafka will assign two partitions to each consumer in that consumer group. If there are network issues, Kafka may perform a rebalance and assign more partitions to fewer consumers. (Of course, the number of consumers in the consumer group changes if you have additional ConsumeKafka processors pointing at the same topic and configured with the same consumer group id.)

Matt
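A minimal sketch of the arithmetic behind those numbers, assuming the topic has 30 partitions (which is consistent with the two-partitions-per-consumer figure above; the values are just the ones discussed in this thread):

```bash
# Figures from this thread: 3-node NiFi cluster, ConsumeKafka with 5 concurrent
# tasks, and a topic assumed to have 30 partitions.
NODES=3
TASKS=5
PARTITIONS=30

# Every node in the cluster runs its own copy of the processor.
CONSUMERS=$((NODES * TASKS))
echo "consumers in the group:  $CONSUMERS"                     # 15
echo "partitions per consumer: $((PARTITIONS / CONSUMERS))"    # 2

# If consumers drop out (e.g. network issues), Kafka rebalances the same 30
# partitions across the remaining members, so each survivor picks up more.
```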
09-12-2025
11:41 AM
@Alexm__ While I have never done anything myself with Azure DevOps pipelines, I don't see why this would not be possible.

Dev, test, and prod environments will likely have slight variations in NiFi configuration (source and target service URLs, passwords/usernames, etc.), so when designing your Process Group dataflows you'll want to take that into account and use NiFi's Parameter Contexts to define such variable configuration properties. Sensitive properties (passwords) are never passed to NiFi-Registry, so any version-controlled PG imported into another NiFi will not have the passwords set.

Once you version control that PG, you can deploy it through rest-api calls to other NiFi deployments. The first time it is deployed, it will simply import the parameter context used in the source (dev) environment. You would need to modify that parameter context in the test and prod environments to set passwords and alter any other parameters as needed by each unique environment. Once the modified parameter context of the same name exists in the other environments, promoting new versions of dataflows that use that parameter context becomes very easy: the updated dataflows will continue to use the local environment's parameter context values rather than those used in dev. If a new parameter is introduced to the parameter context, it simply gets added to the existing parameter context of the same name in the test and prod environments. So there are some considerations to build into your automated promotion of version-controlled dataflows between environments.

Helpful references: Versioning a DataFlow and Parameters in Versioned Flows.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
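As a very rough sketch of what one such pipeline step could look like, here is a curl call that imports a specific version of a version-controlled flow from NiFi-Registry into a parent process group on a target NiFi. All hosts, UUIDs, the token variable, and the version number are placeholders, and the exact payload can vary between NiFi versions, so treat this as an assumption to verify against your own NiFi's rest-api (browser developer tools with "copy as curl" are a good way to confirm it):

```bash
# Hypothetical deployment of version 3 of a version-controlled flow into a
# parent process group on a test/prod NiFi. TOKEN, host, and UUIDs are placeholders.
curl -k -X POST "https://test-nifi:8443/nifi-api/process-groups/<parent-pg-uuid>/process-groups" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data-raw '{
    "revision": { "version": 0 },
    "component": {
      "versionControlInformation": {
        "registryId": "<registry-client-uuid>",
        "bucketId":   "<bucket-uuid>",
        "flowId":     "<flow-uuid>",
        "version":    3
      }
    }
  }'
```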
09-12-2025
08:54 AM
@carange Welcome to the Cloudera Community.

Opening a community question exposes your query to anyone who accesses the Cloudera Community site, and it is a great place to ask very specific issue questions or how-to type questions. Responses may come from any community member (who may or may not be a Cloudera employee).

For more in-depth or time-sensitive issues where sharing logs or sensitive information would be very helpful, or if the suggestions and answers provided in the community are not completely solving your issue, creating a support case is the best option. Only individuals with a Cloudera product license can create support cases. With a Cloudera license you are able to raise support cases from MyCloudera that will get assigned to the appropriate support specialist for your issue.

Simply open a browser to https://lighthouse.cloudera.com/s/ and log in with your Cloudera credentials. You can then hover over the "Support" option and select "Cases". This will take you to a new page where you will see an option to "Create A Case". Select a "Technical assistance" case type and follow the prompts to provide the necessary information to submit your case details. You'll have the ability to upload images, logs, etc. to your new case. If you have issues creating a case, please reach out to your Cloudera account owner.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
09-12-2025
05:59 AM
@Alexm__ In order for NiFi to communicate with NiFi-Registry, NiFi needs to have a "NiFiFlowRegistryClient" added to the "Registry Clients" section in NiFi under Controller Settings. An SSL Context Service (in which you can define a specific keystore and truststore that may or may not be the same keystore and truststore your NiFi uses) will be needed, since a mutual TLS handshake MUST be successful between NiFi and NiFi-Registry.

So for your question, as long as there is network connectivity between your NiFi(s) and the NiFi-Registry, this can work. The "user identities" in NiFi that will be authorized to perform version control will also need to be authorized in your NiFi-Registry against specific buckets.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
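One quick way to sanity check both the network path and the mutual TLS handshake from a NiFi node is a curl call against the NiFi-Registry rest-api using a client certificate. This is only a sketch; the certificate paths, host, and port below are placeholders for your environment:

```bash
# Hypothetical connectivity / mutual TLS check from a NiFi node to NiFi-Registry.
# Certificate paths, host, and port are placeholders.
curl -v \
  --cacert /path/to/ca.pem \
  --cert   /path/to/nifi-client-cert.pem \
  --key    /path/to/nifi-client-key.pem \
  "https://nifi-registry.example.com:18443/nifi-registry-api/buckets"
# A JSON list of buckets means the TLS handshake succeeded and the presented
# identity is authorized to read those buckets; a 403 points at authorization,
# while a handshake failure points at the keystore/truststore material.
```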
09-10-2025
10:12 AM
@nifier Sharing the details of the rest-api calls you made that are not working, along with the specific Apache NiFi version being used, would be helpful in providing guidance here. What response are you getting to your rest-api calls? What do you see in the nifi-user.log and/or nifi-app.log when you execute your rest-api calls? How are you handling user authentication in your rest-api calls (certificate, bearer token, etc.)?

An example rest-api call to change a processor's run status:

https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>/run-status -X PUT -H 'Content-Type: application/json' --data-raw '{"revision":{"clientId":"<ID>","version":<version num>},"state":"<RUNNING, STOPPED, or RUN_ONCE>","disconnectedNodeAcknowledged":false}' --insecure

The above would also need a client authentication piece (see the sketch below).

What may be helpful to you is utilizing the developer tools in your web browser to capture the rest-api calls made as you perform the actions via the NiFi UI. Most developer tools give you the option to "copy as curl" the request that was made.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
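For the client authentication piece, here is a sketch of the two common options. Hosts, UUIDs, credentials, and revision values are placeholders, and the run state is fixed to STOPPED purely for illustration:

```bash
# Option 1: certificate-based (mutual TLS) authentication
curl -k --cert /path/to/user-cert.pem --key /path/to/user-key.pem \
  -X PUT "https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>/run-status" \
  -H 'Content-Type: application/json' \
  --data-raw '{"revision":{"clientId":"<ID>","version":<version num>},"state":"STOPPED","disconnectedNodeAcknowledged":false}'

# Option 2: token-based authentication (e.g. ldap-provider login), then pass the bearer token
TOKEN=$(curl -k -X POST "https://<nifinode>:<nifiport>/nifi-api/access/token" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data 'username=<user>&password=<password>')
curl -k -X PUT "https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>/run-status" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  --data-raw '{"revision":{"clientId":"<ID>","version":<version num>},"state":"STOPPED","disconnectedNodeAcknowledged":false}'
```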
09-10-2025
08:00 AM
@Alexm__ Welcome to the Cloudera Community.

NiFi-Registry provides a mechanism for version controlling NiFi Process Groups (PGs). NiFi-Registry can be configured to persist version-controlled PGs in Git rather than locally within NiFi-Registry. Authorization policies set within NiFi-Registry control who can start version control and into which Registry bucket that version-controlled flow is stored. Authorization policies also control who can deploy a flow from NiFi-Registry onto a NiFi instance/cluster.

A typical setup would have one NiFi-Registry that is accessible by all your Dev and Prod NiFi deployments. When your Dev NiFi version controls a PG, that version-controlled PG flow definition is uploaded to NiFi-Registry within a defined bucket. The PG on your Dev NiFi is now tracking against that version-controlled flow. If changes are made to the flow on your Dev NiFi, that NiFi will report "local changes" on the PG, which can then be committed as another version of that already version-controlled flow.

Flows that have been version controlled to a NiFi-Registry are NOT automatically deployed to other NiFi instances/clusters that have access to this same NiFi-Registry. A NiFi-Registry authorized user on one of those other clusters would need to initiate the loading of that version-controlled flow on each of the prod NiFis, so controlling who has access to specific NiFi-Registry buckets is important. This allows you to selectively deploy specific PGs to different prod environments. Once these flows are deployed, they will also be tracked against what is in NiFi-Registry. This means that if someone commits a newer version of a flow to NiFi-Registry, any prod environment tracking against that flow will show an indicator on the PG that a newer version is available. An authorized user would be required to initiate the change to that newer version (it is not automatically deployed).

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
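As a small illustration of that "newer version available" tracking, you can ask NiFi-Registry directly which versions exist for a flow. Host, port, bucket ID, and flow ID below are placeholders, and you would add client certificate or token authentication if your registry is secured:

```bash
# Hypothetical check against NiFi-Registry for the versions of one flow.
curl -k "https://nifi-registry.example.com:18443/nifi-registry-api/buckets/<bucket-uuid>/flows/<flow-uuid>/versions"
# Each entry includes a version number, author, comments, and timestamp. A prod
# PG tracking version 2 while this list shows a version 3 is exactly the state
# where the NiFi UI flags that a newer version is available.
```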
09-08-2025
05:41 AM
@yoonli It would be helpful if you shared the complete authorization exception you are encountering. I have a feeling your authorization exception is not related to your server certificate, but more related to your individual NiFi user.

Using a load balancer in front of your NiFi cluster requires that session affinity (sticky sessions) is enabled in your load balancer. Why? Any login-based user authentication (ldap-provider, kerberos-provider, etc.) results in a token being issued to the user and a server-side token stored on the NiFi node that issued the client token. Only the specific node in the NiFi cluster that issued the client bearer token will have the corresponding server-side token. If your load balancer does not have sticky sessions enabled, subsequent requests after obtaining the client bearer token may get directed to a different node in the cluster. Your browser will include this client token in all subsequent requests to NiFi; since the other nodes will not have the corresponding server-side token for your user, the request would result in a not-authorized response.

Possible helpful HAProxy links:
https://www.haproxy.com/blog/enable-sticky-sessions-in-haproxy
https://www.haproxy.com/solutions/load-balancing

----

Certificate-based authentication is not an issue, since the client/server mutual TLS exchange happens in every communication between client and server. This is why I suspect that your setup involves a login-based authentication method.

----

I see you configured your LB IP in the nifi.web.proxy.host property within the nifi.properties file. This property has nothing directly to do with client/user authentication. It is about making sure NiFi accepts requests destined for a different hostname/IP than the destination host that received it. Let's say you initiate a connection to a URL containing the host https://10.29.144.56/nifi/. Your HAProxy then routes that request to NiFi on host 10.29.144.58, which returns a server certificate with that server's hostname or the IP 10.29.144.58. The connection is going to be blocked because it appears as a man-in-the-middle attack: the expectation was that the request would be processed by the server 10.29.144.56; however, host 10.29.144.58 received the request. By adding 10.29.144.56 to the nifi.web.proxy.host property in NiFi, you are telling NiFi to accept requests intended for a different hostname or IP than the actual NiFi's hostname or IP.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
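A quick way to see the token affinity behavior outside of the load balancer is to request a token from one node and replay it against another node directly. Hosts and credentials below are placeholders, and this is only a sketch of the check, not a definitive test procedure:

```bash
# 1. Obtain a bearer token from node 1 (login-based authentication, e.g. ldap-provider)
TOKEN=$(curl -k -X POST "https://nifi-node1.example.com:8443/nifi-api/access/token" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data 'username=<user>&password=<password>')

# 2. Use it against the node that issued it -> expected to succeed
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://nifi-node1.example.com:8443/nifi-api/flow/current-user"

# 3. Replay the same token against a different node -> typically rejected,
#    because only node 1 holds the matching server-side token.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://nifi-node2.example.com:8443/nifi-api/flow/current-user"
```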
09-05-2025
05:16 AM
1 Kudo
@yoonli This thread is growing into multiple queries that are not directly related. Please start a new community question so the information is easier for our community members to follow when they have similar issues. Thank you, Matt
09-04-2025
01:18 PM
@VVPeter I'd encourage you to create an Apache NiFi Jira with all your details: https://issues.apache.org/jira/browse/NIFI

There was an improvement made to the Bitbucket registry client in version 2.5 (https://issues.apache.org/jira/browse/NIFI-14583). I don't see any direct correlation to your issue, but you could try upgrading to see if your issue persists before raising your bug Jira.

Thanks, Matt
09-04-2025
05:48 AM
@yoonli I see three issues:

Issue 1: You are using the wrong composite provider.

In my common setup list I correctly state that you need to be using the "composite-configurable-user-group-provider", but I see you are using the "composite-user-group-provider" class. However, some of the confusion comes from the example I copied from the Apache NiFi documentation here: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#composite-file-and-ldap-based-usersgroups While the NiFi doc example uses the "composite-configurable-user-group-provider" (class name and properties are correct), the provider "identifier" still says "composite-user-group-provider", which makes this confusing. So it looks like you missed the difference in class name. Since the "file-user-group-provider" is a configurable provider (meaning users/groups can dynamically be added and removed via the NiFi UI), it must be called by a provider that supports a configurable provider.

So you'll need to switch from using:

<userGroupProvider>
    <identifier>composite-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.CompositeUserGroupProvider</class>
    <property name="Configurable User Group Provider">file-user-group-provider</property>
    <property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>

to using:

<userGroupProvider>
    <identifier>composite-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
    <property name="Configurable User Group Provider">file-user-group-provider</property>
    <property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>

The "identifier" can be any string you want, but the "class" must align with the properties.

Issue 2: There is a mismatch between the user identity configured in your file-user-group-provider and the user name shown in the logs coming from your user authentication.

file-access-policy-provider: cn=nifi,ou=users,dc=nifi,dc=local
nifi-user.log (source of truth): cn=nifi,ou=users,dc=baoviet,dc=local

These "user identities" do not match. Also keep in mind that the file-access-policy-provider can only seed policies for "user identities" that are being returned by one of your configured user-group-providers. Where are you expecting this user's DN to be returned from? (A quick way to double check the exact identity string NiFi resolves is sketched below, after this reply.)

Issue 3: Your ldap-user-group-provider is still misconfigured. The following is not a valid configuration in this provider:

<property name="User Search Filter">(cn={0})</property>

You can only use "{0}" in the ldap-provider login provider within the login-identity-providers.xml file. That login provider will substitute the {0} with the username entered in the NiFi login UI. The intent of the ldap-user-group-provider is to return many user identities from your LDAP; the above filter (cn={0}) would be treated as a literal and return no results.

Also keep in mind that you have configured your ldap-user-group-provider to return the LDAP value from the "cn" attribute as the "user identity", which is typically not a full user DN like the one we see in the nifi-user.log you shared:

<property name="User Identity Attribute">cn</property>

I also see you added this property to your ldap-user-group-provider, which is NOT a valid property there:

<property name="Identity Strategy">USE_USERNAME</property>

That property only exists in the ldap-provider found in the login-identity-providers.xml file. This is where you probably still have it set to "USE_DN", resulting in the full DN "user identity" you are seeing in the nifi-user.log instead of just "nifi", which I assume you are typing as the username in the NiFi login window.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue(s) or answering your question(s), please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
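A quick way to confirm the exact "user identity" string NiFi ends up with after login (and therefore what your file-access-policy-provider and user-group-providers must match) is to watch nifi-user.log while you log in through the NiFi UI. The log path and grep pattern below are placeholders for this particular environment:

```bash
# Hypothetical check: watch the user log during a UI login and look for the
# identity string NiFi resolves for your user. Log path is a placeholder.
tail -f /opt/nifi/logs/nifi-user.log | grep -i "cn=nifi"
# Whatever full string appears here (e.g. cn=nifi,ou=users,dc=baoviet,dc=local)
# is what must appear, character for character, in the identities configured in
# your authorizers.xml providers.
```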