Member since: 07-30-2019
Posts: 3387
Kudos Received: 1617
Solutions: 998

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 337 | 10-20-2025 06:29 AM |
| | 477 | 10-10-2025 08:03 AM |
| | 343 | 10-08-2025 10:52 AM |
| | 370 | 10-08-2025 10:36 AM |
| | 400 | 10-03-2025 06:04 AM |
09-05-2025
02:24 AM
Thank you @MattWho, I’m currently using nifi-atlassian-nar-2.5.0-SNAPSHOT.nar, even though my NiFi version is 2.4.0.
09-03-2025
01:01 PM
@PriyankaMondal There are significant differences between the Apache NiFi 1.x and Apache NiFi 2.x major releases:
- Deprecated and removed processors, controller services, and reporting task components
- Some components moved to new NARs
- Deprecated and removed NiFi Templates
- Deprecated and removed NiFi Variable Registry

This means you cannot simply move your flow.json.gz from Apache NiFi 1.23.2 to Apache NiFi 2.x. First you should update your dataflows so they no longer use any of the components deprecated in Apache NiFi 1.x. I recommend first upgrading to Apache NiFi 1.28 so you have the latest deprecation logging. Apache NiFi 1.28 produces a deprecation log that will tell you all the deprecated components you are currently using on your dataflow canvas. Take steps to remove these components or replace them with alternative components that are still available in Apache NiFi 2.x. Examples:
- The JoltTransformJson processor was included in the nifi-standard-nar in Apache NiFi 1.x, but has moved to a nifi-jolt-nar in Apache NiFi 2.x, so the class name has changed. You'll need to add the Apache NiFi 2.x class version of the JoltTransformJson processor to the canvas, reconfigure it for your dataflow, and delete the ghosted JoltTransformJson processor (the dashed line around it means NiFi does not know that class). While you manually changed the version in your flow.json.gz, you did not change the class path as needed, resulting in the processor still being ghosted.
- The ConvertAvroToJson processor was deprecated in NiFi 1.x and removed in Apache NiFi 2.x. You would need to replace it with a ConvertRecord processor (available in Apache NiFi 1.x and 2.x) configured to use an Avro Reader and a JSON Writer.
- The NiFi Variable Registry was removed. If you are using any NiFi variables in your processor configuration in Apache NiFi 1.x, you'll need to modify your dataflows to use NiFi parameters instead (parameters exist in Apache NiFi 1.x and 2.x).
- Templates were deprecated in NiFi 1.x and replaced with Flow Definitions, and were removed in NiFi 2.x. You would need to remove all templates saved in NiFi before moving to Apache NiFi 2.x.

The above is just a short list; refer to the deprecation log produced by Apache NiFi 1.28 to see all deprecated components you may have been using in your dataflows. I do wish there was an easier way to move from Apache NiFi 1.x to 2.x, but depending on your use of deprecated features and changed component classes, there may be anywhere from a little to a lot of effort needed to prepare your NiFi 1.x dataflows for migration to NiFi 2.x.

For Cloudera Flow Management license holders: Cloudera has built a Cloudera Flow Management Migration Tool that automates many of the migration steps (swapping processors when alternatives exist, changing processor classes to new classes, converting templates to flow definitions, converting NiFi variables to NiFi parameters, etc.). While there is still no direct upgrade possible from Cloudera Flow Management 2.1.7 (Apache NiFi 1.x based) to Cloudera Flow Management 4.10 (Apache NiFi 2.x based), this migration tool takes a lot of the manual work out of preparing your flow.json.gz for the new major release.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
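Before starting the migration, it can help to inventory which component classes your existing flow actually uses so you can compare them against the NiFi 2.x component list and the 1.28 deprecation log. Below is a minimal sketch, assuming a NiFi 1.x flow.json.gz whose JSON nests process groups under a "rootGroup" key; the file path and key names are assumptions to verify against your own export, not a definitive tool.

```python
import gzip
import json
from collections import Counter

# Hypothetical path to a NiFi 1.x flow definition; adjust to your conf directory.
FLOW_PATH = "conf/flow.json.gz"

def collect_types(group, counts):
    """Recursively tally processor and controller service classes in a process group."""
    for proc in group.get("processors", []):
        counts[proc.get("type", "unknown")] += 1
    for cs in group.get("controllerServices", []):
        counts[cs.get("type", "unknown")] += 1
    for child in group.get("processGroups", []):
        collect_types(child, counts)

with gzip.open(FLOW_PATH, "rt", encoding="utf-8") as fh:
    flow = json.load(fh)

counts = Counter()
# The root process group is commonly stored under "rootGroup"; verify against your file.
collect_types(flow.get("rootGroup", {}), counts)

for component_type, count in sorted(counts.items()):
    print(f"{count:4d}  {component_type}")
```

Cross-checking the printed class names against the deprecation log tells you which components need to be replaced or reconfigured before the move to 2.x.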
09-03-2025
10:38 AM
@Virt_Apatt I don't know enough about your use case to make any other suggestions. All I know is that your user(s) supply some custom date that you have NiFi add 10 days to before running an Oracle query to get some result set returned to NiFi. NiFi is typically used to build dataflows that are always in the running state, so users do not need to continuously stop, modify, and restart a dataflow/component.

What is the significance of this "custom date" that starts your dataflow? Is there any pattern to these custom dates? Can the next custom date be derived from the response to the previous Oracle query? How often does this dataflow get executed?

Just some examples (there are many NiFi processor components that can fetch content from external sources):
- You could start your dataflow with a GetSFTP or GetFile processor that checks a specific source SFTP server or local directory for a specific filename. In that file is your custom date. You then build your dataflow to extract that custom date from the consumed file and execute your Oracle query. This way your NiFi is always running and just waiting for the next file to show up on the SFTP server or in that local directory it keeps checking.
- Or maybe set up an HTTP listener (ListenHTTP or HandleHTTPRequest) that listens for an HTTP POST containing the custom date needed for your running dataflow (see the sketch after this post).

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
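As an illustration of the second option, here is a minimal sketch of a client pushing a custom date to a ListenHTTP processor. The host and port are assumptions for the example, and "contentListener" is only the processor's default Base Path, so adjust all three to match your flow.

```python
import requests

# Assumed ListenHTTP endpoint: host/port come from the processor's configuration,
# and "contentListener" is its default Base Path; adjust both to match your flow.
NIFI_LISTEN_URL = "http://nifi-host.example.com:9999/contentListener"

custom_date = "2025-09-15"

# POST the date as the FlowFile content; the dataflow downstream of ListenHTTP
# can extract it (e.g. with ExtractText) and feed it into the Oracle query.
response = requests.post(
    NIFI_LISTEN_URL,
    data=custom_date,
    headers={"Content-Type": "text/plain"},
    timeout=10,
)
response.raise_for_status()
print("Accepted:", response.status_code)
```

With this pattern the flow stays running and simply waits for the next POST, so nobody has to stop and reconfigure components between runs.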
08-25-2025
09:18 PM
You did it! I changed my tracking strategy to "Tracking timestamps" and it now populated the "View State" window. Thank you very much for your assistance!
08-25-2025
12:57 PM
@GKHN_ As I described in my first response, authentication and authorization are two different processes. From your comment it sounds like authentication is working fine for both of your users, while authorization is failing for your non-admin user, so the issue is within the authorization phase. I assume both of your users are authenticating via LDAP?

In your ldap-provider in login-identity-providers.xml you have the "Identity Strategy" set to "USE_DN". With this setting the user's full LDAP DN will be used as the user identity string after successful authentication. This means the entire DN is being passed to the authorizer to look up whether that full DN has been authorized for the requested end-point NiFi policy.

I see you have your initial admin identity manually defined in the file-user-group-provider and the file-access-policy-provider: CN=NIFIUSER,OU=Userpro,OU=CUsers,OU=Company,DC=company,DC=entp. So when you log in via LDAP with this user's LDAP username and password, the user's entire DN is passed to the authorizer, and the file-access-policy-provider has set up all admin-related NiFi policies for this initial admin user identity.

I also see from the shared authorizers.xml that the only user-group-provider the file-access-policy-provider is configured to use is the file-user-group-provider. The file-user-group-provider requires the admin user to manually add additional user identities from within the NiFi UI (remember that with your current ldap-provider login provider, all your LDAP user identities are going to be full DNs).

1. As the admin user, go to the NiFi global menu and select "Users".
2. From the NiFi Users UI, select the "+" to add a new user.
3. Enter the full DN for your second user (case sensitive). Unless you have added any groups, your list of groups will be blank.

Now that you have added this second user identity, you'll need to start authorizing it for the various policies it needs. In order to access the NiFi UI, all users must be authorized to "view the user interface":

1. From the same NiFi global menu mentioned above, select "Policies" this time.
2. From the "Access Policies" UI that appears, select "view the user interface" from the policy list pull-down.
3. Click the icon to the right that looks like a person with a "+".
4. Find the user identity you just added, check the box, and click the "Add" button.

Now this user can access the NiFi UI. There are other policies this user will need before they can start building dataflows on the UI. NiFi allows for very granular authorizations, but at a minimum the user will need to be authorized on the process group in which they will build their dataflows. Not all policies are defined from the "Access Policies" UI in the global menu; the component-level policies are defined directly via the individual component (keep an eye out for the "key" icon). From the "Operation" panel directly on the NiFi canvas you can set policies on the currently selected component. For example, with the root Process Group (PG) selected, clicking the key icon shows all the access policies users can be authorized for. You'll need to select, one by one, each policy your user will need and add the user to it.

The above will let you set up access for your additional users using the file-user-group-provider you have configured in your authorizers.xml. A sketch of performing the same steps through the NiFi REST API follows this post.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
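For completeness, the same user-and-policy setup can be scripted against the NiFi REST API. This is only a rough sketch under several assumptions: an admin clientAuth certificate (or other admin credential), the example host, and the payload shapes shown here, which you should verify against your instance's REST API documentation before relying on them.

```python
import requests

# Assumptions for this sketch: an admin clientAuth certificate authorized for the
# tenants and policies endpoints, and a NiFi instance at this base URL.
BASE = "https://nifi-host.example.com:8443/nifi-api"
ADMIN_CERT = ("admin.crt", "admin.key")   # hypothetical admin certificate/key pair
CA_BUNDLE = "ca.pem"
NEW_USER_DN = "CN=SECONDUSER,OU=Userpro,OU=CUsers,OU=Company,DC=company,DC=entp"

# 1) Create the user entry (equivalent to the "+" button in the Users UI).
user = requests.post(
    f"{BASE}/tenants/users",
    json={"revision": {"version": 0}, "component": {"identity": NEW_USER_DN}},
    cert=ADMIN_CERT, verify=CA_BUNDLE,
).json()

# 2) Fetch the "view the user interface" policy (action "read" on resource "/flow"),
#    append the new user, and write it back with its current revision.
policy = requests.get(
    f"{BASE}/policies/read/flow", cert=ADMIN_CERT, verify=CA_BUNDLE
).json()
policy["component"]["users"].append({"id": user["id"]})
resp = requests.put(
    f"{BASE}/policies/{policy['id']}",
    json={"revision": policy["revision"], "component": policy["component"]},
    cert=ADMIN_CERT, verify=CA_BUNDLE,
)
resp.raise_for_status()
```

Component-level policies (for example on a process group) follow the same fetch-modify-put pattern against their own policy resources.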
08-25-2025
05:20 AM
@HoangNguyen Keep in mind that the Apache NiFi Variable Registry no longer exists in Apache NiFi 2.x releases and there is no more development of the Apache NiFi 1.x versions. NiFi Parameter Contexts, which were introduced in later versions of Apache NiFi 1.x, provide similar capability going forward and should be used instead of the Variable Registry. You'll be forced to transition to Parameter Contexts in order to move to Apache NiFi 2.x versions.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
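To gauge how much conversion work is ahead, one rough approach is to scan processor property values in an exported flow definition for ${...} references. This is only a heuristic (Expression Language functions and FlowFile attributes use the same syntax, not just variables), and the file path and JSON layout below are assumptions to verify against your own export.

```python
import gzip
import json
import re

FLOW_PATH = "conf/flow.json.gz"   # assumed location of the NiFi 1.x flow definition
VAR_REF = re.compile(r"\$\{([^}]+)\}")

def scan(group, hits):
    """Collect ${...} references from processor property values, recursing into child groups."""
    for proc in group.get("processors", []):
        for prop, value in (proc.get("properties") or {}).items():
            if isinstance(value, str):
                for name in VAR_REF.findall(value):
                    hits.append((proc.get("name"), prop, name))
    for child in group.get("processGroups", []):
        scan(child, hits)

with gzip.open(FLOW_PATH, "rt", encoding="utf-8") as fh:
    flow = json.load(fh)

hits = []
scan(flow.get("rootGroup", {}), hits)
# Review each hit manually: Expression Language functions and FlowFile attributes also
# use ${...}, so only true Variable Registry references should become #{...} parameters.
for processor_name, prop, name in hits:
    print(f"{processor_name}: property '{prop}' references ${{{name}}}")
```

Each confirmed variable reference then becomes a Parameter Context entry referenced as #{name} in the processor configuration.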
08-11-2025
10:26 AM
@AlokKumar For user authentication using OpenID Connect, see the "OpenID Connect" section of the NiFi documentation.

If you found that any of the provided solution(s) assisted you with your query, please take a moment to login and click "Accept as Solution" below each response that helped.

Thank you, Matt
08-07-2025
08:57 AM
Hi Matt, just to clarify one point, specifically in the context of NiFi REST API 2.0+: is there an endpoint where we can exchange an Azure AD access token for a NiFi access token, similar to a token exchange flow? Or, if such a direct token exchange is not supported (i.e., the token must always be obtained via browser redirection to the NiFi URL), could you please confirm that this is indeed the case? Thanks in advance!
08-03-2025
11:49 PM
So I stumbled on this tool called Data Flow Manager (DFM) while working on some NiFi stuff, and… I’m kinda blown away?
Been using NiFi for a few years. Love it or hate it, you know how it goes. Building flows, setting up controller services, versioning… it adds up. Honestly, never thought I’d see a way around all that.
With DFM, I literally just picked the source, target, and a bit of logic. No canvas. No templates. No groovy scripting. Hit deploy, and the flow was live in under a minute.
08-01-2025
06:39 AM
@Krish98 When you secure NiFi (HTTPS enabled), in the TLS exchange NiFi will either REQUIRE (if no additional methods of authentication are configured) or WANT (when additional methods of authentication are configured, like SAML) a clientAuth certificate. This is necessary for NiFi clusters to work: even when one node communicates with another, the nodes need to be authenticated (done via a mutual TLS exchange) and authorized (by authorizing those clientAuth certificates for the necessary NiFi policies).

When accessing the NiFi UI, a mutual TLS exchange happens with your browser (client). If the browser does not respond with a clientAuth certificate, NiFi will attempt the next configured auth method; in your case that would be SAML.

Mutual TLS with trusted clientAuth certificates removes the need to obtain and renew tokens, and simplifies automation tasks with the REST API, whether interacting via NiFi-built dataflows or via external interactions with the NiFi REST API. The clientAuth certificate DN is what is used as the user identity (the final user identity that needs to be authorized is derived from the DN after any Identity Mapping Properties manipulation). Just like your SAML user identities, your clientAuth-certificate-derived user identity needs to be authorized for whichever NiFi policies the requested REST API endpoint needs. Tailing the nifi-user.log while making your REST API calls will show you the derived user identity and the missing policy when a request is not authorized. A short example of a certificate-based REST API call follows this post.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
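Here is a minimal sketch of a token-free REST API call using a trusted clientAuth certificate. The host, certificate filenames, and CA bundle path are assumptions for the example; the certificate's DN (after any identity mappings) must already be authorized for the policies the endpoint requires.

```python
import requests

# Assumed values: replace with your NiFi host and the client certificate/key pair whose
# DN has been authorized for the policies the endpoint requires.
BASE = "https://nifi-host.example.com:8443/nifi-api"
CLIENT_CERT = ("automation-client.crt", "automation-client.key")
CA_BUNDLE = "nifi-ca.pem"

# No /access/token call is needed: the mutual TLS handshake authenticates the client,
# and NiFi derives the user identity from the certificate DN (plus any identity mappings).
resp = requests.get(f"{BASE}/flow/about", cert=CLIENT_CERT, verify=CA_BUNDLE, timeout=30)
resp.raise_for_status()
print(resp.json())
```

If a call like this returns 403, tail nifi-user.log as described above to see the derived user identity and the specific policy it is missing.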