Member since: 11-17-2021
Posts: 1123
Kudos Received: 254
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 458 | 11-05-2025 10:13 AM |
|  | 323 | 10-16-2025 02:45 PM |
|  | 648 | 10-06-2025 01:01 PM |
|  | 569 | 09-24-2025 01:51 PM |
|  | 473 | 08-04-2025 04:17 PM |
09-01-2025
05:59 AM
Hello dear support team, I’m experiencing the same issue as the original poster and others in the thread. Could you please assist me in updating the email address associated with my account? Thank you very much!
08-25-2025
12:57 PM
@GKHN_ As I described in my first response, authentication and authorization are two different processes. From your comment it sounds like authentication is working fine for both of your users while authorization is failing for your non-admin user, so the issue is within the authorization phase. I assume both of your users are authenticating via LDAP?

In the ldap-provider of your login-identity-providers.xml you have "Identity Strategy" set to "USE_DN". With this setting, the user's full LDAP DN is used as the user identity string after successful authentication. That means the entire DN is passed to the authorizer to look up whether it has been authorized for the requested NiFi endpoint policy.

I see you have your initial admin identity manually defined in the file-user-group-provider and the file-access-policy-provider: CN=NIFIUSER,OU=Userpro,OU=CUsers,OU=Company,DC=company,DC=entp. So when you log in via LDAP with this user's LDAP username and password, the user's entire DN is passed to the authorizer, and the file-access-policy-provider has set up all admin-related NiFi policies for this initial admin user identity.

I also see from the shared authorizers.xml that the only user-group-provider the file-access-policy-provider is configured to use is the file-user-group-provider. The file-user-group-provider requires the admin user to manually add additional user identities via the NiFi UI (remember that with your current ldap-provider, all of your LDAP user identities are going to be full DNs).

To add and authorize your second user:
1. As the admin user, go to the NiFi global menu and select "Users".
2. From the NiFi Users UI, select the "+" to add a new user.
3. Enter the full DN for your second user (case sensitive). Unless you have added any groups, your list of groups will be blank.
4. Now that the second user identity exists, you'll need to authorize it for the various policies it needs. In order to access the NiFi UI, all users must be authorized to "view the user interface". From the same NiFi global menu, select "Policies" this time.
5. From the "Access Policies" UI that appears, select "view the user interface" from the policy pull-down.
6. Click the icon to the right that looks like a person with a "+", find the user identity you just added, check the box, and click the "Add" button. Now this user can access the NiFi UI.

There are other policies this user will need before they can start building dataflows in the UI. NiFi allows for very granular authorization, but at a minimum the user will need to be authorized on the process group in which they will build their dataflows. Not all policies are defined from the "Access Policies" UI in the global menu; component-level policies are defined directly on the individual component (keep an eye out for the "key" icon). From the "Operate" panel on the NiFi canvas you can set policies on the currently selected component. For example, with the root Process Group (PG) selected, clicking the key icon shows all the access policies users can be authorized for; select each one your user needs and add the user to it.

The above will let you set up access for your additional users with the file-user-group-provider you have configured in your authorizers.xml. For reference, a minimal configuration sketch follows below.
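The sketch assumes stock Apache NiFi class and property names; identifiers, file paths, and the other required properties come from your own files, so treat this as illustrative only:

```xml
<!-- login-identity-providers.xml (excerpt) -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <!-- USE_DN: the user's full LDAP DN becomes the identity string -->
    <property name="Identity Strategy">USE_DN</property>
    <!-- ...connection and search properties omitted... -->
</provider>

<!-- authorizers.xml (excerpt) -->
<userGroupProvider>
    <identifier>file-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
    <property name="Users File">./conf/users.xml</property>
    <!-- must match the authenticated identity exactly, including case -->
    <property name="Initial User Identity 1">CN=NIFIUSER,OU=Userpro,OU=CUsers,OU=Company,DC=company,DC=entp</property>
</userGroupProvider>

<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">CN=NIFIUSER,OU=Userpro,OU=CUsers,OU=Company,DC=company,DC=entp</property>
</accessPolicyProvider>
```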
Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt
08-08-2025
04:22 PM
@Malrashed, if you are using Cloudera Runtime 7.1.9, you can use either CDS 3.3 or CDS 3.5 Powered by Apache Spark as an add-on service. For more details, you can refer to this document. Please note that CDS 3.3 Powered by Apache Spark 3.3.x and CDS 3.5 Powered by Apache Spark 3.5.x are distributed as parcels (refer here for additional download details). There are no external Custom Service Descriptors (CSDs) for Spark3 or Livy for Spark3 because they are already part of Cloudera Manager 7.11.3. In Cloudera Runtime 7.1.9, Spark 2 is the default; if you need Spark 3, it must be added as an add-on service. Note that Spark 2 is deprecated in Cloudera Runtime 7.1.9, and starting with Cloudera Runtime 7.3.x, Spark 3 becomes the default version.
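For a quick sanity check after activating the parcel, something like the following should work on a gateway host (illustrative only; the exact parcel directory name varies by CDS build):

```shell
# Activated parcels live under the Cloudera parcel root;
# look for a SPARK3-branded entry
ls /opt/cloudera/parcels/

# CDS 3 installs spark3-* commands alongside the default Spark 2 ones,
# so this should print a Spark 3.x version banner
spark3-submit --version
```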
08-08-2025
03:54 PM
@willx @ayushi Hi! By any chance do you have some insights here? Thanks!
08-08-2025
10:21 AM
Hive logging is configured in /etc/hive/conf/hive-log4j2.properties. Look for these properties: property.hive.log.dir and property.hive.log.file. Together they give the log location you are looking for. Thanks, -JMP
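For illustration, the relevant lines typically look like this (the values shown are the upstream Apache Hive defaults; your distribution may override them):

```properties
# /etc/hive/conf/hive-log4j2.properties (excerpt)
property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
property.hive.log.file = hive.log
# logs land at <hive.log.dir>/<hive.log.file>
```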
08-04-2025
04:17 PM
Welcome to the Cloudera Community! To help you get the best possible solution, I have sent you a DM with further steps. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
08-03-2025
11:49 PM
So I stumbled on this tool called Data Flow Manager (DFM) while working on some NiFi stuff, and… I’m kinda blown away?
Been using NiFi for a few years. Love it or hate it, you know how it goes. Building flows, setting up controller services, versioning… it adds up. Honestly, never thought I’d see a way around all that.
With DFM, I literally just picked the source, target, and a bit of logic. No canvas. No templates. No Groovy scripting. Hit deploy, and the flow was live in under a minute.
07-31-2025
10:35 PM
Here are some highlights from the month of July:
WEBINAR: The Power of Streaming in Real-Time AI and Analytics (Register Now)
VIRTUAL EVENT: The latest innovations in data, analytics & AI (Watch Now On Demand)
Check out the FY25 Cloudera Meetup Events Calendar for upcoming & past event details!
60 New support questions
1095 New members
We would like to recognize the community members and employees below for their efforts over the last month in providing community solutions.
See all our top participants on the Top Solution Authors leaderboard, and find all the other leaderboards on our Leaderboards and Badges page.
@MattWho @vats @mburgess @upadhyayk04 @Boris G @yagoaparecidoti @BobKing
Share your expertise and answer some of the open questions below. Also, be sure to bookmark the unanswered questions page to find additional open questions.
| Unanswered Community Post | Components / Labels |
|---|---|
| I'm using Apache NiFi 2.x with Python-based custom processors. I have two different PythonProcessor scripts (in /python/extensions) with different logic. However, NiFi always runs only the first script's logic, even when I configure the second script in a different processor. | Apache NiFi |
| nifi-env.sh file is empty in 2.4.0. Upgrade issue in EKS | Apache NiFi |
| Error generating aggregated logs for Spark Applications on Cloudera CDP 7.2.18 | Apache Spark, Cloudera Data Platform (CDP) |
| Issue with JoinEnrichment Processor | Apache NiFi |
| Issue in upgrading nifi from 2.0.0 M4 to 2.4.0 | Apache NiFi |
07-30-2025
07:17 AM
@justloseit NiFi process groups are just logical containers for processors; a process group does not itself run or execute. Selecting "Start" on a process group triggers the start of all the components within that process group. In your case it sounds like you have set up cron scheduling on your ingest/starting processor(s) within the process group. All processors downstream of that source should be set to run all the time, not on a cron schedule.

So what you are really looking for is how long the processors within that process group took to process all produced FlowFiles to the point of termination? Besides looking at the lineage data for each FlowFile that traverses all the processors in a process group, I can't think of another way to get that data.

Take a look at the SiteToSiteProvenanceReportingTask available in Apache NiFi. It allows you to send the provenance data (which can be a lot of data, depending on the size of your dataflows and the number of FlowFiles being processed) via NiFi's Site-to-Site protocol to another NiFi instance (I would recommend a separate, dedicated NiFi to receive this data). You can then build a dataflow to process that data however you want, retain the information you need, or send it to an external storage/processing system; a rough post-processing sketch follows below.
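As one example of what that downstream processing could look like, here is a minimal sketch that computes per-FlowFile transit time from a batch of the task's JSON events. The field names ("entityId", "eventType", "timestampMillis") and the file name "provenance_batch.json" are my assumptions; verify them against the records your instance actually emits.

```python
import json
from collections import defaultdict

def flowfile_durations(events):
    """Map each FlowFile UUID to millis between its first event and its DROP."""
    timeline = defaultdict(list)
    for event in events:
        # Assumed schema: entityId = FlowFile UUID, timestampMillis = event time
        timeline[event["entityId"]].append(
            (event["timestampMillis"], event["eventType"])
        )
    durations = {}
    for uuid, seen in timeline.items():
        seen.sort()  # order events by timestamp
        # Only FlowFiles that reached termination (a DROP event) are complete.
        if any(etype == "DROP" for _, etype in seen):
            durations[uuid] = seen[-1][0] - seen[0][0]
    return durations

if __name__ == "__main__":
    # "provenance_batch.json" stands in for wherever your receiving
    # dataflow lands each batch of reported events.
    with open("provenance_batch.json") as f:
        for uuid, millis in flowfile_durations(json.load(f)).items():
            print(f"{uuid}: {millis} ms")
```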
Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on one or more of them. Thank you, Matt