Member since: 07-30-2019
Posts: 3426
Kudos Received: 1631
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 401 | 01-13-2026 11:14 AM |
| | 775 | 01-09-2026 06:58 AM |
| | 795 | 12-17-2025 05:55 AM |
| | 856 | 12-15-2025 01:29 PM |
| | 709 | 12-15-2025 06:50 AM |
01-21-2025
06:16 AM
@jirungaray Cloudera Flow Management (based on Apache NiFi) provides multiple methods for managing user authorization: internally within NiFi via the File-Access-Policy-Provider, and externally via Apache Ranger. There is no built-in mechanism for automatically setting up authorization policies for users or groups, with the exception of the Initial Admin and Initial NiFi Node authorizations. Many of the authorization policies are tied directly to the components added to the canvas, and those components are assigned unique IDs, which makes it impossible to create policies before the components exist.

File-Access-Policy-Provider: This provider uses a file on disk (authorizations.xml) to persist authorization policies. The file is loaded when NiFi starts, so it is possible to manually generate it and have NiFi load it on startup. Also, as you mentioned, you could script the authorization creation through NiFi REST API calls (a rough sketch follows at the end of this post).

Ranger provider: This moves authorization responsibility over to Apache Ranger. Policies set up within Ranger are downloaded by the NiFi nodes, where they are enforced locally.

No matter which authorizer you choose, authorizations are easiest to manage via groups. Users typically set up LDAP groups for the various NiFi roles (admins, team 1, team 2, etc.) and make specific users members of those groups. This simplifies authorization since you can authorize the groups instead of individual users: simply adding or removing a user from one of the authorized groups grants or removes access to the NiFi resource identifier (NiFi policy). The ldap-user-group-provider can be added to the NiFi authorizers.xml to automatically sync user and group identities from your AD/LDAP, further simplifying management over the file-user-group-provider method, which requires manually adding user and group identifiers to NiFi.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
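If you do script policy creation against the REST API, the general shape of the call looks something like the sketch below. Treat it strictly as a rough illustration: the host, token handling, CA path, group ID, and exact payload fields here are assumptions, so verify them against the REST API documentation bundled with your NiFi release before using anything like this.

```python
# Rough sketch only: creating a NiFi access policy for a group via the REST API.
# Host, token, CA path, and group ID are placeholders; confirm the endpoint and
# payload fields against the REST API docs for your NiFi version.
import requests

NIFI_API = "https://nifi.example.com:8443/nifi-api"   # placeholder host
TOKEN = "<bearer-token>"                              # e.g. obtained during login
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def create_group_policy(action: str, resource: str, group_id: str) -> dict:
    """Grant 'action' ("read" or "write") on 'resource' to an existing NiFi group."""
    payload = {
        "revision": {"version": 0},
        "component": {
            "action": action,                  # "read" or "write"
            "resource": resource,              # e.g. "/flow" or "/process-groups/<id>"
            "userGroups": [{"id": group_id}],  # group identity must already exist in NiFi
        },
    }
    resp = requests.post(f"{NIFI_API}/policies", json=payload,
                         headers=HEADERS, verify="/path/to/nifi-ca.pem")
    resp.raise_for_status()
    return resp.json()

# Example: allow the group (looked up by its NiFi ID) to view the UI.
# create_group_policy("read", "/flow", "<nifi-group-id>")
```

Keep in mind that resources such as /process-groups/&lt;id&gt; only exist once the component is on the canvas, which is why this kind of scripting can only run after the flow has been built.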
01-17-2025
07:37 AM
@MattWho Thank you for the explanation. Now I understand that MergeRecord uses the schema information to determine which file each FlowFile gets merged into. I'll consider increasing "Minimum Number of Records" as you recommended. Thanks,
01-16-2025
05:31 AM
@Eslam Welcome to the community. In order to get helpful answers, you'll need to provide more detail around your use case. NiFi provides many processors for connecting to various services on external hosts. You can find the list of default processors available with each Apache NiFi release here:
NiFi 1.x release: https://nifi.apache.org/docs/nifi-docs/
NiFi 2.x release: https://nifi.apache.org/components/
At the most basic level you have processors like GetSFTP and ListSFTP / FetchSFTP, but there are also processors for connecting to SMB, Splunk, REST APIs, SNMP, FTP, databases, Kafka, Hive, etc. on external servers. Look through the components list in the documentation for Get, List, Fetch, and Query type processors to see if any of them meet your use case needs.
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.
Thank you, Matt
01-15-2025
10:46 PM
Thanks @MattWho. I'm checking the MergeContent option with our data volume; if it works, we will go with it. Thanks for the help/suggestion.
01-10-2025
01:58 AM
I changed my config to this: This seems to do the trick.
01-07-2025
09:29 AM
@ShellyIsGolden 500K+ files is a lot to list, and on subsequent runs there is a lookup against that listing to find new files. A few questions first:
1. How is your ListSFTP processor's scheduling configured?
2. With the initial listing, how long does it take to output the 500K+ FlowFiles from the time the processor is started?
3. When files are added to the SFTP server, are they added using a dot-rename method?
4. Is the last modified timestamp being updated on the files as they are being written to the SFTP server?

When the processor is executed for the first time it will list all files regardless of the configured "Entity Tracking Time Window" value. Subsequent executions will only list files with a last modified timestamp within the configured "Entity Tracking Time Window" value, so accurate last modified timestamps are important. With the initial listing of a new processor (or a copy of an existing processor) there is no step to check the listed files against the cache entries to see whether a file has never been listed before or whether a listed file has changed in size since it was last listed. This lookup and comparison does happen on subsequent runs and can use considerable heap (a simplified sketch of this tracking idea is included below). Do you see any Out of Memory (OOM) exceptions in your NiFi app logs?

Depending on how often the processor executes, consider reducing the configured "Entity Tracking Time Window" value so that fewer files are listed in the subsequent executions that need to be looked up. Set it to what is needed with a small buffer between each processor execution. Since it sounds like you have your processor scheduled to execute every 1 minute, maybe try setting this to 30 minutes instead to see what impact it has.

When you see the issue, does the processor show an active thread in its upper right corner that never seems to go away? When the issue appears, rather than copying the processor, what happens if you simply stop the processor (make sure all active threads complete, so no active threads number shows in the upper right corner of the processor) and then just restart it?

In the latest version of Apache NiFi, a "Remote Poll Batch Size" property (defaults to 5000) was added to the ListSFTP processor, which may help here considering the tremendous number of files being listed in your case.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
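To illustrate why the subsequent-run lookup gets expensive and why a tighter "Entity Tracking Time Window" helps, here is a simplified sketch of the tracking idea. This is not the actual ListSFTP implementation (the real processor keeps its tracking state in a cache service and compares more attributes); it only shows the core concept that entries outside the window are never compared, so a smaller window means fewer lookups and less memory per run.

```python
# Simplified illustration of the entity-tracking idea (not the actual ListSFTP code).
# Only entries whose last-modified time falls inside the tracking window are
# compared against the cache, so a smaller window means fewer lookups per run.
import time

def list_new_or_changed(remote_listing: dict, cache: dict, window_seconds: int) -> list:
    """remote_listing and cache map filename -> (size, last_modified_epoch).
    Returns the filenames that would be emitted as FlowFiles on this run."""
    cutoff = time.time() - window_seconds
    to_emit = []
    for name, (size, mtime) in remote_listing.items():
        if mtime < cutoff:
            continue  # outside the tracking window: never listed or compared
        cached = cache.get(name)
        if cached is None or cached != (size, mtime):
            to_emit.append(name)        # new file, or its size/timestamp changed
            cache[name] = (size, mtime)
    return to_emit
```

With a 1-minute run schedule and accurate timestamps, a 30-minute window means each run only has to evaluate files modified in the last half hour instead of the full 500K+ listing.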
01-07-2025
08:04 AM
You can view the packaged version of Parquet in this pom: ./nifi-nar-bundles/nifi-parquet-bundle/nifi-parquet-processors/pom.xml
01-07-2025
07:14 AM
@Bern I suggest starting a new community question with the full error stack trace you are seeing. Your exception seems different from the one discussed in this community question, which is: Failure is due to java.lang.IllegalArgumentException: A HostProvider may not be empty! Your exception is: Failure is due to org.apache.nifi.processor.exception.TerminatedTaskException

A few observations and things you may want to provide details around in your new community post:
1. The version of Apache NiFi you are using was released ~6 years ago. You should really consider upgrading to take advantage of lots of bug fixes, performance improvements, new features, and addressed security CVEs. The latest release in the 1.x branch is 1.28 (which is the final release of the 1.x branch).
2. Your screenshot shows over 250,000 queued FlowFiles (25.75 GB) and 1,373 running processor components. What do you have set as your Max Timer Driven Thread Count?
3. Are there any other WARN or ERROR messages in your NiFi logs? Any Out of Memory (OOM) errors reported?
4. It is not clear why you are load-balancing so many connections.

Thank you, Matt
01-06-2025
08:15 AM
@Shelton / @MattWho, my NiFi is behind a corporate proxy, and because of that, in production NiFi is not able to reach the Azure OIDC discovery URL. Could you please help me with it? Thanks, spiker