Member since: 07-30-2019
Posts: 3434
Kudos Received: 1632
Solutions: 1012
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 117 | 01-27-2026 12:46 PM |
| | 516 | 01-13-2026 11:14 AM |
| | 1150 | 01-09-2026 06:58 AM |
| | 962 | 12-17-2025 05:55 AM |
| | 469 | 12-17-2025 05:34 AM |
08-07-2022
11:47 PM
@MattWho @ckumar thanks for your inputs! I was able to resolve the issue following the steps you mentioned. Much appreciated!
08-04-2022
01:10 PM
@code Have you considered using GenerateTableFetch, QueryDatabaseTable, or QueryDatabaseTableRecord? GenerateTableFetch generates SQL that you then feed to ExecuteSQL, so you avoid pulling both old and new entries with each execution of your existing flow. Avoiding the ingestion of duplicate entries is better than trying to find duplicate entries across multiple FlowFiles.

You can detect duplicates within a single FlowFile using DeduplicateRecord; however, this requires that all records be merged into a single FlowFile. You can also use DetectDuplicate; however, this requires that each FlowFile contain one entry to compare. Both of these methods add a lot of additional processing to your dataflow, or hold records longer than you want, so they are probably not the best/most ideal solution.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.

Thank you, Matt
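To make the incremental-fetch idea concrete, here is a rough sketch of how QueryDatabaseTable might be configured; the controller service, table, and column names below are hypothetical:

```
QueryDatabaseTable (hypothetical configuration)
  Database Connection Pooling Service : MyDBCPService
  Table Name                          : orders
  Maximum-value Columns               : id
```

The processor records the largest `id` value seen in its processor state, so each scheduled run effectively issues `SELECT * FROM orders WHERE id > <stored maximum>` and only new rows are ingested, with no duplicate detection needed downstream.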
08-04-2022
12:52 PM
@mhsyed The latest Cloudera Runtime version can be found here (latest at the top of the list): https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-runtime-download-information.html

So the latest version is CDH-7.1.7-1.cdh7.1.7.p1000.24102687 (CDP 7.1.7 Service Pack 1).

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.

Thank you, Matt
08-04-2022
05:06 AM
Hello MattWho, Some time has passed since your reply. Would you still recommend mounting the NFS drives on each NiFi node, or have there been any further developments on this topic?
08-03-2022
12:01 PM
1 Kudo
@uzi1990 Can you provide more detail about the type of testing you are referring to? Testing what, specifically?

NiFi is a flow-based programming ETL tool. As a user, you add and configure components (processors, RPGs, remote ports, funnels, etc.) on the NiFi canvas, then interconnect those components via connections containing component relationships. Processor components (currently more than 300 unique processors are available) can be started and stopped one by one or in groups. When a component executes, it generates or passes a FlowFile to a downstream relationship.

Via the NiFi UI, users can list the contents of a downstream connection and view/download the content of each FlowFile for inspection, and also view any metadata/attributes NiFi has set on those FlowFiles. This is how you would validate that a processor's configuration produced the expected output. You can then start the next processor component in your dataflow and repeat the same process.

Assuming you have content repository archiving enabled, you can also execute an entire flow and examine the generated data provenance for any FlowFile(s) that traversed that dataflow. You can see the content and metadata/attributes as they existed at each generated provenance event. In the Data Provenance lineage view, you can right-click on any event dot and view its details.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.

Thank you, Matt
08-03-2022
11:43 AM
@Angelvillar I think you have numerous unrelated questions here.

The "GitFlowPersistenceProvider" allows you to configure a git repo to which your version-controlled process groups can be pushed for persistent storage outside of the NiFi-Registry server's file system. What is most important here is that NiFi-Registry only reads from the git repo on service startup. While running, everything is local to the NiFi-Registry server, so if changes are made manually in the git repo, NiFi-Registry will not see them. Additionally, the metadata about those stored flow versions is kept in the NiFi-Registry metadata database, not in the git repo. Also keep in mind that if you originally created flows using the local file-based flow provider and then switched to the git repo provider, those flows will not be moved to git. Only new flows get created in git, and the old flows are no longer reachable.

1. Which flow persistence provider is configured for use in NiFi-Registry has nothing to do with NiFi being able to connect and import flows. NiFi connects to the NiFi-Registry client URL configured in NiFi and gets a list of bucket flows to which the NiFi user has authorized access. That flow information comes from the NiFi-Registry metadata DB. So when you made a change to the git repo, that would have had no effect until a NiFi-Registry restart, and what is in the new repo would have had no effect on what is in the NiFi-Registry metadata DB. My guess is that NiFi was given a list of version-controlled flows known to NiFi-Registry via the metadata DB, and then when you tried to import one of them, NiFi-Registry could not find the actual flow locally. Review the "Switching from other Flow Persistence Provider" section under the metadata-database section in the NiFi-Registry docs. What changes did you make in the configs when you cloned the git repo to tell NiFi-Registry to start using the new cloned repo over the original repo?

If the configured git repository has existing flows committed to it and you have nothing in the metadata database, NiFi-Registry will generate metadata for the flows imported from the flow persistence provider on NiFi-Registry startup. NiFi or NiFi-Registry being secured has nothing to do with the error you described. If NiFi was able to display a list of flows for selection to import, then connectivity to Registry seems fine. However, keep in mind that if you secure Registry, you must secure NiFi in order to write to any buckets. A secured NiFi can access an unsecured NiFi-Registry, and an unsecured NiFi can access an unsecured NiFi-Registry. It is also possible for an unsecured NiFi to import flows from "public" buckets in a secured NiFi-Registry.

2. It does not matter whether you run your NiFi-Registry on a VM or in Docker, as long as the configured ports are reachable by your NiFi. This is entirely a matter of personal preference.

3. Any version-controlled process group in NiFi has a NiFi background thread that checks with NiFi-Registry to see if a newer version of the PG is available. If NiFi is unable to access the NiFi-Registry buckets, or the persisted flows no longer exist in NiFi-Registry, you can expect to see exceptions about not being able to synchronize the PG with NiFi-Registry. The same would happen if you deleted the configured Registry client in the NiFi configuration and created a new Registry client pointing to the same NiFi-Registry. When a NiFi-Registry client is configured, that client is assigned a UUID. When a process group is version controlled, what is written to the local flow.xml.gz or flow.json.gz file is that UUID along with the version-controlled flow ID and version. If you delete and re-create the NiFi-Registry client, it gets a new unique UUID; your flows will not update to that new UUID, so those version-controlled PGs will no longer be able to synchronize.

It sounds like you have been making a lot of changes, and it is not clear what state everything was in before you started. I'd suggest starting fresh: stop version control on all your currently version-controlled PGs, get your flow persistence provider working, version control your first PG, and restart both NiFi and NiFi-Registry to make sure everything is still functioning as expected. Then make one change at a time and repeat the restart to see what, if anything, breaks.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.

Thank you, Matt
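For reference, the git provider is configured in NiFi-Registry's providers.xml. A minimal sketch might look like the following; the storage directory, remote name, and credentials are placeholders, so check the NiFi-Registry Administration Guide for the full property list:

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <!-- Local clone that NiFi-Registry reads on startup and writes to while running -->
    <property name="Flow Storage Directory">./flow_storage</property>
    <!-- If set, commits are automatically pushed to this remote -->
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">my-git-user</property>
    <property name="Remote Access Password">my-git-token</property>
</flowPersistenceProvider>
```

Note that, as described above, edits made directly in the remote repo are only picked up on a NiFi-Registry restart.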
08-02-2022
06:17 AM
@ZhouJun I'd recommend upgrading your NiFi to the latest release, as you may be hitting these related bugs:

- https://issues.apache.org/jira/browse/NIFI-9835
- https://issues.apache.org/jira/browse/NIFI-9433
- https://issues.apache.org/jira/browse/NIFI-9761

Thank you, Matt
08-01-2022
09:07 PM
1 Kudo
@MattWho - We have separated out the EBS volumes for each repo, with 3 EBS volumes each for the content and provenance repos. Now it looks like the issue is pretty much resolved! Thanks for all your suggestions, they helped a lot. Thanks, Mahendra
08-01-2022
05:49 AM
@AbhishekSingh
1. @araujo's response is 100% correct.
2. Just to add to @araujo's response here: NiFi-Registry has nothing to do with controlling what users can and can't do on the NiFi canvas. If installed, it simply allows users to version control process groups. Even once a NiFi process group has been version controlled, authorized users in NiFi can still make changes to dataflows (even those that are version controlled). Once they make a change to a version-controlled process group, that process group will indicate that a local change has been made, and the authorized user will have the option to commit that local change as a new version of the dataflow.

Controlling what users can do with dataflows is handled via authorization policies, which NiFi handles very granularly. Authenticated users can be restricted to only specific process groups. Your NiFi admin user can set up NiFi authorization for other users per process group by selecting the process group and clicking on the "key" icon in the "Operate" panel on the left side of the NiFi canvas.

If you found any of the responses provided assisted with your query, please take a moment to login and click on "Accept as Solution" below each of those posts.

Thank you, Matt
08-01-2022
05:41 AM
@hegdemahendra
1. Do you see any logging related to the content_repository? Perhaps something related to NiFi not allowing writes to the content repository while waiting on archive clean-up?

2. Is any outbound connection from the HandleHttpRequest processor red at the time of the pause? This indicates backpressure is being applied, which would stop the source processor from being scheduled until the backpressure ends.

3. How large is your Timer Driven thread pool? This is the pool of threads the scheduled components can use. If it is set to 10 and all 10 are currently in use by components, the HandleHttpRequest processor, while scheduled, will wait for a free thread from that pool before it can execute. Adjusting the "Max Timer Driven Thread Count" requires careful consideration of the average CPU load average on every node in your NiFi cluster, since the same value is applied to each node separately. A general starting pool size is 2 to 4 times the number of cores on a single node. From there, monitor the CPU load average across all nodes and use the one with the highest load average to determine whether you can add more threads to the pool. If a single node consistently has a much higher CPU load average, take a closer look at that server. Does it have other services running on it that are not running on other nodes? Does it consistently have disproportionately more FlowFiles than any other node? (That is typically a result of dataflow design not handling FlowFile load-balancing redistribution optimally.)

4. How many concurrent tasks are set on your HandleHttpRequest processor? The concurrent tasks are responsible for obtaining threads (one per concurrent task, if available) to read data from the container queue and create the FlowFiles. Perhaps the requests come in so fast that there are not enough available threads to keep the container queue from filling, thus blocking new requests.
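As a minimal sketch of the sizing guidance in point 3, assuming a hypothetical 8-core node, the starting range for the thread pool is simply:

```shell
# Starting "Max Timer Driven Thread Count" guidance: 2x to 4x the core count.
# Using a hypothetical 8-core node; substitute $(nproc) on a real server.
cores=8
echo "suggested pool size: $((cores * 2)) to $((cores * 4)) threads"
# prints: suggested pool size: 16 to 32 threads
```

Remember that this value is applied per node, so tune it against the busiest node in the cluster, not the average.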
Hope the above helps you get to the root of your issue. If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt