Member since: 07-30-2019
Posts: 3421
Kudos Received: 1630
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 218 | 01-13-2026 11:14 AM |
| | 406 | 01-09-2026 06:58 AM |
| | 627 | 12-17-2025 05:55 AM |
| | 688 | 12-15-2025 01:29 PM |
| | 597 | 12-15-2025 06:50 AM |
01-13-2026
11:14 AM
1 Kudo
@Green_ Thinking more about the challenges mentioned in my previous response, you could avoid them by creating a parameter-context template on Dev. This would be a parameter context with all the keys but no assigned values.

When you import the flow to Prod from Dev, uncheck the box for "Keep Existing Parameter Contexts" so that a new, uniquely named parameter context is created each time you import the flow. You can then update that newly generated parameter context with a flow-specific name and flow-specific values assigned to those parameters that currently have no values.

Back on Dev, if you make a change involving a newly introduced parameter key, simply update the parameter-context template with the new key and no assigned value. Now when you change version on the Prod side, you'll get the new key, and you just need to assign a prod-specific value to it.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
01-13-2026
10:21 AM
1 Kudo
@Green_ Considering the number of deployments, it might make the most sense for you to do this using multiple REST API calls: first, import your version-controlled flow (no parameter context associated with that version-controlled flow); then create a new parameter context with the parameters required for that new flow; finally, update the imported Process Group with a new name and an association with the newly created parameter context.

What you have at that point is a new Process Group with a unique name and an assigned parameter context, while in NiFi-Registry you still have the dev version-controlled PG with no associated parameter context. This presents some new challenges...

Back on your dev system, where your source Process Group was version controlled with no parameter context: since it is version controlled, if you make a change in DEV (add new configuration that references a new parameter key/value), all the other Process Groups in prod version controlled against that same NiFi-Registry flow definition will show a new version available. If you "Change version", the Process Group will get the change in the flow, but will also revert to NO assigned parameter context. So you will need to re-assign the appropriate parameter context to that Process Group and update the parameter context with the newly referenced parameter.

If you make a change on dev that does not involve any newly introduced parameters, you will still have the issue of the parameter context becoming unassociated when you change version, so you will need to re-assign the appropriate parameter context to that Process Group after any version change.

On the prod system you have a one-to-many relationship: many Process Groups tied back to this single dev version-controlled flow. If you were to make a change there, it would show as a local change that needs to be committed to version control. Since the version-controlled flow has no parameter context assigned, if you were to commit that change from Prod, the version-controlled flow would get updated to reference the parameter context assigned in Prod. Back on the dev system a local change will then show, and changing version to that new version will bring in the prod parameter context. The only way to revert this is to change version on Dev back to an older version where no parameter context was associated with the dev Process Group, and then commit the needed change on DEV instead of Prod.

This feels like maybe an area for product improvement. I am thinking along the lines of a checkbox on "start version control" or "commit local changes" that asks whether the parameter context should be sent in the change request. (Parameter context changes are already not sent if the version-controlled flow already has a parameter context associated with it.) This would allow you to choose not to include a parameter context with a newly version-controlled dataflow (default checked), or not to include a new parameter context when committing local changes (default unchecked).

So you would need to be careful that only dataflow configuration changes are made on dev to this reusable version-controlled flow definition. If you need to make a deployment-specific change on Prod, you would need to stop version control first, make the change, and commit that as a new, unique version-controlled process group.

Please help our community grow.
If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
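To make the multi-call approach above more concrete, here is a rough sketch of the three REST calls using curl. All hostnames, IDs, version numbers, and payload fields are illustrative assumptions, not exact values; check the NiFi REST API documentation for your version, and note that a secured NiFi also needs an authentication token header on each request.

```bash
# 1. Import the version-controlled flow from NiFi-Registry as a new Process Group
#    (parent PG id, registry client / bucket / flow ids, and position are placeholders)
curl -X POST "https://nifi-prod:8443/nifi-api/process-groups/<parent-pg-id>/process-groups" \
  -H 'Content-Type: application/json' \
  -d '{
        "revision": {"version": 0},
        "component": {
          "position": {"x": 0, "y": 0},
          "versionControlInformation": {
            "registryId": "<registry-client-id>",
            "bucketId": "<bucket-id>",
            "flowId": "<flow-id>",
            "version": <flow-version>
          }
        }
      }'

# 2. Create a new parameter context holding the prod-specific values for this deployment
curl -X POST "https://nifi-prod:8443/nifi-api/parameter-contexts" \
  -H 'Content-Type: application/json' \
  -d '{
        "revision": {"version": 0},
        "component": {
          "name": "flow-abc-prod",
          "parameters": [
            {"parameter": {"name": "db.url", "value": "jdbc:postgresql://prod-db:5432/app", "sensitive": false}}
          ]
        }
      }'

# 3. Rename the imported PG and bind it to the newly created parameter context.
#    The revision must match the PG's current revision (fetch it with a GET first).
curl -X PUT "https://nifi-prod:8443/nifi-api/process-groups/<new-pg-id>" \
  -H 'Content-Type: application/json' \
  -d '{
        "revision": {"version": <current-revision>},
        "component": {
          "id": "<new-pg-id>",
          "name": "flow-abc-prod",
          "parameterContext": {"id": "<new-parameter-context-id>"}
        }
      }'
```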
01-12-2026
08:00 AM
1 Kudo
@Green_ The parameter context assigned to a PG does not track as a version-control change. The Process Group name also does not track as a versioned change. This is by design, so that you can reuse the same version-controlled Process Group over and over and assign a unique parameter context and a unique name to each. For example:

1. Create a new Process Group named "Master" and add a new parameter context to it.
2. Build a simple dataflow and convert some properties to parameters.
3. Version control the Process Group.
4. Drag a new Process Group icon to the canvas, select import from NiFi-Registry, and select the previously versioned Process Group.
5. Edit the Process Group name to "Clone-parameter-2", change the parameter context assigned to it, and hit apply.

You will notice the newly imported and modified Process Group shows no local changes. Now go back to "Master" and add a new component inside that Process Group. You will see this change reported as a local change. Commit that PG as a new version of "Master". Soon afterwards you will see "Clone-parameter-2" report that a new version is available. Change the version of "Clone-parameter-2" to the newer version. You'll notice that the PG name and the assigned parameter context do not change.

NOTE: If you make a change in any Process Group tied to this single version-controlled flow, it will report a local change that you can commit to NiFi-Registry, resulting in a new version being available to all the others (the name "Master" does not imply any real hierarchy in this example).

NOTE 2: If a change in one Process Group includes a new parameter being added to that Process Group's assigned parameter context, then when other Process Groups are updated to that version, the new parameter will be added to their parameter contexts automatically for you, with the value matching what was set in the committed version. So the processor will not be invalid, but it might have a value assigned that you want/need to change.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
01-12-2026
05:34 AM
@NSX

Caused by: java.net.UnknownHostException: cloudera.com

The above is telling you that your Apache NiFi 2.6.0/2.7.2 server is unable to resolve "cloudera.com" to an IP address. Your Apache NiFi 1.26 server must be succeeding at hostname resolution, which is why it works.

What if you manually added the following to the hosts file on your 2.7.2 servers?

151.101.127.10 cloudera.com

Have you tried pinging cloudera.com from both your 2.7.2 and 1.26 servers?

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
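A quick way to compare name resolution between the working 1.26 host and the failing 2.x hosts is sketched below. The IP address is only the example from this thread; verify the current address yourself before pinning it, and treat the hosts-file entry as a temporary diagnostic.

```bash
# On both the 2.7.2 and 1.26 servers, check whether the OS can resolve the name
getent hosts cloudera.com      # or: nslookup cloudera.com
ping -c 3 cloudera.com

# If resolution fails only on the 2.x hosts, a temporary /etc/hosts pin can confirm
# that DNS is the problem (requires root; remove the entry once DNS is fixed)
echo "151.101.127.10 cloudera.com" | sudo tee -a /etc/hosts
```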
01-09-2026
09:25 AM
@jame1997 Since you are running a NiFi multi-node cluster, your ListS3 processor should be configured so that it is only scheduled on the Primary node. The outbound connection feeds a FetchS3Object processor; that connection should have the "Load Balance Strategy" configured for Round Robin (this allows the 0-byte listed FlowFiles to be distributed across all your nodes so that each node shares the workload of fetching and processing the content). ListS3 should have only 1 concurrent task set. FetchS3Object can have multiple concurrent tasks if needed to keep up with the listing.

Also keep in mind that using "Tracking Timestamps" can result in objects being missed and not listed. "Tracking Entities" is a more robust option, but it requires a map cache to hold the entity metadata.

With 2,103 running processors and a timer driven thread pool of only 10, you may see delays in processors getting threads to do the work once they are scheduled for execution. What does not make sense here is your statement that all you needed to do to get ListS3 executing successfully was to stop just that processor and start it again. A common issue is users setting high concurrent tasks on some processors, impacting other processors' ability to get a thread. Otherwise there is not enough info here to speculate on the cause. I looked through the Apache NiFi Jira project for any known bugs that may relate to what you described and found none, unless other details are missing. I can only suggest capturing a series of thread dumps (spaced apart by 5 mins) should the issue occur again, and analyzing those to see what the ListS3 thread might be doing.

Maybe take a look at these ListS3 bugs that impact your version and are fixed in a newer release: NIFI-12732 NIFI-12594

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
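If the stall happens again, one way to capture the spaced-out thread dumps mentioned above is the dump command that ships with NiFi. The install path and output location below are assumptions; run this as the NiFi service user.

```bash
# Capture a thread dump every 5 minutes, 5 times, into separate timestamped files
cd /opt/nifi   # assumed NiFi install directory
for i in 1 2 3 4 5; do
  ./bin/nifi.sh dump "/tmp/nifi-thread-dump-$(date +%Y%m%d-%H%M%S).txt"
  sleep 300
done

# Then look for the ListS3 thread in each dump to see if it is progressing or stuck
grep -A 40 "ListS3" /tmp/nifi-thread-dump-*.txt | less
```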
01-09-2026
06:58 AM
@PepeVo "When I set the ip address (not localhost) on nifi.web.https.hosts and connect it with error "the proxy server is refusing connections". Do I need to set the nifi.web.proxy.host to ipaddress too?" This because the IP does not exist in a SAN in your certificate. The first step here is create a proper clientAuth certificate that includes the SAN entries and EKUs. Apache NiFI out-of-the-box would have created a proper format keystore certificate. The CN value in the certificate is typically the hostname of the server it is being used on. I've seen multiple different value snippets in what has been shared by you. That hostname you are trying to use in the NiFi URL must exist as a SAN entry in the certificate. (This is not a NiFi specific requirement, this is enforced by the JDK) Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
01-09-2026
05:44 AM
1 Kudo
@Pashazadeh The Apache NiFi 2.0.x releases were technical milestone/preview releases that underwent many changes before the first GA release with NiFi 2.1.x. I would not expect a change in behavior going forward, unless some bug is introduced or the community agrees on a change in functionality/behavior. While I don't have a specific answer to which bug resulted in the difference in behavior you encountered, here are some changes that affected the JsonRecordSetWriter: NIFI-14331 NIFI-13963 / NIFI-13843 NIFI-12670

If you still have your NiFi 2.0.0 running, you could run your flow using a ConvertRecord processor with the same record readers and writers and then compare the output content with what you see in the 2.7.1 output. Maybe that can help figure out what is happening and whether any of those bugs affecting earlier NiFi 2.x versions is related. Thanks, Matt
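If you do run the same data through ConvertRecord on both versions, a simple way to compare the two outputs after downloading the FlowFile content from each is sketched below. The filenames are assumptions, and jq is only needed if you want key ordering normalized before the diff.

```bash
# Normalize key order so the diff only shows real content differences
jq -S . output-2.0.0.json > norm-2.0.0.json
jq -S . output-2.7.1.json > norm-2.7.1.json
diff -u norm-2.0.0.json norm-2.7.1.json
```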
01-08-2026
12:55 PM
@PepeVo Look at the output from the following Java keytool command:

keytool -v -list -keystore <nifi-keystore.p12 or .jks>

You'll want to verify the EKU, KeyUsage, and SubjectAlternativeName (SAN) fields in the output. The EKU must contain clientAuth and serverAuth. The SAN must contain your server hostname and any other hostname your node may also be known as. One of these SAN names is what you must use in the browser URL. Hostname verification in the TLS exchange between your browser and NiFi is done using the certificate SAN, not the certificate DN.

You can add the same IP address (127.0.0.1) to the /etc/hosts file multiple times, but it will resolve to the first entry. If you want to assign additional names to 127.0.0.1, it needs to be done on the same line. SNI, however, is not going to allow you to use 127.0.0.1 in the browser URL.

You should set the "nifi.web.https.host" property in the nifi.properties file to one of the SAN values from your keystore and then use that name in your URL to access the NiFi UI. On NiFi startup, you can also tail the nifi-app.log looking for the line that looks like this:

... [main] org.apache.nifi.web.server.JettyServer Started Server on https://<hostname>:8443/nifi

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
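Putting those checks together, a hedged example of what to run is below. The keystore path, password, and hostnames are placeholders for your own values.

```bash
# Inspect the keystore and pull out the fields that matter for TLS hostname checks
keytool -v -list -keystore /opt/nifi/conf/keystore.p12 -storepass changeit \
  | grep -iA3 -e "ExtendedKeyUsage" -e "SubjectAlternativeName" -e "KeyUsage"

# Multiple names for 127.0.0.1 must live on one line in /etc/hosts, for example:
# 127.0.0.1   localhost nifi01.example.com

# After setting nifi.web.https.host, confirm the address Jetty actually bound to
tail -f logs/nifi-app.log | grep "Started Server on"
```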
01-08-2026
09:09 AM
@PepeVo The invalid SNI is not a NiFi thing; it is related to trying to use the local IP 127.0.0.1 in the URL. You are going to need to use a hostname. I see you set nifi.web.https.host=localhost in nifi.properties. Is "localhost" a SAN entry in the certificate? Can you share the verbose output from the NiFi-generated keystore and the keystore you manually created? The NiFi-generated keystore should have SAN entries for localhost and your server/computer hostname.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
01-08-2026
05:41 AM
@jame1997 There is not enough information yet to say what was experienced here.

When you say you "had to stop and start the process", does this mean you had to stop and start NiFi, or stop and start only the ListS3 processor to get listing working again? When ListS3 was not producing any FlowFiles, was it showing a small number in the upper right corner indicating an active thread? When ListS3 is not working, is the outbound connection from the processor "red", indicating backpressure is being applied and preventing the processor from getting scheduled? What is the exact version of Apache NiFi being used? Is this a single NiFi instance or a multi-node NiFi cluster setup? How many "running" processors are on your canvas? How large is the Max Timer Driven Thread pool (the default is 10, but typically this is set to 2 to 4 times the number of cores on the NiFi host)? Monitoring the CPU load average with your flow running will let you determine whether you can increase this even more.

Perhaps the canvas was thread starved. As more dataflows are built on the canvas, there is more chance the default thread pool may not be large enough to run your flow smoothly. Any long-running threads can prevent other processors that are scheduled from getting a thread for extended periods of time. If you saw a small number displayed on the processor indicating it was scheduled to execute while it was not producing any FlowFiles, you could take a series of thread dumps and inspect them to see if the ListS3 processor thread was making any progress or was just blocked/waiting.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
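For the thread-pool sizing guidance above, a small sketch of what to check on the NiFi host before raising the Max Timer Driven Thread Count; the 2-4x core count figure is the rule of thumb from this post, not a hard limit.

```bash
# Number of cores on the NiFi host; the thread pool is typically sized 2-4x this value
nproc

# Watch the 1/5/15 minute load averages while the flow is running; if they stay
# well below the core count, there is headroom to raise the pool size
uptime
top -b -n 1 | head -n 5
```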