Member since: 07-30-2019
Posts: 3426
Kudos Received: 1631
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 323 | 01-13-2026 11:14 AM |
|  | 636 | 01-09-2026 06:58 AM |
|  | 715 | 12-17-2025 05:55 AM |
|  | 776 | 12-15-2025 01:29 PM |
|  | 659 | 12-15-2025 06:50 AM |
01-21-2026
02:36 PM
@Runa27 Without details of your database table structure/configuration and your test file, it would be challenging to identify your exact issue. Have you tried setting the "Unmatched Column Behavior" property to "Ignore Unmatched Columns" or "Warn on Unmatched Columns" to see if that makes a difference? Can you share how your CSVReader is configured?

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
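To illustrate the kind of mismatch that property governs, here is a hypothetical example (the table and column names below are illustrative, not from the original question): the incoming CSV carries one column the target table does not have.

```
-- Hypothetical target table
CREATE TABLE users (id INT, name VARCHAR(50), email VARCHAR(100));

-- Incoming CSV: the extra "comments" column has no matching table column
id,name,email,comments
1,Alice,alice@example.com,VIP
```

With "Ignore Unmatched Columns" the extra `comments` field would simply be dropped on insert; with the stricter default behavior the FlowFile is routed to failure instead, which may be what you are seeing.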
01-21-2026
02:27 PM
@Runa27 You should be able to take a screenshot and then just right-click and paste it into the response window. It may paste small, so click on it and drag the corner to make it large enough that it can be read. Once you click reply, readers of your post will not have the option to resize it themselves. Thanks, Matt
01-21-2026
02:16 PM
1 Kudo
@Runa27 I don't have a Windows system at the moment, but I downloaded Apache NiFi 2.7.2 and OpenJDK 21 on a small CentOS 9 VM, unzipped it, and started it, so it is using the default single-user authentication and authorizer. JDK 21 is the required Java version for Apache NiFi 2. Once I accessed the UI, I added a GenerateFlowFile processor and connected it to an UpdateAttribute processor, configuring GenerateFlowFile to create a 10 byte file. I was able to list the success queue and view the content of the FlowFile.

A few questions:
- What Java version are you using?
- What Windows version are you using?
- What browser are you using? Did you try a different browser (I am using Chrome)?

When you try "view content", it should open a new browser tab where it loads the content viewer. I assume this new tab is being opened and that is where you then see the exception? Try opening the developer tools available in your browser, then refresh the tab and you will see the requests and responses made. Somewhere in there you should see your exception being thrown as a response. That info may give more insight into what your issue is loading the content viewer.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
01-20-2026
05:52 AM
@Runa27 Before being able to properly help, we need you to share your exact Apache NiFi version details. This allows us to see if you are experiencing a known issue in your specific version. Also, you'll want to tail the nifi-user.log and nifi-app.log when you make the request to view the content, then share the output from both files covering the time of that request (please include the time when you performed the request via the NiFi UI). Also share a bit more about your NiFi setup: what method of user authentication and user authorization are you using?

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
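A minimal way to pull the relevant lines is sketched below. To keep the commands self-contained it works against a throwaway sample file; on a real install, point `LOG` at `$NIFI_HOME/logs/nifi-app.log` (and repeat for `nifi-user.log`), or just run `tail -f` on both files while you reproduce the request in the UI.

```shell
# Stand-in for $NIFI_HOME/logs/nifi-app.log -- on a real system tail the
# actual file instead:  tail -f "$NIFI_HOME"/logs/nifi-app.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2026-01-20 05:40:01,123 INFO  [NiFi Web Server] handling /nifi-api request
2026-01-20 05:40:02,456 ERROR [NiFi Web Server] failed to render content: java.io.IOException
EOF

# Pull only the lines worth sharing (errors/exceptions around the request time)
grep -iE 'error|exception' "$LOG"
```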
01-20-2026
05:42 AM
@pnac03 Based on your nifi-registry.properties file, there is no user identity manipulation happening. This means that the full DistinguishedName (DN) presented by NiFi in the MutualTLS exchange with NiFi-Registry will be the user identity for the registry client connecting to your NiFi-Registry. That means the full DN needs to be authorized in NiFi-Registry properly, with the following Special Privileges:
- "Can manage buckets" - Read
- "Can proxy user requests" - Read, Write, and Delete

From the keystore you shared from your SSL Context Service, we can see it properly contains only one PrivateKeyEntry, and the DN for that clientAuth privateKey is:

O=3SCDemo, CN=nifi-registry

So the above (case sensitive) MUST exist as a user in your NiFi-Registry and have the above Special Privileges granted to it.

Also, the user identity of the user logged into NiFi (as displayed in the upper right corner - case sensitive) when attempting to start version control on a process group in NiFi will need to exist as a user in your NiFi-Registry and be authorized properly directly on the bucket in which you want to version control the process group (this is different from the Special Privileges section in NiFi-Registry):
- Read Bucket - Allows the user to see version controlled flows in the bucket.
- Write Bucket - Allows the user to commit new version controlled flows to the bucket.
- Delete Bucket - Allows the user to delete a bucket.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
01-13-2026
11:14 AM
1 Kudo
@Green_ Thinking more about the challenges mentioned in my previous response, you could avoid them by creating a parameter-context template on Dev. This would be a parameter context with all the keys but no assigned values. Then, when you import the flow to Prod from Dev, you can uncheck the box for "Keep Existing Parameter Contexts" so that a new uniquely named parameter context is created each time you import the flow. You can then update that newly generated parameter context with a flow-specific name and flow-specific values assigned to those parameters that currently have no values. Back on Dev, if you make a change involving a newly introduced parameter key, simply update the parameter-context template with the new key without an assigned value. Now when you change version in Prod, you'll get the new key, and you just need to assign a Prod-specific flow value to it.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
01-13-2026
10:21 AM
1 Kudo
@Green_ Considering the number of deployments, it might make the most sense for you to do this using multiple rest-api calls:
1. Import your version controlled flow (no parameter-context associated with that version controlled flow).
2. Create a new parameter context with the parameters required for that new flow.
3. Update the imported Process Group with a new name and an updated association with the newly created parameter context.

What you have at that point is a new Process Group with a new unique name and an assigned parameter context, while in NiFi-Registry you still have the dev version controlled PG with no associated parameter-context. This presents some new challenges...

Back on your dev system where your source Process Group was version controlled with no parameter-context: since it is version controlled, if you make a change in Dev (add a new configuration that references a new parameter context key/value), all your other Process Groups version controlled against that same NiFi-Registry flow definition in Prod will reflect a new version available. If you "Change version", the Process Group will get the change in the flow, but will also revert to NO assigned parameter context. So you will need to re-assign the appropriate parameter context to that Process Group and update the parameter context with the newly referenced parameter.

Likewise, if you make a change on Dev that does not involve any newly introduced parameters, you will still have the issue of the parameter context becoming unassociated if you change version. So you will need to re-assign the appropriate parameter context to that Process Group upon any version change.

On the Prod system, where you have one-to-many Process Groups tied back to this single dev version controlled flow: if you were to make a change, it would reflect as a local change that needs to be committed to version control. Since the version controlled flow has no parameter-context assigned, if you were to commit that change from Prod, the version-controlled flow would get updated to reference the parameter-context assigned in Prod. So back on the dev system a local change will show, and changing version to that new version will show the Prod parameter-context. The only way to revert this is by changing version on Dev back to an older version where no parameter-context was associated to the dev process group, then committing the needed change on Dev instead of Prod.

This feels like maybe an area for product improvement. I am thinking along the lines of a checkbox on "start version control" or "commit local changes" that asks whether the parameter-context should be sent in the change request. (Parameter context changes are already not sent if the version-controlled flow already has a parameter-context associated with it.) This would allow you to choose not to include a parameter context with a new version controlled dataflow (default checked), or not to include a new parameter context when committing local changes (default unchecked).

So you would need to be careful that only dataflow configuration changes are made on Dev to this reusable version controlled flow definition. If you need to make a deployment-specific change on Prod, you would need to stop version control first, make the change, and commit that as a new unique version controlled process group.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
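The three rest-api steps above can be outlined roughly as follows. This is a sketch only: the JSON bodies are abbreviated, all IDs and revision objects are placeholders, and the exact request shapes should be confirmed against the NiFi REST API documentation for your version.

```
# 1. Import the version-controlled flow into a parent group (placeholder IDs)
POST /nifi-api/process-groups/{parent-id}/process-groups
     body: revision + versionControlInformation (registry, bucket, flow, version)

# 2. Create a parameter context holding the flow-specific parameters
POST /nifi-api/parameter-contexts
     body: revision + component { name, parameters [...] }

# 3. Rename the imported group and bind it to the new parameter context
PUT  /nifi-api/process-groups/{new-pg-id}
     body: revision + component { id, name, parameterContext { id } }
```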
01-12-2026
08:00 AM
1 Kudo
@Green_ The parameter context assigned to a PG does not track as a version control change, and the Process Group name also does not track as a versioned change. This is by design, so that you can reuse the same version controlled Process Group over and over and assign a unique parameter context and unique name to each. For example:
1. Create a new Process Group named "master" and add a new parameter context to it.
2. Build a simple dataflow and convert some properties to parameters.
3. Version control the Process Group.
4. Drag a new Process Group icon to the canvas and select import from NiFi Registry.
5. Select the previously versioned Process Group.
6. Edit the Process Group name to "Clone-parameter-2", change the parameter context assigned to it, and hit apply.

You will notice the newly imported and modified Process Group shows no local changes. Now go back to "master" and add a new component inside that Process Group. You will see this change reported as a local change. Commit that PG as a new version of "master". Soon afterwards you will see "Clone-parameter-2" report a new version as available. Change the version of "Clone-parameter-2" to the newer version. You'll notice that the PG name and assigned parameter context do not change.

NOTE: If you make a change in any Process Group tied to this single version controlled flow, it will report a local change that you can commit to NiFi-Registry, resulting in a new version being available to all the others ("master" does not imply any real hierarchy in this example).

NOTE 2: If a change in one Process Group includes a new parameter being added to that Process Group's assigned parameter context, then when other Process Groups are updated to that version, the new parameter will be added to their parameter contexts automatically for you, with the value matching what was set in that version. So the processor will not be invalid, but it might have a value assigned that you want/need to change.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
01-12-2026
05:34 AM
@NSX

Caused by: java.net.UnknownHostException: cloudera.com

The above is telling you that your Apache NiFi 2.6.0/2.7.2 server is unable to resolve "cloudera.com" to an IP address. Your Apache NiFi 1.26 must be successful in hostname resolution and thus working. What if you manually added the following to your hosts file on your 2.7.2 servers?

151.101.127.10 cloudera.com

Have you tried pinging cloudera.com from both your 2.7.2 and 1.26 servers?

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
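A quick way to compare resolution on the two servers is sketched below (run on each). Note that `getent` consults /etc/hosts as well as DNS, so it will also confirm whether a hosts-file workaround is being picked up, which plain `nslookup`/`dig` would not show.

```shell
# Check resolution the same way most Java programs do (hosts file + DNS)
getent hosts cloudera.com || echo "resolution failed on $(hostname)"

# Basic reachability check
ping -c 1 cloudera.com
```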
01-09-2026
09:25 AM
@jame1997 Since you are running a NiFi multi-node cluster, your ListS3 processor should be configured so that it is only scheduled on the Primary node. The outbound connection feeds a FetchS3Object processor; that connection should have the "Load Balance Strategy" configured for Round Robin (this allows the 0 byte listed FlowFiles to be distributed across all your nodes so that each node shares the workload of fetching the content and processing it). ListS3 should only have 1 concurrent task set. FetchS3Object can have multiple concurrent tasks if needed to keep up with the listing.

Also keep in mind that using "Tracking Timestamps" can result in objects being missed and not listed. "Tracking Entities" is a more robust option but requires a map cache to hold those entities' metadata.

With 2,103 running processors and a timer driven thread pool of only 10, you may see delays in processors getting threads to do the work once they are scheduled for execution. What does not make sense here is your statement that all you needed to do to get ListS3 executing successfully was to stop just that processor and start it again. A common issue seen with concurrent tasks is users setting high concurrent tasks on some processors, impacting other processors' ability to get a thread. Otherwise there is not enough info here to speculate on the cause. I looked through the Apache NiFi Jira project for any known bugs that may relate to what you described and found none, unless other details are missing. I can only suggest capturing a series of thread dumps (spaced apart by 5 mins) should the issue occur again, and analyzing those to see what the ListS3 thread might be doing.

Maybe take a look at these ListS3 bugs that impact your version and are fixed in a newer release: NIFI-12732, NIFI-12594.

Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
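For the thread dumps, a simple capture loop might look like the sketch below, assuming you run it from your NiFi install directory and that your version's `nifi.sh` supports the `dump` command (check `./bin/nifi.sh --help` on your version first).

```
# Capture 3 thread dumps, 5 minutes apart, while the issue is occurring
for i in 1 2 3; do
    ./bin/nifi.sh dump "thread-dump-$i.txt"
    sleep 300
done
```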