Member since: 07-30-2019
Posts: 3423
Kudos Received: 1630
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 296 | 01-13-2026 11:14 AM |
| | 577 | 01-09-2026 06:58 AM |
| | 697 | 12-17-2025 05:55 AM |
| | 758 | 12-15-2025 01:29 PM |
| | 645 | 12-15-2025 06:50 AM |
01-08-2026
09:09 AM
@PepeVo The invalid SNI is not a NiFi issue. It comes from using the 127.0.0.1 local IP in the URL; you are going to need to use a hostname instead. I see you have set https.host=localhost in nifi.properties. Is "localhost" a SAN entry in the certificate? Can you share the verbose output from both the NiFi-generated keystore and the keystore you created manually? The NiFi-generated keystore should have SAN entries for localhost and your server/computer hostname.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
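For reference, a minimal sketch of pulling that verbose keystore listing with keytool; the keystore path, store type, and password below are placeholders, not values taken from your setup:

```bash
# List the keystore verbosely; the SubjectAlternativeName extension of the
# PrivateKeyEntry should include DNS:localhost plus your server hostname.
keytool -list -v -keystore ./conf/keystore.p12 -storetype PKCS12 -storepass changeit

# Narrow the output to just the SAN block:
keytool -list -v -keystore ./conf/keystore.p12 -storetype PKCS12 -storepass changeit \
  | grep -A 3 'SubjectAlternativeName'
```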
01-08-2026
05:41 AM
@jame1997 There is not enough information yet to say what was experienced here. When you say "had to stop and start the process", do you mean you had to stop and start NiFi, or only the ListS3 processor, to get listing working again? A few questions:

- When ListS3 was not producing any FlowFiles, was it showing a small number in the upper right corner indicating an active thread?
- When ListS3 is not working, is the outbound connection from the processor "red", indicating backpressure is being applied and preventing the processor from being scheduled?
- What is the exact version of Apache NiFi being used?
- Single NiFi instance or a multi-node NiFi cluster setup?
- How many "running" processors are on your canvas?
- How large is the NiFi Max Timer Driven Thread pool (default is 10, but typically this is set to 2 to 4 times the number of cores on the NiFi host)? Monitoring CPU load average with your flow running will tell you whether you can increase it even more.

Perhaps the canvas was thread starved. As more dataflows are built on the canvas, there is more chance the default thread pool is no longer large enough to run your flow smoothly. Any long-running threads can prevent other scheduled processors from getting a thread for extended periods of time. If you saw a small number displayed on the processor indicating it was scheduled to execute while it was not producing any FlowFiles, you could take a series of thread dumps and inspect them to see whether the ListS3 processor thread was making progress or just blocked/waiting; see the sketch below.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
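A hedged sketch of taking that series of thread dumps, assuming a NiFi 1.x style install where bin/nifi.sh is available; the file names and the 30-second interval are illustrative:

```bash
# From the NiFi installation directory, write three thread dumps 30 seconds apart.
for i in 1 2 3; do
  ./bin/nifi.sh dump "thread-dump-$i.txt"
  sleep 30
done

# Compare the ListS3 thread across dumps: a thread whose stack never changes,
# or one stuck in WAITING/BLOCKED, points at where it is hung.
grep -A 20 'ListS3' thread-dump-*.txt
```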
01-07-2026
06:43 AM
1 Kudo
@pnac03
1) The keystore configured in NiFi (nifi.properties and any SSL Context Service controller services) and in NiFi-Registry (nifi-registry.properties) must contain only one PrivateKeyEntry, since there is no way to control which one is used when multiple exist. The verbose output you shared for your keystore shows it contains only one PrivateKeyEntry.
2) TLS will negotiate the highest mutually supported version between client and server in the mTLS exchange (see the sketch below for one way to check what actually gets negotiated).
3) You did not share the verbose output for the keystore used in the SSL Context Service your NiFi Flow Registry client is configured to use. I would also need to see the nifi-registry.properties file to inspect the identity mapping properties and see how the DNs might be manipulated.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
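A minimal way to observe the negotiated TLS version with openssl s_client; the hostname, port (18443 is only the typical NiFi-Registry HTTPS port), and certificate paths are placeholders:

```bash
# Report the protocol version and cipher the server negotiates.
# Point -cert/-key at your client certificate to exercise the mTLS path.
openssl s_client -connect nifi-registry-host:18443 \
  -cert client.crt -key client.key </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'
```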
01-06-2026
09:47 AM
@PepeVo Without the nifi-app.log output it would be impossible for us to guess where it is failing in the startup process. The NiFi bootstrap process simply starts the main NiFi process and monitors that child process id to make sure it still exists (that NiFi has not died). Apache NiFi should start securely out of the box without any configuration modifications to the nifi.properties file. All you need to do first is make sure Java 21 is the default Java version installed on the NiFi server host (a quick check is sketched below). So my guess is that the issue is with some configuration you have modified from the default in your nifi.properties file, or perhaps with a manual change you made in some other NiFi configuration file within the conf directory.

Thank you, Matt
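A quick, hedged check of the Java runtime NiFi will pick up; the JDK path below is purely a placeholder:

```bash
# Confirm the default Java on the NiFi host is a 21.x runtime.
java -version

# The start scripts honor JAVA_HOME, so check where it points.
echo "$JAVA_HOME"

# If a different version is the default, point JAVA_HOME at a Java 21 install first, e.g.:
export JAVA_HOME=/usr/lib/jvm/jdk-21
./bin/nifi.sh start
```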
01-06-2026
09:19 AM
1 Kudo
@Green_ When you deploy a dataflow (that has a parameter context assigned to it) from NiFi A via NiFi-Registry to another NiFi B, the parameter context will be added to NiFi B only if a parameter context with the same exact name does NOT already exist on NiFi B. If a Parameter Context with the same name already exists, that local parameter context will be used. Additionally, if the same-named parameter context in the flow from NiFi A has a parameter name not present in the pre-existing parameter context on NiFi B, that additional name/value pair will be added to the existing context on NiFi B. So NiFi / NiFi-Registry was designed with the intent of handling different parameter values per NiFi deployment.

Now, the first time you deploy a flow from NiFi A to NiFi B, you end up with the parameter context from NiFi A being added to NiFi B. You'll need to update values as needed on NiFi B before starting the dataflow(s) in that process group. But new versions after that will not be an issue (unless additional new parameter name/value pairs are added; those would need to be updated, or you could add the new parameters manually on NiFi B before updating the version). I think this approach is better since you'll have all the parameter name/value pairs when you import the new dataflow from NiFi-Registry; you'll just need to update some values before starting the new dataflow.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-06-2026
05:55 AM
1 Kudo
@pnac03 Some clarification: The NiFi Registry Client (NifiRegistryFlowRegistryClient) will use the keystore and truststore configured in the defined SSL Context Service, if one is configured, to authenticate with the target NiFi-Registry URL. This client auth certificate will proxy the request on behalf of the user identity displayed in the upper right corner of the NiFi UI where the NiFi Registry client is being used. If an SSL Context Service is not defined in the Registry client, the Registry client will use the keystore and truststore configured in the NiFi node's nifi.properties file. Now, it is common in a NiFi cluster setup for every node to have its own unique keystore. As such, you need to make sure that all the client auth certificates are properly authorized to proxy user requests in the target NiFi-Registry (this applies no matter which NiFi node you are logged into when making the call to NiFi-Registry, since the request gets replicated to all nodes). That brings into question your statement below:

> I have verified the same even from Registry using tcpdump in my setup and I do see that the incoming CN name from nifi is CN=node-0-nifikop instead of what is referenced in the SSL Context.

Can you share the verbose output for your keystore's PrivateKeyEntry? Does it contain only one PrivateKeyEntry or multiple? It must contain only one, since the NiFi Registry client does not provide a configuration option to select a specific certificate by alias name (a keytool check is sketched below).

-----

Public bucket clarification: A public bucket allows any user to import a flow from that bucket to the canvas of a NiFi. It does not allow any user to write (start version control on) a new dataflow, or commit a new version of an existing version-controlled dataflow, to the public bucket. Writing a new flow to a bucket requires proper write permission on the bucket regardless of whether the bucket is public or not.

-----

User identities: The user identities coming from the SSL Context Services and proxied requests are case sensitive; "User 2" and "user 2" would be treated as two different users in both NiFi and NiFi-Registry. The user identities are also evaluated against any identity mappings configured in the nifi-registry.properties file, so you'll want to take a look at those to make sure they are not manipulating the user identity string or client auth certificate DN.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
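A hedged way to confirm how many PrivateKeyEntry entries the keystore holds; the keystore path, store type, and password are placeholders:

```bash
# Print each alias with its entry type; exactly one line should read "PrivateKeyEntry".
keytool -list -v -keystore ./conf/keystore.p12 -storetype PKCS12 -storepass changeit \
  | grep -E 'Alias name|Entry type'
```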
01-06-2026
05:25 AM
@MuruganFinastra Since you are getting a 403 response, the first thing to do is see which user identity the 403 is being returned for. For that, tail the nifi-user.log while you attempt the REST API call (see the sketch below). You will see the denial-related log lines in nifi-user.log; they show the user identity string and the NiFi authorization policy the request required but that user identity did not have permission for. Using this output, we can determine the next steps. Is the expected user identity being logged? Which authorization policy is resulting in the 403 response? Also, which user authentication and authorization configuration options are you using in your setup?

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
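A minimal sketch of watching the log while reproducing the call; the host, port, endpoint, and token variable below are placeholders for whatever request you are actually making:

```bash
# Terminal 1: from the NiFi installation directory, watch authorization decisions as they happen.
tail -f logs/nifi-user.log

# Terminal 2: repeat the failing request, e.g. a placeholder GET against the REST API.
curl -k -H "Authorization: Bearer $TOKEN" \
  https://nifi-host:8443/nifi-api/flow/status
```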
12-17-2025
05:55 AM
@Bern You just need to follow the same steps you used in your original post/question:

1. Drag the Add Process Group icon onto the canvas.
2. In the pop-up window, select the "browse" icon at the far right instead of entering a "Name".
3. Navigate to and select the flow definition you downloaded from Apache NiFi 1.13.
4. After you have selected your flow definition json file, you have the option to change the name displayed from the json, then click "Add".

I would strongly encourage you to be using Apache NiFi 1.28. There have been many changes and fixes, and you'll want to make sure your Apache NiFi 1.x dataflows are valid on the latest 1.28 before attempting to move them to Apache NiFi 2.x. Apache NiFi 1.28 also has deprecation logging that will help make users aware if they are using components that no longer exist in Apache NiFi 2.x; you'll need to modify your flows so they no longer use those components before moving your flow definitions over to NiFi 2.x. Also be aware that NiFi 2.x no longer supports NiFi Variables. These were deprecated and removed; the replacement is NiFi Parameters. So if you are using Variables in your NiFi 1.x dataflows, you'll need to modify them to use Parameters before moving your flow definitions over to NiFi 2. Remember that Apache NiFi 2.x is a major release change, and the expectation is that you are on the latest NiFi 1.28 release before attempting to move to NiFi 2.

NOTE: Cloudera Flow Management (CFM) licensed users have access to Cloudera-specific automation tools that can automatically transform templates into valid flow definitions and automate migration of CFM 2.1.7 SP2 (Apache NiFi 1.x based) flow.json.gz files into a CFM 4.x (Apache NiFi 2.x based) compatible version. This automation handles deprecated components, converts NiFi Variables (deprecated) into NiFi Parameters (replacement), etc. https://docs.cloudera.com/cfm/4.11.0/cfm-migration-tool/topics/cfm-mt-overview.html#concept_wlv_sl3_...

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
12-17-2025
05:34 AM
@fy-test Apache NiFi can only address CVEs found in the files that the NiFi-Registry distribution ships in its lib directory. Any OS/system-level CVEs would need to be addressed by the owner of the platform on which the NiFi-Registry service is running. You can find the Apache NiFi security reporting page here: https://nifi.apache.org/documentation/security/ On that page you'll find the CVEs already addressed in NiFi and NiFi-Registry, and also how to report any new security vulnerabilities you may discover.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
12-15-2025
01:44 PM
@Bern Is your external ZooKeeper installed on the same host as your NiFi? If so, the load your NiFi puts on that node may be affecting the performance of your ZooKeeper. The last Apache NiFi 1.x major release is NiFi 1.28, and I recommend upgrading to it. You'll potentially need to make significant changes and updates to your Apache NiFi 1.x dataflows before they can be used in Apache NiFi 2.x. Apache NiFi 1.28 is also new enough that it produces the flow.json.gz format used by Apache NiFi 2.x.

Please help our community grow. If any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to log in and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt