08-03-2022
12:01 PM
1 Kudo
@uzi1990 Can you provide more detail about the type of testing you are referring to? Testing what specifically?

NiFi is a flow-based-programming ETL tool. As a user you add and configure components (processors, RPGs, remote ports, funnels, etc.) on the NiFi canvas, then interconnect those components via connections containing component relationships. Processor components (currently more than 300 unique processors are available) can be started and stopped one by one or in groups. When a component executes, it generates or passes a FlowFile to a downstream relationship. Via the NiFi UI, users can list the contents of a downstream connection and view/download the content of each FlowFile for inspection, as well as view any metadata/attributes NiFi has set on those FlowFiles. This is how you would validate that a processor's configuration produced the expected output. You can then start the next processor component in your dataflow and repeat the same process.

Assuming you have content repository archiving enabled, you can also execute an entire flow and examine the generated data provenance for any FlowFile(s) that traversed that dataflow. You can see the content and metadata/attributes as they existed at each generated provenance event. In the data provenance lineage view, you can right-click on any event dot to view its details.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
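As an aside, the connection listing described above can also be driven through the NiFi REST API rather than the UI. A minimal sketch, assuming an unsecured NiFi on localhost:8080; the connection UUID and request id are placeholders, not values from the original question:

# Start a listing request for the FlowFiles queued on a connection (UUID is a placeholder)
curl -X POST http://localhost:8080/nifi-api/flowfile-queues/<connection-uuid>/listing-requests
# Fetch the listing result using the request id returned by the call above
curl http://localhost:8080/nifi-api/flowfile-queues/<connection-uuid>/listing-requests/<request-id>

The second call returns the same FlowFile summaries (attributes, size, queue position) that the UI's "List queue" dialog shows.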
08-03-2022
11:43 AM
@Angelvillar I think you have numerous unrelated questions here.

The "GitFlowPersistenceProvider" allows you to configure a git repo into which your version-controlled process groups are pushed for persistent storage outside of the NiFi-Registry server's file system. What is most important here is that NiFi-Registry only reads from the git repo on service startup; while running, everything is local to the NiFi-Registry server. So if changes are made manually in the git repo, NiFi-Registry will not see them. Additionally, the metadata about those stored flow versions is kept in the NiFi-Registry metadata database, not in the git repo. Also keep in mind that if you originally created flows using the local file-based flow persistence provider and then switched to the git repo provider, those flows will not get moved to git; only new flows get created in git, and the old flows are no longer reachable.

1. Which flow persistence provider is configured for use in NiFi-Registry has nothing to do with NiFi being able to connect and import flows. NiFi connects to the NiFi-Registry client URL configured in NiFi and gets a list of bucket flows to which the NiFi user has authorized access. That flow information comes from the NiFi-Registry metadata DB. So when you made a change to the git repo, it would have had no effect until a NiFi-Registry restart, and what is in the new repo also would have had no effect on what is in the NiFi-Registry metadata DB. My guess here is that NiFi was given a list of version-controlled flows known to NiFi-Registry via the metadata DB, and then when you tried to import one of them, NiFi-Registry could not find the actual flow locally. Review the "Switching from other Flow Persistence Provider" section under the metadata database section of the NiFi-Registry docs. What changes did you make in the configs when you cloned the git repo to tell NiFi-Registry to start using the new cloned repo over the original repo? If your configured git repository has existing flows committed to it and you have nothing in the metadata database, NiFi-Registry will generate metadata for the flows imported from the flow persistence provider on startup. NiFi or NiFi-Registry being secured has nothing to do with the error you described: if NiFi was able to display a list of flows for selection to import, then connectivity to the Registry seems fine. However, keep in mind that if you secure the Registry, you must secure NiFi in order to write to any buckets. A secured NiFi can access a non-secured NiFi-Registry, and a non-secured NiFi can access a non-secured NiFi-Registry. It is also possible for a non-secured NiFi to import flows from "public" buckets in a secured NiFi-Registry.

2. It does not matter whether you run your NiFi-Registry on a VM or on Docker, as long as the configured ports are reachable by your NiFi. This is all a matter of personal preference.

3. Any version-controlled process group in NiFi has a NiFi background thread that checks with NiFi-Registry to see if a newer version of the PG is available. If NiFi is unable to access the NiFi-Registry buckets, or the persisted flows no longer exist in NiFi-Registry, you can expect to see exceptions about not being able to synchronize the PG with NiFi-Registry. The same would happen if you deleted the configured Registry client in the NiFi configuration and created a new Registry client pointing to the same NiFi-Registry. When a NiFi-Registry client is configured, that client is assigned a UUID. When a process group is version controlled, what is written to the local flow.xml.gz or flow.json.gz file is that UUID along with the version-controlled flow ID and version. If you delete and re-create the NiFi-Registry client, it is assigned a new unique UUID; your flows will not update to that new UUID, so those version-controlled PGs will no longer be able to synchronize.

It sounds like you have been making a lot of changes, and it is not clear what state everything was in before you started. I'd suggest starting fresh: stop version control on all your currently version-controlled PGs, get your flow persistence provider working, version control your first PG, and restart both NiFi and NiFi-Registry to make sure everything is still functioning as expected. Then make one change at a time and repeat the restart to see what, if anything, breaks.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
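While untangling this, it may help to confirm which provider is actually active and then force NiFi-Registry to re-read the git repo (remember, it only reads it at service startup). A minimal sketch; the install path is illustrative, not from the original question:

# Show the active flow persistence provider block in providers.xml (path is a placeholder)
grep -A 6 'FlowPersistenceProvider' /opt/nifi-registry/conf/providers.xml
# Restart so NiFi-Registry re-reads the configured git repo
/opt/nifi-registry/bin/nifi-registry.sh restart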
08-02-2022
07:28 AM
@PradNiFi1236 The remote input port will use the keystore and truststore configured in the nifi.properties file. The S2SBulletinReportingTask will use the keystore and truststore configured in the SSLContextService controller service. It would be difficult for me to help with a potential SSL handshake issue without the verbose output for those 4 files:

<path to>/keytool -v -list -keystore <keystore or truststore filename>

You need to verify that the complete trust chain exists in the truststore used in the nifi.properties file for the clientAuth PrivateKeyEntry from the keystore configured in the SSLContextService. You need to verify that the complete trust chain exists in the truststore used in the SSLContextService for the serverAuth PrivateKeyEntry found in the keystore from the nifi.properties file. You also need to make sure that your keystore does not contain more than one PrivateKeyEntry, and that the PrivateKeyEntry has the correct SAN entry(s).

You should tail the nifi-user.log on the host configured in the S2SBulletinReportingTask and then enable that reporting task. If the mutual TLS handshake was successful, you should see the request being made for the S2S details. This will help you understand the exact client identity string that is being checked for authorization against the /site-to-site NiFi resource identifier policy (pretty name for the policy: "retrieve site-to-site details").

I also don't know the full destination URL you have configured, so I can't verify it is correct. It should just be:

https://<nifihostname>:<nifiport>/

where <nifiport> is the same port you use to access the web UI canvas.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
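One more hedged check you can run from the S2SBulletinReportingTask host: confirm the certificate chain the destination node actually presents, and compare it against the TrustedCertEntries in the SSLContextService truststore. The hostname and port below are placeholders:

# Show the server certificate chain presented by the destination NiFi node
openssl s_client -connect <nifihostname>:<nifiport> -showcerts </dev/null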
08-02-2022
06:17 AM
@ZhouJun I'd recommend upgrading your NiFi to the latest release, as you may be hitting these related bugs:
https://issues.apache.org/jira/browse/NIFI-9835
https://issues.apache.org/jira/browse/NIFI-9433
https://issues.apache.org/jira/browse/NIFI-9761
Thank you, Matt
08-01-2022
06:06 AM
@hegdemahendra This could possibly be an IOPS issue, but it could also be a concurrency issue with threads.

How large is your Timer Driven thread pool? This is the pool of threads from which scheduled components can draw. If it is set to 10 and all 10 are currently in use by components, the HandleHttpRequest processor, while scheduled, will be waiting for a free thread from that pool before it can execute. Adjusting the "Max Timer Driven Thread pool" requires careful consideration of the average CPU load average on every node in your NiFi cluster, since the same value is applied to each node separately. A general starting pool size is 2 to 4 times the number of cores on a single node (for example, 16 to 32 threads on 8-core nodes). From there, you monitor CPU load average across all nodes and use the node with the highest CPU load average to determine whether you can add more threads to the pool. If a single node consistently has a much higher CPU load average, you should take a closer look at that server. Does it have other services running on it that are not running on the other nodes? Does it consistently have disproportionately more FlowFiles than any other node (this is typically a result of dataflow design not handling FlowFile load-balancing redistribution optimally)?

How many concurrent tasks are set on your HandleHttpRequest processor? The concurrent tasks are responsible for obtaining threads (one per concurrent task, if available) to read data from the container queue and create the FlowFiles. Perhaps the requests come in so fast that there are not enough available threads to keep the container queue from filling and thus blocking new requests. Assuming your CPU load average is not too high, increase your Max Timer Driven thread pool and the number of concurrent tasks on your HandleHttpRequest processor to see if that resolves your issue. But keep in mind that even if this gets the processor more threads, if the disk I/O can't keep up then you will still have the same issue.

As far as having all your NiFi repos on the same disk, this is not a recommended practice. A typical setup would have the content_repository on its own disk (the content repo can fill its disk to 100%, which causes no issue other than not being able to write new content until disk usage drops); the provenance_repository on its own disk (the size of this disk depends on the amount of provenance history you want to retain, the size of your dataflows, and your FlowFile volume, but its disk usage is controllable; a separate disk is recommended due to disk I/O); and the database_repository (very small in terms of disk usage) together with the flowfile_repository (relatively small unless you allow a very large number of FlowFiles to queue in your dataflows; the flowfile_repo only holds metadata/attributes about your queued FlowFiles, but can also be I/O intensive) on a third disk.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
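For reference, a minimal sketch of confirming where each repository currently lives, run from the NiFi install directory; the paths shown in the comments are illustrative of a one-disk-per-repo layout, not your actual config:

grep -E 'repository.directory|nifi.database.directory' conf/nifi.properties
# Illustrative output with the repos split across disks:
# nifi.content.repository.directory.default=/disk1/content_repository
# nifi.provenance.repository.directory.default=/disk2/provenance_repository
# nifi.flowfile.repository.directory=/disk3/flowfile_repository
# nifi.database.directory=/disk3/database_repository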
08-01-2022
05:49 AM
@AbhishekSingh 1. @araujo's response is 100% correct.

2. Just to add to @araujo's response here... NiFi-Registry has nothing to do with controlling what a user can and can't do on the NiFi canvas. If installed, it simply allows users to version control process groups. Even once a NiFi process group has been version controlled, authorized users in NiFi can still make changes to dataflows (even those that are version controlled). Once they make a change to a version-controlled process group, that process group will indicate that a local change has been made, and the authorized user will have the option to commit that local change as a new version of the dataflow.

Controlling what users can do with dataflows is handled via authorization policies, which NiFi handles very granularly. Authenticated users can be restricted to only specific process groups. Your NiFi admin user can set up NiFi authorization for other users per process group by selecting the process group and clicking on the "key" icon in the "operate panel" on the left side of the NiFi canvas.

If you found any of the responses provided assisted with your query, please take a moment to login and click on "Accept as Solution" below each of those posts. Thank you, Matt
08-01-2022
05:41 AM
@hegdemahendra 1. Do you see any logging related to the content_repository? Perhaps something related to NiFi not allowing writes to the content repository while waiting on archive clean-up? (A quick way to watch for this is sketched below this post.)

2. Is any outbound connection from the HandleHttpRequest processor red at the time of the pause? That indicates backpressure is being applied, which stops the source processor from being scheduled until the backpressure ends.

3. How large is your Timer Driven thread pool? This is the pool of threads from which scheduled components can draw. If it is set to 10 and all 10 are currently in use by components, the HandleHttpRequest processor, while scheduled, will be waiting for a free thread from that pool before it can execute. Adjusting the "Max Timer Driven Thread pool" requires careful consideration of the average CPU load average on every node in your NiFi cluster, since the same value is applied to each node separately. A general starting pool size is 2 to 4 times the number of cores on a single node. From there, you monitor CPU load average across all nodes and use the node with the highest CPU load average to determine whether you can add more threads to the pool. If a single node consistently has a much higher CPU load average, you should take a closer look at that server. Does it have other services running on it that are not running on the other nodes? Does it consistently have disproportionately more FlowFiles than any other node (this is typically a result of dataflow design not handling FlowFile load-balancing redistribution optimally)?

4. How many concurrent tasks are set on your HandleHttpRequest processor? The concurrent tasks are responsible for obtaining threads (one per concurrent task, if available) to read data from the container queue and create the FlowFiles. Perhaps the requests come in so fast that there are not enough available threads to keep the container queue from filling and thus blocking new requests.

Hope the above helps you get to the root of your issue. If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
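Regarding item 1, a minimal sketch of watching for those messages while you reproduce the pause; the log path assumes a default install layout:

# Watch the app log for content repository / archive cleanup messages
tail -f logs/nifi-app.log | grep -iE 'content repository|archive'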
07-29-2022
02:53 PM
1 Kudo
@hegdemahendra How many FlowFiles are queued on the outbound connection(s) from your HandleHttpRequest processor? Is backpressure being applied on the HandleHttpRequest processor? What version of NiFi are you using? Is there any logging in the app.log about not being allowed to write to the content repository while waiting on archive cleanup?

If NiFi is blocking on creating new content claims in the content_repository, the HandleHttpRequest processor will not be able to take data from the container and generate the outbound FlowFile. This would explain why cleaning up those repos reduced the disk usage below the blocking threshold. There are some known issues around NiFi blocking even when archive sub-directories in the content_repository are empty, which were addressed in the latest Apache NiFi 1.16 release and Cloudera's CFM 2.1.4.1000 release.

You may also want to look at your content repository archive settings (nifi.content.repository.archive.max.retention.period and nifi.content.repository.archive.max.usage.percentage) and compare those against the disk usage where your content_repo is located:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#content-repository

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
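A minimal sketch of that comparison, run from the NiFi install directory; the content_repository mount path is a placeholder for wherever yours actually lives:

# Show the configured archive retention and usage thresholds
grep 'nifi.content.repository.archive' conf/nifi.properties
# Compare against actual disk usage on the content repo's volume (path is a placeholder)
df -h /path/to/content_repository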
07-29-2022
02:30 PM
@PradNiFi1236 There are numerous steps in this process, so let's start with some basics on Site-To-Site (S2S).

The S2SBulletinReportingTask works much like the NiFi Remote Process Group (RPG). When it is configured with a destination URL and enabled, a background thread runs independently on an interval to fetch S2S details from the destination URL. If the destination URL is a node in a NiFi cluster, the returned S2S details will include the hostnames of all the nodes in the cluster, whether the cluster nodes are configured to support RAW and/or HTTP transport protocols, the configured RAW port for each node, node load average, etc. Configuring just one destination URL from the target cluster does not change this behavior; configuring a comma-separated list of nodes from the same destination cluster affords you HA: if S2S details can't be retrieved from node URL 1, it then tries the second URL, and so forth.

Also keep in mind that it does not matter which node URL of a NiFi cluster you are accessing; any component (processor, reporting task, controller service, etc.) added to the canvas is replicated to all nodes in the NiFi cluster. So when you enable this S2SBulletinReportingTask, all nodes are going to try to fetch S2S details. Each node in a NiFi cluster has all the same components and executes all the same components (with the exception of processors that can be scheduled to execute on the primary node only). This means that all nodes will be trying to send generated bulletins to your cluster nodes.

From what you shared, it looks like the background thread that fetches those S2S details is failing due to a timeout. This could be for any number of reasons:
- The keystore configured in the SSLContextService does not contain a single PrivateKeyEntry that can be trusted by the truststore configured in the nifi.properties file on all 3 of your destination nodes.
- The PrivateKeyEntry presented by the 3 NiFi nodes to the controller service is not trusted by what exists in the truststore configured in your SSLContextService.
- The keystore used in the SSLContextService does not have a clientAuth PrivateKeyEntry in it.
- nifi.remote.input.secure is not set to true.
- nifi.remote.input.http.enabled is not set to true.

There are several authorization policies in play here as well, but I don't think you have even gotten that far yet:
- "Retrieve site-to-site details" <-- The PrivateKeyEntry from the keystore configured in the SSLContextService will need to be authorized to retrieve the S2S details. The keystore used in each of your 3 NiFi nodes' SSLContextService may have a unique DN for its PrivateKeyEntry, so all three of those unique identities would need to be authorized.
- "Receive data via site-to-site" <-- The same identities also need to be authorized on the remote input port; this is what allows the S2SBulletinReportingTask to see your bulletin monitoring remote input port as an option to send bulletins to.

But if you were getting past authentication and failing on authorization, your exception would be different: instead of timeouts, you would be seeing "not authorized" exceptions.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
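As a quick sanity check on the two nifi.remote.input properties mentioned above, run this on each destination node from its NiFi install directory; the expected values shown in the comments assume you intend to use secure HTTP site-to-site:

grep 'nifi.remote.input' conf/nifi.properties
# For secure HTTP S2S you would expect to see, among others:
# nifi.remote.input.secure=true
# nifi.remote.input.http.enabled=true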
07-18-2022
12:45 PM
@Alevc

Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

The above exception you are encountering with TLS is caused by the lack of a complete trust chain in the mutual TLS handshake.

On each side (server and client) of your TLS connection, you will have a keystore containing a PrivateKeyEntry (supporting an extended key usage (EKU) of clientAuth, serverAuth, or both) that your client or server will use to identify itself. That PrivateKeyEntry will have an owner DN and an issuer DN associated with it; the issuer is the signer of the owner. Each side will also have a truststore (just another keystore by a different name, containing a collection of TrustedCertEntry(s)) that needs to contain the TrustedCertEntry for the issuer/signer of your PrivateKeyEntry. It is also very common that the issuer/signer TrustedCertEntry has an owner DN and an issuer DN that do not match. This means the issuer was just an intermediate certificate authority (CA) that was itself issued/signed by another CA, so the truststore also needs to contain the TrustedCertEntry for that next-level issuer CA. This continues until you reach the root CA TrustedCertEntry, where the owner and issuer have the same DN; this is known as the root CA for your PrivateKeyEntry. Having all the intermediate CA(s) and the root CA means you have the complete trust chain in your truststore.

This process applies in both directions in the mutual TLS handshake: the clientAuth certificate presented by your Kafka consumer must have its complete trust chain in the Kafka server's truststore, and the serverAuth certificate presented by your server must have its complete trust chain in the truststore used by your Kafka consumer client. Note: I am oversimplifying this mutual TLS handshake (private keys themselves are never shared, and there is more in the server and client hello exchanges), but the intent is to focus at a high level on what specifically caused your issue.

So to get past your issue, you need to make sure the truststores used by your client and server sides contain the TrustedCertEntries for the complete CA trust chain.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
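If inspection shows an issuer is missing, a minimal sketch of checking a truststore and importing the missing CA certificate; the file names and alias are placeholders:

# List every TrustedCertEntry so you can trace the chain by owner/issuer DN
keytool -v -list -keystore truststore.jks
# Import a missing intermediate (or root) CA certificate into the truststore
keytool -importcert -alias intermediate-ca -file intermediate-ca.pem -keystore truststore.jks

Repeat the import for each CA in the chain until the chain runs unbroken from your PrivateKeyEntry's issuer up to a root CA whose owner and issuer DNs match.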