Member since: 07-30-2019
Posts: 3406
Kudos Received: 1622
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 118 | 12-17-2025 05:55 AM |
| | 179 | 12-15-2025 01:29 PM |
| | 120 | 12-15-2025 06:50 AM |
| | 245 | 12-05-2025 08:25 AM |
| | 407 | 12-03-2025 10:21 AM |
11-26-2018
02:57 PM
@Mauro Beltrame Keep in mind that every node in your NiFi cluster runs its own copy of the flow.xml, has its own set of repositories, and works on its own set of FlowFiles. When a "primary node" change occurs, FlowFiles being processed on the old primary node are NOT moved over to the new primary node. It remains the responsibility of the old primary node to finish processing the FlowFiles on that node. The "Primary node" execution setting on a processor simply controls whether the processor is scheduled on all nodes at the same time or only on whichever node is currently elected primary node. This is important for processors that use non-cluster-friendly protocols (e.g., ListSFTP, ListFile, etc.).

Keep in mind that only processors responsible for ingesting data (those that create the FlowFile in NiFi) should be configured for "Primary node" only execution. All processors within the body of a dataflow (any processor that accepts an inbound connection) should be configured to run on all nodes in your cluster.

If I had to guess, one or more of the following is occurring for you:
1. Processors within the body of your dataflows are configured with "Primary node". This means that any FlowFiles ingested on the old primary node will end up queued in front of one of these primary-node-only processors, which is no longer being scheduled on that old primary node.
2. Your primary-node-only processors have some configuration that depends on a local file that is not present on every node in the same directory with the same permissions (for example, using a private key in ListSFTP when that private key was not placed on all nodes).

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
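To see why list-type source processors are not cluster friendly, here is a minimal shell simulation (the "node" names and directory are hypothetical stand-ins, not NiFi itself): if ListFile were scheduled on all three nodes against the same shared directory, every node would list every file, duplicating ingestion.

```shell
#!/bin/sh
# Hypothetical simulation: three cluster "nodes" each running a
# listing against the same shared directory.
DIR=$(mktemp -d)
touch "$DIR/a.txt" "$DIR/b.txt"

# All three nodes scheduled to run the listing:
for node in node1 node2 node3; do
  ls "$DIR"
done | sort | uniq -c
# Each file is listed once per node, i.e. ingested three times.
# Scheduling the listing on "Primary node" only yields each file once.
rm -rf "$DIR"
```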
11-19-2018
05:12 PM
1 Kudo
@Bharat Yadav ONLY input and output ports created at the root canvas level are considered remote. You will notice the port box renders a little differently when added at the root level (a small Wi-Fi-like symbol). These "remote" versions of the input/output ports give you the option to authorize which clients may use the port via the "receive data via site-to-site" policy.
11-19-2018
03:13 PM
@Max Musti A couple of things about your certificates:

The certificates must include one or more SubjectAlternativeName (SAN) entries for security reasons. Since you are using a wildcard in the DN for the certificate owner, you should have a unique SAN entry for each server hostname that uses this certificate. You also must make sure that the certificates support being used for both "clientAuth" and "serverAuth". You can often see this called out in the verbose key output from keytool, in the ExtendedKeyUsage section. *** NiFi can act as both a client (such as when using Remote Process Groups or talking to NiFi-Registry) and a server.

Alternatively, and recommended, you could create a separate certificate for each of your servers (these will still require a SAN entry).

When it comes to NiFi talking to the registry, the following must be successful:
1. A successful 2-way TLS handshake between NiFi and NiFi-Registry. I think this may be where you are having an issue, specifically with your NiFi server(s) presenting a client certificate to NiFi-Registry. (This is where the "clientAuth" ExtendedKeyUsage comes into the picture.)
2. The client server(s) must all be authorized for both "Read" on "Can manage buckets" and "Can proxy user requests".

Hope this helps you get your issues resolved.

Thank you, Matt
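As a sketch of what such a certificate looks like, the commands below use openssl to generate a self-signed certificate with a SAN entry and both EKU values, then print the relevant extensions (the hostname and file names are placeholders; with keytool you would use its `-ext` options and inspect the result with `keytool -list -v`):

```shell
#!/bin/sh
# Illustrative only; requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout node1-key.pem -out node1-cert.pem \
  -subj "/CN=nifi-node1.example.com/OU=NIFI" \
  -addext "subjectAltName=DNS:nifi-node1.example.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Confirm the SAN and ExtendedKeyUsage sections are present:
openssl x509 -in node1-cert.pem -noout -text \
  | grep -A1 -E "Subject Alternative Name|Extended Key Usage"
```

The final grep should show your DNS SAN entry plus both "TLS Web Server Authentication" and "TLS Web Client Authentication".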
11-16-2018
01:06 PM
Article content updated to reflect new provenance implementation recommendation and change in JVM Garbage Collector recommendation.
11-15-2018
01:54 PM
Is the destination NiFi of your ReportingTask a NiFi cluster or a single NiFi instance? On the NiFi instance the RPG is pointing at, what is configured in the following property: nifi.remote.input.host
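For reference, these are the site-to-site input properties in nifi.properties that the RPG target resolves against (the values shown are placeholders, not a recommendation):

```
# nifi.properties on the target NiFi instance (illustrative values):
nifi.remote.input.host=nifi-node1.example.com
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
```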
11-14-2018
02:46 PM
I think you may have misunderstood me. Below an existing "Answer" in this HCC thread you will see an "Add comment" link you can click to respond to that existing answer, just as I have done here. I noticed you started an entirely new question in HCC with your response above about the truststore. Thank you, Matt
11-13-2018
06:50 PM
@Félicien Catherin Your observations are correct. The prioritizer only works against the FlowFiles currently in the "active" queue. Because this active queue resides in JVM heap, reordering based on priority comes at very little expense to performance. Re-evaluating all swapped FlowFiles each time a new FlowFile enters a connection would hurt throughput performance.

The first question I would ask is why the queue is so large.

Increasing the configured swap threshold in the nifi.properties file will allow more FlowFiles to be held in the active queue, but that comes at the expense of higher heap usage by your NiFi.

Keep in mind that the best throughput will always be obtained when no prioritizers are defined on a connection.
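The swap threshold mentioned above lives in nifi.properties and applies per connection; the value below is the shipped default, shown only as an illustration of where to change it:

```
# nifi.properties -- FlowFiles beyond this count in a single connection
# are swapped out of JVM heap to disk (20000 is the default):
nifi.queue.swap.threshold=20000
```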
11-13-2018
03:53 PM
@sally sally The InvokeHTTP processor can be configured to use an SSLContextService controller service. It is in this controller service that you define the location of the keystore (if needed by the server endpoint) and the truststore. You need to make sure that these keystore and truststore files are owned by the same user that owns the NiFi Java process.

Do you know whether you need client authentication for this endpoint? Do you know what form of user authentication is required (meaning, does the endpoint support TLS user authentication via a user certificate)?

When you say "when I run this code in cmd", are you talking about the command provided via the Oracle link you shared? I don't know what exact command you are trying to run, but Windows is reporting a FileNotFoundException for some file referenced in your command.

Thank you, Matt
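A quick way to sanity-check the ownership requirement is to compare the keystore/truststore file owner against the user running the NiFi process. The sketch below creates a temporary stand-in file for illustration; in practice you would run `ls -l` on your real keystore.jks and truststore.jks paths.

```shell
#!/bin/sh
# Sketch: the keystore/truststore must be readable by the NiFi
# service user. The mktemp file is a stand-in for your real
# keystore.jks / truststore.jks.
KS=$(mktemp --suffix=.jks)
chmod 640 "$KS"   # readable by owner and group only
ls -l "$KS"       # compare the owner column against the user shown
                  # by: ps -o user= -p <nifi java pid>
rm -f "$KS"
```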
11-13-2018
03:46 PM
@sally sally *** Community Forum Tip: Try to avoid starting a new answer in response to an existing answer. Instead, use comments to respond to existing answers. There is no guaranteed order to different answers, which can make it hard to follow a discussion.
11-13-2018
03:34 PM
@Poonam Mishra When using Site-To-Site (S2S) between two secured NiFi instances, a 2-way TLS handshake occurs. Based on the specific error you are getting, this handshake appears to have been successful. This means your issue falls purely on the authorization side.

The NiFi instance running the SiteToSiteProvenanceReportingTask is acting as the client in this connection, and your other NiFi instance is acting as the server side of the connection. So it is the server-side NiFi that checks which authorizations have been granted to the client NiFi instance(s). (If the source is a NiFi cluster, every node will need to be authorized.)

The full DN from the PrivateKeyEntry found in the keystore on the client NiFi will be used. If your target/server-side NiFi has any identity mapping patterns defined in its nifi.properties file, that full DN will be evaluated against them to see if any match. If a match is found, the resulting mapped value is passed to the authorizer; otherwise, the full DN is passed to the authorizer.

You have not shared which authorizer you are using, but user entries must exist for every client NiFi that is connecting. For S2S, these client identities passed to the authorizer (case sensitive) must be granted the following policies:
1. "retrieve site-to-site details" <-- This policy allows the client NiFi (the one with the reporting task) to retrieve details about the target, which include supported transport protocols, number of nodes, RAW port values, node hostnames, node load, etc.
2. "receive data via site-to-site" <-- This policy allows the authorized client to use the remote input port for sending FlowFiles. This would be the authorization set on your "ProvenanceMonitoring" remote input port.

Check both your nifi-app.log and nifi-user.log for full stack traces and errors related to this process.

Thank you, Matt

If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
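The identity mapping mentioned above is configured in nifi.properties on the server-side NiFi. A common sketch (the pattern and DN layout here are illustrative, not your actual values) maps a full certificate DN down to just its CN before it is handed to the authorizer:

```
# nifi.properties on the target/server-side NiFi (illustrative):
# Match a DN such as "CN=nifi-node1, OU=NIFI" ...
nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
# ... and pass only the CN ("nifi-node1") to the authorizer:
nifi.security.identity.mapping.value.dn=$1
```

If no pattern matches, the full DN string is what must appear, case sensitively, in your user entries and policies.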