Member since: 07-30-2019
Posts: 3417
Kudos Received: 1623
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 451 | 12-17-2025 05:55 AM |
| | 512 | 12-15-2025 01:29 PM |
| | 540 | 12-15-2025 06:50 AM |
| | 398 | 12-05-2025 08:25 AM |
| | 662 | 12-03-2025 10:21 AM |
10-06-2023
09:01 AM
@PriyankaMondal 
1. I'm not clear on the question here. Why use the Toolkit to create three keystores? I thought you were getting three certificates (one for each node) from your IT team. Use those to create the three unique keystores you will use. 
2. It appears your DN has a wildcard in it. NiFi does not support the use of wildcards in the DN of node clientAuth certificates. This is because NiFi utilizes mutualTLS connections, and the clientAuth DN is used to identify each unique connecting client and to set up and configure authorizations.

Now, you could ask your IT team to create you one keystore with a non-wildcard DN like "CN=nifi-cluster, OU=domainlabs, DC=com" and add all three of your NiFi nodes' hostnames as SAN entries in that one PrivateKeyEntry. This would allow you to use that same private-key keystore on all three NiFi nodes. This has downsides, like security: if the keystore on one node gets compromised, all hosts are compromised because it is reused. All nodes will also present the same client identity during authorization (since all present the same DN), so nothing will distinguish one node from another.

The keystore used by NiFi can contain ONLY one PrivateKeyEntry. Merging multiple keystores with private-key entries will result in one keystore with more than one PrivateKeyEntry, which is not supported by NiFi.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
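If it helps, here is one quick way to check whether a certificate's DN contains a wildcard and which SAN entries it carries. The filenames are hypothetical, and a throwaway self-signed cert is generated first only so the commands run end to end (the `-addext`/`-ext` options assume OpenSSL 1.1.1+); in practice you would inspect the certificate your IT team provides:

```shell
# Throwaway cert with a non-wildcard DN and all three node hostnames as SANs
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -days 1 -subj "/CN=nifi-cluster/OU=domainlabs/DC=com" \
  -addext "subjectAltName=DNS:nifi-node1,DNS:nifi-node2,DNS:nifi-node3"

# Inspect the DN (look for wildcards) and the SAN entries
openssl x509 -in demo.pem -noout -subject -ext subjectAltName
```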
10-05-2023
08:02 AM
@LKB Can you share screenshots of your UpdateAttribute processor configuration? Are you using the advanced UI of the UpdateAttribute processor? The UpdateAttribute processor is fairly simple in design. Without configuring the advanced UI, it can only remove attributes or create/modify existing attributes. Each attribute is defined by a key:value pair, where the property name is the key and the property value is the value. The advanced UI allows for conditional attribute additions or modifications. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
10-05-2023
07:49 AM
@hkh After upgrading your DB, did you upgrade the driver used in the DBCPConnectionPool controller service to the latest version? This may be the reason why the controller service is still trying to enable. You may also want to look at a NiFi thread dump to see what that enabling controller service is waiting on. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
10-05-2023
07:41 AM
@PriyankaMondal You should have a signed certificate for each of your three NiFi nodes. Make sure those certificates meet the minimum requirements for NiFi:
- The certificate DN can not contain wildcards.
- The certificate Extended Key Usage (EKU) must include "clientAuth" and "serverAuth".
- The certificate must contain a SAN entry for the server hostname and any alternate DNS names that server may use.

The certificate (PrivateKey) needs to be placed inside a JKS or PKCS12 keystore. There are plenty of resources on the web for creating keystores, but essentially you want to combine your pem and key files to make a p12 file. You can then import that p12 file into a JKS keystore. A NiFi keystore must contain ONLY one PrivateKeyEntry, so don't create a single keystore into which you import all 3 private keys. You should have three separate keystores (one for each NiFi node).

NiFi uses two stores (keystore and truststore):
- Keystore - contains only one PrivateKeyEntry (unique to each NiFi node).
- Truststore - contains one to many TrustedCertEntries. The same truststore is used on all NiFi nodes.

The truststore needs to contain the complete trust chain for your nodes' private keys. A certificate is signed by an authority. In order for a server to trust a certificate presented in a TLS exchange, the authorities that signed that certificate must be trusted. That is where the truststore comes into play. An authority can be of two types: an intermediate CA or a root CA. An intermediate CA is one where the issuer and signer are two different entities (DNs don't match). A root CA is one where the issuer and signer are the same (DNs match). Let's say your private key with DN = "CN=node1, OU=NiFi" was signed by an intermediate corporate CA with DN = "CN=Intermediate1, OU=company", and that intermediate CA's TrustedCert was signed by a root CA with DN = "CN=RootCA, OU=company". In order for your truststore to have the complete trust chain, the NiFi truststore would need to contain a TrustedCertEntry for both the intermediate CA and the root CA. For the truststore you will need to get the public cert(s) from your IT team (who should also be able to help you with your keystore and truststore creation).

As far as the setup of NiFi goes, nothing else is different from what you did when using the self-signed certificates when it comes to configuration. Keep in mind that each node's identity is derived from the node's private certificate DN. That DN is evaluated against any configured user identity mapping patterns in the nifi.properties file. If the Java regex pattern matches the certificate DN, the mapping value and mapping transform are applied. The resulting mapped identity is what needs to be authorized in NiFi, so these mapped identities become your node identities when configuring the NiFi authorizer. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
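A minimal sketch of the pem/key-to-keystore steps described above. Filenames, aliases, and passwords are placeholders, and a self-signed cert is generated first only so the commands are runnable; substitute the signed cert and key from your IT team:

```shell
# Stand-in for the signed cert and key you would receive from your IT team
openssl req -x509 -newkey rsa:2048 -nodes -keyout node1.key -out node1.pem \
  -days 1 -subj "/CN=node1/OU=NiFi" -addext "subjectAltName=DNS:node1"

# Combine the pem and key into a PKCS12 keystore with a single PrivateKeyEntry
openssl pkcs12 -export -in node1.pem -inkey node1.key -name node1 \
  -out keystore.p12 -passout pass:changeit

# Optionally convert to JKS (NiFi also accepts PKCS12 directly)
if command -v keytool >/dev/null; then
  keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
    -srcstorepass changeit -destkeystore keystore.jks \
    -deststorepass changeit -noprompt
fi
```

Repeat per node so each keystore holds exactly one PrivateKeyEntry; the truststore is built separately by importing the CA public certs as TrustedCertEntries.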
10-04-2023
05:46 AM
@hegdemahendra The autoload capability in NiFi can only auto-load new nars added to the directory. It does not handle unload or reload. The reason is that a reload would require upgrading existing components that use a previously loaded nar. This process would require stopping all components added to the canvas from that nar, upgrading all those components to the new nar version, and then starting the components again. You also have the issue that the flow.json.gz has already been loaded into memory with a different component version, as well as the case where someone adds a new nar version without removing the old nar version first.

Once multiple versions of the same class are loaded, you should be able to click on a component on the canvas and switch to the other version. By design, NiFi allows multiple versions of the same components to be loaded (it has always been that way), so there has never been a capability to trigger an upgrade of components on the canvas when multiple versions of the same component class are loaded. NiFi can only change a component's version on startup, and only if exactly one version of the component class exists at startup.

On startup, NiFi loads the NiFi lib nars and any nars found in custom lib folders or the autoload directory. These nars get unpacked into a work directory. NiFi then starts loading the dataflow from the flow.json.gz file, which contains each component's class, version, and configuration. When loading a component whose version is not found, but ONLY one different version of that same component class is found, NiFi will switch that component to that version of the class (which could be an older or newer version). If any component versions changed on startup, a new flow.json.gz is written out to match what was loaded into memory. 
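For reference, the autoload directory is configured in nifi.properties; the path below is the default in recent Apache NiFi 1.x releases, so verify it against your own install:

```properties
# nifi.properties - directory NiFi polls for newly added NARs
nifi.nar.library.autoload.directory=./extensions
```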
If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
10-03-2023
07:36 AM
1 Kudo
@edim2525 GC kicks in around 80% of heap memory usage. You could certainly enable GC debug logging to verify that GC is executing. GC can only clean up unused memory (memory no longer being held by a process). I see you have three NiFi nodes, and the node with growing heap usage is the elected primary node. Some questions:
- Are you only having heap memory usage issues on the one node?
- What processors do you have running with "primary node" execution?
- Does your primary node have a lot more queued FlowFiles than the other nodes?
- If you disconnect the primary node from your cluster, which will force a new primary node to be elected, does the heap then start to grow on the newly elected primary node?
- What version of NiFi are you running? What version of Java is your NiFi running with?
- Have you collected heap dumps and analyzed them to see where the heap is being used?
- Do you have any custom processors added to your NiFi? Are you using any scripting-based processors where you have written your own code that is executed within NiFi?

Matt
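To enable GC logging, an extra JVM argument can be added to NiFi's bootstrap.conf. The arg index 20 below is arbitrary (pick any unused number), and `-Xlog:gc*` is the Java 9+ flag; Java 8 uses `-XX:+PrintGCDetails -Xloggc:<file>` instead:

```properties
# bootstrap.conf - unified GC logging (Java 9+), rotated at 5 x 10M files
java.arg.20=-Xlog:gc*:file=./logs/gc.log:time,uptime:filecount=5,filesize=10M
```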
09-29-2023
06:04 AM
@VLban From what you have shared, I don't think you are having any issues with your NiFi communicating with your ZooKeeper. When NiFi is running, it sends a heartbeat message to ZK so that ZK knows that node is available. ZK is used to facilitate the election of two NiFi roles:
1. Cluster Coordinator - Only one node in the NiFi cluster can be elected cluster coordinator. The cluster coordinator is responsible for replicating requests made from any node to all nodes in the cluster. This allows NiFi to support a zero-master architecture, meaning users do not need to connect to the elected cluster coordinator node in order to make changes; users can interact with the NiFi cluster from any node.
2. Primary Node - Only one node at a time can be elected to this role. The node with this role is the only node that schedules processors configured with "primary node" only execution.

Your log output indicates that ZK is receiving these heartbeats from at least some of the 10 nodes (maybe all of them, but we know the node from which you got this log is talking to ZK fine), allowing the cluster coordinator election to succeed. We see that "sd-sagn-rtyev:9082" was elected with the cluster coordinator role. Once nodes are aware of who the elected cluster coordinator is, they will start sending cluster heartbeats to that elected cluster coordinator. The initial set of heartbeats is used to connect the nodes to the cluster (things like making sure all nodes are running the exact same flow.xml.gz/flow.json.gz and have matching users.xml and authorizations.xml files).

If your NiFi is secured (running over HTTPS), then all communications between nodes are over mutualTLS encrypted connections. Based on the exception you shared, it sounds like this connection between node(s) and the elected cluster coordinator is failing.
1. Make sure that all nodes can properly resolve the cluster hostnames to reachable IP addresses.
2. Make sure that the PrivateKeyEntry in each node's keystore configured in nifi.properties supports the clientAuth and serverAuth EKUs and has the required host SAN entry(s).
3. Make sure that the truststore used on every node contains the complete trust chain for all the privateKey entries used by all 10 nodes. A private key may be signed by a root or intermediate CA (an intermediate CA may be signed by another intermediate CA or the root CA). A complete trust chain consists of ALL trusted public certificates from the signer of the private key up to the root CA.

If a mutualTLS handshake can not be established, typically one side or the other will simply close the connection, most commonly as a result of a lack of proper trust. This would explain the "Broken pipe (write failed)", as the client was unable to send its heartbeat connection request to the elected cluster coordinator. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
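The trust chain and EKU checks above can be exercised with openssl. A demo root CA and node cert are generated here only so the commands run end to end; substitute your real node cert and CA bundle (all filenames are placeholders):

```shell
# Demo root CA (issuer == subject) and a node cert signed by it
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 1 -subj "/CN=RootCA/OU=company"
openssl req -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
  -subj "/CN=node1/OU=NiFi"
printf "extendedKeyUsage=clientAuth,serverAuth\nsubjectAltName=DNS:node1\n" > ext.cnf
openssl x509 -req -in node.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -days 1 -extfile ext.cnf -out node.pem

# The node cert must verify against the complete CA bundle (here just the root)
openssl verify -CAfile ca.pem node.pem

# Confirm the EKUs NiFi requires are present (OpenSSL 1.1.1+ for -ext)
openssl x509 -in node.pem -noout -ext extendedKeyUsage
```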
09-28-2023
01:24 PM
@Frank37 Welcome to the community. NiFi's provenance tracks specific types of provenance events, and not all processors produce these types of events, so typically a processor like LogMessage would not be producing any provenance events. Provenance is used to track the lifecycle of a NiFi FlowFile, recording any time it is routed, modified (content or attributes), cloned, dropped, etc. That being said, it appears your LogMessage processor is auto-terminating its success relationship? In that case it should be generating "DROP" provenance events. I set up a simple flow to verify this works successfully, so your issue seems unrelated to the processor specifically.

The first thing to check is your provenance-related configuration properties in the nifi.properties file. You stated the oldest event is more than a month old, which is not expected unless you changed your default provenance settings, which only retain provenance events for 7 days. If you go to the global menu in the upper right corner of the UI and select "Data Provenance", do you get any provenance results? If so, the newest will be at the top. What is the date on the newest event? If it is old and you have active running flows, this tells me provenance has not been updating for some time. Does the newest provenance event align with some change to your NiFi or an upgrade/migration?

Maybe a configuration or corruption issue: Which provenance persistence provider are you using? What is the newest file in your provenance repository storage directory? I see you are running Apache NiFi 1.19. Was this instance always running this version, or did you upgrade at some point in time? If so, what version was used previously? There were some bugs in older versions of NiFi that could have rendered the provenance repository corrupt if this was upgraded from an older version at one point in time. Stopping NiFi, purging all contents of the provenance repository, and starting up would resolve that.

Maybe an authorization issue: Has your user been authorized to "view provenance" data on this component or on the process group in which this processor resides? The authorization to "query provenance" is not the same as the authorization to "view provenance" events produced by individual components. Processors inherit authorization from the parent process group unless explicit policies have been set directly on the component. Process groups in turn inherit from parent process groups if an explicit policy is not set on the child process group. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
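These are the nifi.properties entries worth reviewing for the configuration angle. The property names are from Apache NiFi 1.x; the values shown are examples, not necessarily your defaults:

```properties
# nifi.properties - provenance repository settings to review
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=7 days
nifi.provenance.repository.max.storage.size=10 GB
```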
09-28-2023
12:54 PM
@sarithe You may also want to take a look at the Process Group (PG) FlowFile Concurrency configuration options as a possible design path, since there does not appear to be any dependency between task 1 and task 2 in your description; you just want to make sure that no more than 2 tasks are executing concurrently. You move the processors that handle the 2 task executions inside two different child PGs configured with the "Single FlowFile per Node" Process Group FlowFile Concurrency. Within each PG you create an input port and an output port, and between these two ports you handle your task dataflow. Outside these PGs (at the parent PG level), you handle the triggering FlowFiles. Each task PG will allow 1 FlowFile at a time to enter, and because of the FlowFile Concurrency setting, it will not allow any more FlowFiles to enter until that FlowFile processes out. As you can see from the above example, each task PG is only processing a single FlowFile at a time. I built this example so that task 2 always takes longer, so you see that the task 1 PG outputs more processed FlowFiles than the task 2 PG while still making sure that no two tasks are ever executed concurrently. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
09-27-2023
06:29 AM
@lafi_oussama Does the zip file actually contain files, or only empty directories?