Member since: 07-30-2019
Posts: 3427
Kudos Received: 1632
Solutions: 1011
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 93 | 01-27-2026 12:46 PM |
| | 503 | 01-13-2026 11:14 AM |
| | 1086 | 01-09-2026 06:58 AM |
| | 936 | 12-17-2025 05:55 AM |
| | 997 | 12-15-2025 01:29 PM |
03-23-2023
11:43 AM
@srilakshmi Yes, Apache NiFi 1.9.0 was released over 4 years ago on February 19, 2019. Many bugs, improvements, and security fixes have made their way into the product since then. The latest release as of this post is 1.20. While I can't verify 100% from what exists in this thread that you are experiencing NIFI-9688, the odds are pretty strong. You can find the release notes for Apache NiFi here: https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.20.0 If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
03-21-2023
11:49 AM
@Techie123 What is observed in the endpoint logs for these transactions? Can you share the complete stack trace from the NiFi logs? Can you share your InvokeHTTP processor configuration? Can you share your NiFi version? Thanks, Matt
03-21-2023
11:39 AM
@udayAle @ep_gunner When NiFi is brought down, the current state (stopped, started, enabled, disabled) of all components is retained, and on startup that same state is restored on the components. The only time this is not true is when the property "nifi.flowcontroller.autoResumeState" is set to false in the nifi.properties file. When set to false, a restart of NiFi results in all components being in a stopped state. In a production environment, this property should be set to true (see the snippet below). Perhaps you can share more details on the maintenance process you are using, as I am not clear on how your maintenance is impacting the last known state of some components. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
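For reference, here is roughly what that entry looks like in nifi.properties (a minimal sketch; the property name is from the answer above, the value shown is the recommended production setting):

```properties
# nifi.properties
# When true (recommended for production), components return to their last
# known state (started/stopped, enabled/disabled) after a NiFi restart.
# When false, all components come up in a stopped state after a restart.
nifi.flowcontroller.autoResumeState=true
```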
03-21-2023
06:32 AM
@davehkd When your nodes become disconnected, a reason is logged and the most recent events are also viewable from within the cluster UI via the NiFi interface. So the first question is: what reason is given for the node disconnections? Is it reporting a communication exception with ZooKeeper, or is it reporting disconnection due to lack of heartbeat (more common)?

Within a cluster, a node is elected as the cluster coordinator by ZK, and the nodes begin sending health and status heartbeats to that cluster coordinator. The default is every 5 seconds. The elected cluster coordinator expects to receive at least one heartbeat every 8x the configured heartbeat interval, so every 40 seconds. This is a pretty aggressive setting for NiFi clusters under heavy load or high heap pressure caused by dataflow design. So first make sure that every node in your cluster has the same configured heartbeat interval value (mixed values will definitely cause lots of node disconnections). If the reported reason for disconnection is lack of heartbeat, adjust the heartbeat interval to 30 seconds. This means a heartbeat would need to be missed for a 4-minute window instead of 40 seconds.

As far as GC goes, GC is triggered when Java heap utilization gets to around ~80%. How much memory have you configured your NiFi to use? Setting it really high for no reason would result in longer GC stop-the-world events. Generally NiFi would be configured with 16 GB to 32 GB for most use cases. If you find yourself needing more than that, you should take a closer look at your dataflow implementations (dataflows). The NiFi heap holds many things, including the following:

- flow.json.gz is unpacked and loaded into heap memory on startup. The flow.json.gz includes everything you have added and configured via the NiFi UI (flows, controller settings, registry clients, templates, etc.). Templates are a deprecated method of creating flow snippets for reuse. They are held in heap because they are part of the flow.json.gz, even though they are not part of any active dataflow. Downloading them for external storage and deleting them from within NiFi will reduce heap usage.
- Users and groups synced from LDAP if using the ldap-user-group-provider. You should make sure that you have configured filters on this provider so that you are limiting the number of groups and users to only those that will actually be accessing your NiFi.
- FlowFiles are what you see queued between processor components on the UI. FlowFiles consist of metadata/attributes about the FlowFile. NiFi has built-in swap settings for how many FlowFiles can exist in a given queue before they start swapping to disk (20,000, set via nifi.queue.swap.threshold in nifi.properties). Swap files are always 10,000 FlowFiles. By default, a connection has a backpressure object threshold of 10,000, which means a default connection is unlikely to reach the swap threshold and generate a swap file (connection queues are soft limits). So if you have lots of connections with queued FlowFiles, you will have more heap usage. Generally speaking, a FlowFile's default metadata attributes amount to very little heap usage, but users can write whatever they want to FlowFile attributes. If you are extracting and writing large amounts of content to FlowFile attributes in your dataflow(s), you'll have high heap usage and should ask yourself why you are doing this.
- NiFi processor components - Some processors have resource considerations that users should take into consideration when using those processors. The embedded documentation within your NiFi has a section for resource considerations under each processor's docs. Check whether you are using any with heap/memory considerations. Often heap usage can be reduced through dataflow design modifications.

The snippet below sketches the configuration entries mentioned above. I hope these details help you dig into your heap usage and help you make adjustments to improve your cluster stability. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
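A minimal sketch of the settings referenced above. The nifi.properties entries are the ones named in the answer; the heap arguments live in conf/bootstrap.conf, where the java.arg index numbers shown are assumptions based on a stock install and may differ in yours:

```properties
# nifi.properties -- set the same value on every node in the cluster
# Nodes heartbeat to the cluster coordinator at this interval; a node is
# marked disconnected after 8x this interval passes with no heartbeat.
nifi.cluster.protocol.heartbeat.interval=30 sec

# FlowFiles allowed per connection queue before attributes swap to disk.
nifi.queue.swap.threshold=20000

# conf/bootstrap.conf -- Java heap settings (index numbers may vary)
java.arg.2=-Xms16g
java.arg.3=-Xmx16g
```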
03-17-2023
07:30 AM
@anoop89 This issue is unrelated to the original thread. Please start a new question. Feel free to tag @ckumar and @MattWho in your question so we get notified. This issue is related to authorization of your user. Thanks, Matt
03-14-2023
05:56 AM
@srilakshmi The PublishKafka and PublishKafkaRecord processors do not write any new attributes to the FlowFile when there is a failure. They simply log the failure to the nifi-app.log and route the FlowFile to the failure relationship. So there is no unique error written to the FlowFile that can be used for dynamic routing on failure. It could be expensive to write stack traces that come out of client code to NiFi FlowFiles, considering FlowFile attributes/metadata reside in NiFi heap memory. This may be a topic you want to raise in the Apache NiFi Jira as a feature/improvement request on these processors, to get feedback from Apache NiFi community committers. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
03-14-2023
12:57 AM
Well, if you want to modify the port, you have to stop NiFi, modify nifi.properties, and restart NiFi. But without knowing how you configured NiFi (the content of nifi.properties), nobody can tell you how to solve your issue. Maybe you can add the content of nifi.properties here, and afterwards we can guide you further. Nevertheless, as Matt stated below, you need to check your application logs and see if NiFi started correctly. If NiFi starts correctly, you will see a log line saying that the NiFi UI (User Interface) is available at a specific URL. You need to take that URL and paste it into your browser. If such a line is not present, then you need to check the logs to see what the error is.
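For reference, these are the nifi.properties entries that control the UI port (a minimal sketch; which pair applies depends on whether your instance is configured for HTTP or HTTPS, and the hosts and ports shown here are illustrative):

```properties
# nifi.properties -- plain HTTP (unsecured) instance
nifi.web.http.host=localhost
nifi.web.http.port=8080

# nifi.properties -- HTTPS (secured) instance
nifi.web.https.host=localhost
nifi.web.https.port=8443
```

A restart of NiFi is required for any change here to take effect.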
03-13-2023
01:43 PM
Hello Matt, I was simply trying to understand if my approach to using 3rd-party certificates, as I described, was an appropriate one. Thanks for pointing out the tool. I'll use it.
03-12-2023
11:09 PM
@GSB Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
03-09-2023
11:57 AM
@davehkd Unfortunately, I would need to have access to the nifi-app.log file(s) from each node to dig in deeper. Did you copy the flow.xml.gz, flow.json.gz, users.xml, and authorizations.xml files from NiFi node 1 or 2 to NiFi node 3? These files all need to match in order for a node to join the cluster.

1. Does the UI of nifi1 or nifi2 show "2/2" in the status bar just along the top of the canvas?
2. Does the UI of nifi3 show "1/1" in the status bar just along the top of the canvas?

If both of the above are true, this indicates nifi3 is a member of a different cluster. This is a possible result of an issue with your ZK, or of the node using a different ZK root node (nifi.zookeeper.root.node). Check for any leading or trailing whitespace in your configuration (see the sketch below). You may also want to inspect your ZK logs for the connections coming from all three nodes. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Matt
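A minimal sketch of the ZooKeeper-related entries in nifi.properties that must match on every node (the hostnames here are placeholders; /nifi is the stock root node value):

```properties
# nifi.properties -- must be identical on nifi1, nifi2, and nifi3,
# with no leading or trailing whitespace in the values
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
nifi.zookeeper.root.node=/nifi
```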