Member since: 07-30-2019
Posts: 3406
Kudos Received: 1622
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 150 | 12-17-2025 05:55 AM |
| | 211 | 12-15-2025 01:29 PM |
| | 144 | 12-15-2025 06:50 AM |
| | 262 | 12-05-2025 08:25 AM |
| | 423 | 12-03-2025 10:21 AM |
01-07-2020
01:49 PM
@venu413 If you open the NiFi summary UI (NiFi UI --> Global menu --> Summary), select the Connections tab, locate the connection with the 54 queued FlowFiles, and click the cluster connection summary icon at the far right:

1. Are all 54 queued FlowFiles on the same node?
2. Is anything being logged in the nifi-app.log on the node where these FlowFiles are queued?
3. Are any errors observed in nifi-app.log during startup if you restart that node?

In your nifi.properties file, what values are configured for these properties?

nifi.cluster.load.balance.comms.timeout=
nifi.cluster.load.balance.connections.per.node=
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.max.thread.count=
nifi.cluster.load.balance.port=
nifi.cluster.node.address=

I recommend that both "nifi.cluster.node.address=" and "nifi.cluster.load.balance.host=" be configured uniquely per node in your cluster, each set to the resolvable hostname of the given node. So if your node has a hostname of node1.mycompany.com, that hostname should be used in both of these properties in the NiFi running on that host. A restart is needed any time you edit the nifi.properties file.
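As a sketch of that per-node recommendation (the hostnames here are made-up examples), the node running on node1.mycompany.com would carry:

```properties
# nifi.properties on node1.mycompany.com (example hostname)
nifi.cluster.node.address=node1.mycompany.com
nifi.cluster.load.balance.host=node1.mycompany.com
```

while node2.mycompany.com would use its own hostname in both properties. Restart each node after editing.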
12-27-2019
02:42 PM
1 Kudo
Hello @Former Member Now that you have both NiFi and NiFi-Registry secured, they will use TLS to authenticate with one another. NiFi-Registry does not initiate any connections to NiFi; NiFi always acts as the client talking to NiFi-Registry.

1. All 3 of your NiFi nodes must exist as users in NiFi-Registry.
2. Any users who will be version controlling NiFi Process Groups will need to exist as users in NiFi-Registry.
3. Your NiFi nodes must be authorized in NiFi-Registry for "Can proxy user requests" and read for "Can manage buckets". These are found by clicking Settings in the NiFi-Registry UI, selecting the Users tab, and clicking the pencil to the right of each of your NiFi nodes.
4. Your users must create a bucket (or buckets) in NiFi-Registry and authorize your NiFi user(s) for read, write, and delete on the bucket. From the same Settings UI, click the Buckets tab, click "add bucket", then use the pencil to the left of the bucket to authorize your user(s).
5. From the NiFi UI, click on the global menu (upper right corner) --> Controller Settings --> Registry Clients tab. Click the "+" icon to add a new NiFi-Registry client. Provide HTTPS://<nifi-registry-hostname:port> as the URL and a name of your choosing.

Provided the keystores and truststores created for your NiFi and NiFi-Registry support mutual authentication between these two services, you will be good to go. Otherwise, check your nifi and nifi-registry app logs for any TLS handshake errors, which would need to be resolved.

Hope this helps you get going with NiFi-Registry,
Matt
12-27-2019
02:25 PM
@stevenmatison I don't see any reason why you can not do this. Do NOT put the 1.10 nar into NiFi's default lib directory. Instead, create a new custom lib directory by adding a new property to the nifi.properties file:

nifi.nar.library.directory.custom1=/<path>/custom-libs

Copy the 1.10 parquet nars into this directory and make sure the permissions and ownership are set correctly for this custom path and libs so that the NiFi service user can access them. Once NiFi is restarted, you should see both versions of the parquet processor available for adding to the NiFi canvas.

Note: Adding additional versions of the same nars can make upgrading a bit more challenging. After copying the new nar into the custom nar lib, you have both 1.9 and 1.10 versions of these NiFi components. Let's say you later upgrade to 1.11 when it becomes available. NiFi normally handles upgrading components to new versions during startup (1.9 processors upgraded to 1.11), but in your case, after the upgrade you will have the 1.10 nar in your custom lib plus the 1.11 nar that came with the upgrade. Any 1.9 parquet components from your flow.xml.gz will become ghost processors on the canvas, because NiFi does not have a version 1.9 and there are two candidates (1.10 and 1.11), so it picks neither. You would need to either drop the ghost processors and add the 1.10 or 1.11 version in their place, or manually edit the flow.xml.gz to change the version number to the desired available version.

Hope this helps,
Matt
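The steps above can be sketched as follows. NIFI_HOME, the nar filename, and the target paths are example values, not your actual install:

```shell
# Example paths only; point these at your real NiFi install and downloaded nar.
NIFI_HOME="${NIFI_HOME:-/tmp/nifi-demo}"
CUSTOM_LIB="$NIFI_HOME/custom-libs"
mkdir -p "$CUSTOM_LIB" "$NIFI_HOME/conf"

# Stand-in for the real 1.10 parquet nar you downloaded:
touch /tmp/nifi-parquet-nar-1.10.0.nar
cp /tmp/nifi-parquet-nar-1.10.0.nar "$CUSTOM_LIB/"
chmod 640 "$CUSTOM_LIB"/*.nar   # also chown to the NiFi service user in a real install

# Register the custom lib directory with NiFi:
echo "nifi.nar.library.directory.custom1=$CUSTOM_LIB" >> "$NIFI_HOME/conf/nifi.properties"
```

After a restart, both parquet processor versions should then be selectable on the canvas.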
12-27-2019
07:14 AM
@Former Member Since you are asking a new question unrelated to the question asked in the original subject, I kindly ask that you start a new thread; I would be happy to help there. Asking multiple questions in one thread makes it harder to follow for other users of this community forum. If you feel this question's subject has been answered, please accept a provided solution to close out this thread. Thank you, Matt
12-27-2019
07:09 AM
1 Kudo
@saivenkatg55 A very common reason for UI slowness is JVM Garbage Collection (GC). All GC events are stop-the-world events, whether partial or full. Partial/young GC is normal and healthy, but if it is being triggered back to back non-stop, or is running against a very large configured JVM heap, it can take time to complete. You can enable GC logging in your NiFi bootstrap.conf file so you can see how often GC is running to attempt to free space in your NiFi JVM. To do this, add some additional java.arg.<unique num>= entries in your NiFi bootstrap.conf as follows:

java.arg.20=-XX:+PrintGCDetails
java.arg.21=-XX:+PrintGCTimeStamps
java.arg.22=-XX:+PrintGCDateStamps
java.arg.23=-Xloggc:<file>

The last entry lets you specify a separate log file for this output to be written into, rather than stdout.

NiFi does store information about component status in heap memory. This is the info you see on any component (processor, connection, process group, etc.) when you right-click on it and select "view status history" from the displayed context menu. You'll notice that components report status for a number of data points. When you restart your NiFi, everything in JVM heap memory is gone, so over the next 24 hours (the default data point retention) the JVM heap will fill back up with a full set of status points. If this status history is not that important to you, you can reduce heap usage by adjusting the component status history buffer size and data point frequency via the following properties in the nifi.properties file:

nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min

The above represents the defaults: for every single component, NiFi will retain 1440 status points (recording 1 point every 1 min), totaling 24 hours' worth of status history per status point. Changing the buffer to 288 and the frequency to 5 minutes will reduce the number of points retained by 80% while still giving you 24 hours' worth of points.

The dataflows you build may also result in high heap usage, triggering a lot of heap pressure. The NiFi components that can result in high heap usage are documented. From the NiFi global menu in the upper right corner of the NiFi UI, select "Help". You will see the complete list of components on the left-hand side; when you select a component, details about it are displayed, including its "System Resource Considerations". The MergeContent processor, for example, documents memory among its system resource considerations. You may need to make adjustments to your dataflow designs to reduce heap usage.

NiFi also holds FlowFile metadata for queued FlowFiles in heap memory. NiFi does have a configurable swap threshold (applied per connection) to help with heap usage here: when a queue grows too large, FlowFile metadata in excess of the configured swap threshold is written to disk. The swapping of FlowFiles in and out from disk can affect dataflow performance. NiFi's default backpressure object threshold settings for connections are set low enough that swapping would typically never occur; however, if you have lots and lots of connections with queued FlowFiles, that heap usage can add up. This article I wrote may help you here: https://community.cloudera.com/t5/Community-Articles/Dissecting-the-NiFi-quot-connection-quot-Heap-usage-and/ta-p/248166

Other than heap usage, component validation can affect NiFi UI responsiveness. Here is an article I wrote about that: https://community.cloudera.com/t5/Community-Articles/HDF-NiFi-Improving-the-performance-of-your-UI/ta-p/248211

Here is another useful article you may want to read: https://community.cloudera.com/t5/Community-Articles/HDF-NIFI-Best-practices-for-setting-up-a-high-performance/ta-p/244999

Hope this helps you with some direction to improve your NiFi UI responsiveness/performance,
Matt
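For instance, the 80% reduction described above corresponds to this nifi.properties change (a sketch of the suggested values, not a universal recommendation):

```properties
# 288 points at 1 point per 5 minutes = 24 hours of history, ~80% fewer points
nifi.components.status.repository.buffer.size=288
nifi.components.status.snapshot.frequency=5 mins
```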
12-26-2019
08:38 AM
1 Kudo
@Former Member Make sure you have configured "nifi.registry.security.needClientAuth=false". When not set, it defaults to true. needClientAuth=true tells NiFi-Registry that in the TLS handshake it will require the client to present a client-side certificate. If one is not presented, the connection will simply close and NiFi-Registry will never try any other authentication method. This property must be set to false in order for NiFi-Registry to support any authentication method other than TLS. Hope this gets you going, Matt
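In nifi-registry.properties this is simply:

```properties
# false = client certs become optional, allowing other login methods to run
nifi.registry.security.needClientAuth=false
```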
12-23-2019
10:24 AM
@Former Member Simply configuring the ldap-provider in the identity-providers.xml file will not result in NiFi-Registry using it. Make sure you have set the following property in the nifi-registry.properties file:

nifi.registry.security.identity.provider=ldap-provider

This tells NiFi-Registry to use the "ldap-provider" configured in that file. Also make sure the file is named "identity-providers.xml" and not "login-identity-providers.xml"; NiFi-Registry uses the former while NiFi uses the latter identity providers filename.

One other thing to consider... if NiFi-Registry is configured to support Spnego:

nifi.registry.kerberos.spnego.authentication.expiration=12 hours
nifi.registry.kerberos.spnego.keytab.location=
nifi.registry.kerberos.spnego.principal=

Spnego auth will be attempted before any configured identity provider. So all it takes is having Spnego enabled in your browser and NiFi-Registry set up to support Spnego auth, and you will not see the login page either. If you do not have Spnego enabled in your browser, then this is not your issue, because even when Spnego is configured, if the browser does not return Spnego creds, NiFi-Registry will move on to the next configured authentication provider.

Hope this helps,
Matt
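For reference, a minimal ldap-provider entry in identity-providers.xml looks roughly like the following. All hostnames, DNs, and passwords are placeholder values, and the exact property set may vary by NiFi-Registry version:

```xml
<identityProviders>
    <provider>
        <identifier>ldap-provider</identifier>
        <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
        <property name="Authentication Strategy">SIMPLE</property>
        <property name="Manager DN">cn=admin,dc=example,dc=org</property>
        <property name="Manager Password">password</property>
        <property name="Url">ldap://ldap.example.org:389</property>
        <property name="User Search Base">ou=users,dc=example,dc=org</property>
        <property name="User Search Filter">uid={0}</property>
        <property name="Identity Strategy">USE_USERNAME</property>
        <property name="Authentication Expiration">12 hours</property>
    </provider>
</identityProviders>
```

with nifi.registry.security.identity.provider=ldap-provider pointing at the <identifier> above.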
12-23-2019
10:12 AM
@Siraj Does your ConsumeMQTT processor produce all output FlowFiles with the same filename?

Your MergeContent processor will merge once both configured min settings are satisfied at the end of an execution. If your MergeContent processor is configured to run as fast as possible (run schedule set to the 0 sec default), it may upon execution see only one FlowFile on the incoming connection at that moment in time, allocate just that one FlowFile to a bin, and merge it alone, since you set your min num entries to "1". I suggest you edit your MergeContent as follows:

1. Configure "Correlation Attribute Name" to "filename".
2. Perhaps increase your min setting from 1 to some higher value.
3. Always set "Max Bin Age". (This is your forced-merge property; it will force a bin to merge even if it has not reached both min values within the configured amount of time.)
4. Make sure you have enough bins to accommodate the expected number of unique filenames plus 1 extra. (If all bins have FlowFiles allocated to them and the next FlowFile cannot be added to an existing bin, the oldest bin will be forced to merge to free a bin.)

The NiFi PutFile processor does not support append. Append actions can cause all kinds of issues, especially with a NiFi cluster where the target directory of the PutFile is mounted to all the NiFi nodes; you can't have multiple nodes trying to write/append to the same file at the same time. My suggestion would be to use the UpdateAttribute processor before your PutFile to modify the filename attribute. Perhaps prepend a UUID to the filename to ensure uniqueness across multiple files or NiFi nodes (if clustered): ${UUID()}-$(unknown)

Hope this helps you,
Matt
12-23-2019
09:58 AM
@Siraj The NiFi PutFile processor does not support append. Append actions can cause all kinds of issues, especially with a NiFi cluster where the target directory of the PutFile is mounted to all the NiFi nodes; you can't have multiple nodes trying to write/append to the same file at the same time. My suggestion would be to use the UpdateAttribute processor before your PutFile to modify the filename attribute. Perhaps prepend a UUID to the filename to ensure uniqueness across multiple files or NiFi nodes (if clustered): ${UUID()}-$(unknown)

Hope this helps you,
Matt
12-23-2019
09:48 AM
@Boenu You will need to configure your HandleHttpRequest processor with an SSL Context Service in order to encrypt data in transit being sent to this processor from a client. This of course means your client needs, at a minimum, to be able to trust the server certificate presented by this SSL Context Service in the TLS handshake. The truststore you use in the NiFi SSL Context Service only needs to contain the public cert (or complete certificate trust chain) for your client if you have configured your HandleHttpRequest processor's Client Authentication property to "Need authentication". Mutual authentication is not needed to ensure encryption of data in transit. Hope this helps, Matt