Member since: 07-30-2019
Posts: 3396
Kudos Received: 1619
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 422 | 11-05-2025 11:01 AM |
| | 327 | 11-05-2025 08:01 AM |
| | 462 | 11-04-2025 10:16 AM |
| | 680 | 10-20-2025 06:29 AM |
| | 820 | 10-10-2025 08:03 AM |
09-01-2017
04:56 PM
The issue was browser-version related; switching to a newer version of the browser resolved it.
08-29-2017
07:20 PM
@kdoran Thanks a lot, that makes sense. It worked.
08-31-2017
12:39 PM
@sally sally The user who is logged in and building out the dataflow has no correlation to who that dataflow runs as. All the processors on the canvas are executed by the user who owns the NiFi process itself.

When you set up an SSL Context Service to use a specific keystore and truststore, the PrivateKeyEntry in that keystore is used as the user for authentication and authorization during any established connection. The TrustedCertEntry(s) in the truststore provided in the SSL Context Service are used to establish trust of the server certificates presented by the endpoint (in your case, the certs presented by your NiFi nodes) during the two-way TLS handshake.

Now, this is a little different than when you log in via the browser to the UI. Two-way TLS is not enforced by your browser the way it is by NiFi's processors. Your browser likely did not trust the cert presented by your NiFi nodes, and you added an exception the first time you connected, saying that you would like to trust that unknown cert coming from the NiFi node. Within NiFi and the SSL Context Service, there is no way to add such an exception, so trust must work in both directions. This means the truststore you use in your SSL Context Service must be able to trust the certificates presented by each of your NiFi nodes. Thanks, Matt
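Since the distinction between entry types trips people up, here is a minimal Java sketch (not from the original post; the file path and password are placeholder assumptions) that lists whether each alias in a JKS file is a PrivateKeyEntry or a TrustedCertEntry:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Enumeration;

// Minimal sketch: list which aliases in a JKS file are PrivateKeyEntries
// (the identity presented during the TLS handshake) vs TrustedCertEntries
// (peer certificates the store will trust). Path/password are placeholders.
public class ListKeystoreEntries {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "truststore.jks"; // placeholder path
        char[] password = (args.length > 1 ? args[1] : "changeit").toCharArray();

        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }

        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (ks.isKeyEntry(alias)) {
                System.out.println(alias + " -> PrivateKeyEntry");
            } else if (ks.isCertificateEntry(alias)) {
                System.out.println(alias + " -> TrustedCertEntry");
            }
        }
    }
}
```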
08-16-2017
04:18 PM
1 Kudo
@nesrine salmene The database repository consists of two H2 databases:

- nifi-user-keys.h2.db
- nifi-flow-audit.h2.db

While NiFi is running, you will see two additional lock files that correspond to these databases. The nifi-user-keys.h2.db is only used when NiFi has been secured; it contains information about who has logged in to NiFi. The same information is also output to the nifi-user.log, so you can parse nifi-user.log to audit who has logged in to a particular NiFi instance. The nifi-flow-audit.h2.db is used by NiFi to keep track of all configuration changes made within the NiFi UI. The information contained in this DB is viewable via the "Flow Configuration History" embedded UI, found under the hamburger menu in the upper-right corner of NiFi's UI. You can also use NiFi's REST API to query the Flow Configuration History. Thanks, Matt
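As an illustration of querying that history over the REST API, here is a minimal Java 11+ sketch. It assumes an unsecured NiFi instance at localhost:8080 and the /nifi-api/flow/history endpoint with offset/count parameters; a secured instance would additionally need TLS configuration and an Authorization header:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: pull the Flow Configuration History over NiFi's REST API.
// localhost:8080 and the offset/count values are assumptions for illustration.
public class FlowHistoryQuery {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/flow/history?offset=0&count=25"))
                .GET()
                .build();

        // The response body is JSON describing recent configuration actions.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```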
10-30-2017
06:23 PM
1 Kudo
In later versions of NiFi, you may also consider using the "record-aware" processors and their associated Record Readers/Writers. These were developed to avoid this multiple-split problem, as well as the volume of provenance generated by each split FlowFile in the flow.
08-29-2017
05:48 PM
@Wesley Bohannon Glad you came up with a solution. Sorry I did not get back to you sooner. Vacation got in the way. 🙂
10-13-2017
04:01 PM
@Hadoop User If merging FlowFiles and adding more concurrent tasks to your PutHDFS processor helped with your performance issue here, please take a moment to click "accept" on the above answer to close out this thread. Thank you, Matt
10-17-2017
02:19 PM
@Shawn Weeks I have found the solution. The issue was with the principal used for permission validation. Thanks for your help
08-04-2017
12:59 PM
Thanks, Matt Clarke
08-04-2017
03:17 PM
1 Kudo
@J. D. Bacolod Those processors were added for specific use cases such as yours. You can accomplish almost the same thing using the PutDistributedMapCache and FetchDistributedMapCache processors along with an UpdateAttribute processor.

I used the UpdateAttribute processor to set a unique value in a new attribute named "release-value". The FetchDistributedMapCache processor then acts as the Wait processor does, looping FlowFiles through the "not-found" relationship until the corresponding value is found in the cache. The "release-value" is written to the cache by the PutDistributedMapCache processor down the other path, after the InvokeHTTP processor, where it receives the "Response" relationship.

Keep in mind that the FetchDistributedMapCache processor does not have an "expire" relationship. If a response is never received for some FlowFile, or the cache expired/evicted the needed value, those FlowFiles will loop forever. You can solve this in two ways:

1. Set File Expiration on the connection containing the "not-found" relationship, which will purge FlowFiles that have not found a matching key value in the cache by the time their age reaches x. With this option, aged data is simply lost.
2. Build a FlowFile expire loop that kicks these looping not-found FlowFiles out of the loop after x amount of time so they can be handled by other processors. This can be done using the "Advanced" UI of an UpdateAttribute processor and a RouteOnAttribute processor. The UpdateAttribute processor sets a new attribute I called "initial-date" if and only if it has not already been set on the FlowFile. The RouteOnAttribute processor then compares the current date against that attribute's value to see if the FlowFile has been looping for more than x amount of time. Using 6 minutes (360000 ms) as an example, FlowFiles that have been looping for 360000 milliseconds or more get routed to an "expired" relationship, where you can choose what you want to do with them. (A sketch of this expire check follows below.)

As you can see, the newer processors wrap the above flow up in only two processors, versus the five you would need in older versions to get the same functionality. Thanks, Matt
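To make that expire check concrete, here is a small Java sketch of the same logic (the "initial-date" attribute name and the 360000 ms threshold come from the post; the rest is an illustrative assumption, not actual NiFi configuration):

```java
import java.util.Map;

// Minimal sketch of the expire-loop logic described above: a FlowFile's
// attributes are stamped with "initial-date" the first time it is seen,
// and it is treated as expired once it has looped past the threshold.
public class ExpireLoopCheck {
    private static final long THRESHOLD_MS = 360_000; // 6 minutes, as in the example

    // Mirrors the UpdateAttribute rule: set "initial-date" only if not already set.
    static void stampIfMissing(Map<String, String> attributes) {
        attributes.putIfAbsent("initial-date", Long.toString(System.currentTimeMillis()));
    }

    // Mirrors the RouteOnAttribute rule: has this FlowFile looped too long?
    static boolean isExpired(Map<String, String> attributes) {
        long initialDate = Long.parseLong(attributes.get("initial-date"));
        return System.currentTimeMillis() - initialDate > THRESHOLD_MS;
    }
}
```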