Member since: 07-30-2019
Posts: 3434
Kudos Received: 1632
Solutions: 1012
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 125 | 01-27-2026 12:46 PM |
| | 542 | 01-13-2026 11:14 AM |
| | 1176 | 01-09-2026 06:58 AM |
| | 980 | 12-17-2025 05:55 AM |
| | 486 | 12-17-2025 05:34 AM |
01-19-2022
05:45 AM
@Wisdomstar Thank you, I appreciate that, and I'm glad I could help. Matt
01-18-2022
10:43 AM
@LuisLeite Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
01-18-2022
08:34 AM
@Kilynn As I mentioned in my last response, once memory usage got too high, the OS-level OOM Killer was most likely killing the NiFi service to protect the OS. The NiFi bootstrap process would have detected that the main process died and started it again, assuming the OOM killer did not also kill the parent process.
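If you want to confirm that on the host, the kernel logs OOM killer activity; here is a hedged sketch of where to look (the exact message wording varies by distribution and kernel version):

```bash
# Check kernel logs for OOM killer events (message format varies by distro/kernel).
dmesg -T | grep -iE 'out of memory|oom-killer|killed process'
# On systemd-based hosts the kernel ring buffer is also available via:
journalctl -k | grep -i 'killed process'
```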
01-18-2022
08:23 AM
@oopslemon NiFi only encrypts and obscures values in properties that are specifically coded as sensitive (for example, "password" properties). There is no way at this time to encrypt all or portions of property values that are not coded as sensitive. Keep in mind it is not just what is visible in the UI; your unencrypted passwords will also be in plaintext within the NiFi flow.xml.gz file.

My recommendation is to use mutual TLS based authentication instead. You can create a clientAuth certificate to use in your rest-api calls. Then you need to make sure that your clientAuth certificate is authorized to perform the actions the rest-api call is making. This is not possible while using the single-user login mode, as it does not allow you to set up additional users and authorizations. The single-user authentication and authorization providers were added to protect users from unprotected access to their NiFis; they were not meant to be the desired choice when securing your NiFi. They are one step above the unsecured default setup that existed prior to NiFi 1.14. They protect you, but they also have limitations that go with their very basic functionality.

So step one is to switch to another method of authentication and authorization for your NiFi. TLS authentication is always enabled as soon as NiFi is configured for HTTPS. You can configure additional authentication methods like LDAP/AD: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#user_authentication

The authorizer configured in the authorizers.xml file allows you to establish policies that control user/client permissions: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#multi-tenant-authorization

Then you can configure your InvokeHTTP processor to simply use an SSLContextService configured with your clientAuth certificate keystore and a truststore. The password fields in this controller service would be encrypted. There is no more need to constantly get a new bearer token; all you need to worry about is getting a new client certificate before the old one expires, which is typically every 2 years, but that is configurable when you create it and get it signed.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
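As an illustration of the client-certificate approach (the endpoint placeholder and the PEM file names/paths below are examples, not from the original post), a direct curl call using mutual TLS instead of a bearer token might look like this:

```bash
# Hedged sketch: authenticate to the NiFi REST API with a clientAuth certificate
# (mutual TLS) rather than a bearer token. File names and paths are examples only.
curl 'https://<hostname>:<port>/nifi-api/<endpoint>' \
  --cert /path/to/client-cert.pem \
  --key /path/to/client-key.pem \
  --cacert /path/to/nifi-ca.pem
```

Within NiFi itself, the equivalent is the SSLContextService on InvokeHTTP configured with the corresponding keystore and truststore, as described above.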
01-14-2022
11:09 AM
@sandip87 It would be difficult to point out the specific misconfiguration without seeing your nifi.properties, login-identity-providers.xml, and authorizers.xml files. Sharing the complete stack trace thrown during NiFi service startup would be helpful as well; there should be more logged in the nifi-app.log than what you shared. Thanks, Matt
01-14-2022
10:37 AM
1 Kudo
@gm There are a few changes to your command you will need to make. All configuration changes need to include the current revision. You can get the current revision by first executing the following:

```
curl 'https://<hostname>:<port>/nifi-api/processors/5964ef54-017e-1000-0000-0000219f4de1' \
  -H 'Authorization: Bearer <BT>' \
  --compressed \
  --insecure
```

Then to make a change you can use:

```
curl 'https://<hostname>:<port>/nifi-api/processors/5964ef54-017e-1000-0000-0000219f4de1' \
  -X 'PUT' \
  -H 'Authorization: Bearer <BT>' \
  -H 'Content-Type: application/json' \
  -d '{"component":{"id":"5964ef54-017e-1000-0000-0000219f4de1","config":{"properties":{"bootstrap.servers":"${MIL_KAFKA_KERB_BROKERS}"}}},"revision":{"version":<revision number>}}' \
  -i \
  -k
```

Things to take careful note of:
1. The user-friendly property names shown in processors on the NiFi UI may not always match the actual property name being modified. The above is a perfect example: the consume and publish Kafka processors display "Kafka Brokers", but the actual Kafka property being set is "bootstrap.servers".
2. It might be safer to use --data-raw instead of just -d, since the content may have = and @ characters in it.
3. Start with '{" instead of '" only.
4. Be careful when copying from a text editor, as the ' and " characters may get altered by the editor.
5. All changes require a correct revision number. The first command I provided returns the current revision for the component; use that revision number, as shown in the example above, when you PUT the change.

Making use of the "Developer tools" provided within your browser will make it easier to troubleshoot NiFi rest-api requests. Simply open developer tools, make a change to the property, click "Apply" on the component, and observe the call made in the Network tab of the developer tools. In most developer tools you can right click on the call and select "Copy as cURL", then paste that copied command into your editor of choice for review. Keep in mind that what you copy will include some additional unnecessary headers.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
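As a follow-on sketch (not part of the reply above), the revision lookup can be scripted so the value is ready to substitute into the PUT request. This assumes jq is installed and that the GET response exposes the revision under .revision.version, matching the "revision":{"version":...} structure shown in the PUT body:

```bash
# Sketch only: fetch the current revision with jq for use in the PUT above.
REVISION=$(curl -sk 'https://<hostname>:<port>/nifi-api/processors/5964ef54-017e-1000-0000-0000219f4de1' \
  -H 'Authorization: Bearer <BT>' | jq -r '.revision.version')
echo "Current revision: ${REVISION}"   # substitute this value for <revision number> in the PUT request
```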
01-14-2022
07:24 AM
@LejlaKM Sharing your dataflow design and processor component configurations may help get you more/better responses to your query. Things you will want to look at before and while you run this dataflow:
1. NiFi heap usage and general memory usage on the host
2. Disk I/O and network I/O
3. NiFi host CPU utilization (if your flow consumes 100% of the CPU(s) during execution, this can lead to what you are observing. Does UI functionality return once the copy is complete?)
4. Your dataflow design implementation, including components used, configurations, concurrent tasks, etc.

While most use cases can be accomplished through dataflow implementations within NiFi, not all use cases are a good fit for NiFi. In this case your description points at copying a large table from one Oracle DB to another. You made no mention of any filtering, modifying, enhancing, etc. being done to the table data during this move, which is where NiFi would fit in. If your use case is a straightforward copy from A to B, then NiFi may not be the best fit for this specific use case, as it will introduce unnecessary overhead to the process. NiFi ingests content and writes it to a content_repository, and it creates FlowFiles with attributes/metadata about the ingested data stored in a FlowFile_repository. It then has to read that content again as it writes it back out to a destination. For simple copy operations where no intermediate manipulation or routing of the DB contents needs to be done, a tool that directly streams from DB A to DB B would likely be much faster.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
01-14-2022
05:42 AM
Sometimes the NiFi UI is not showing because NiFi is not running correctly even if it reports that it is launched. I suggest checking the JAVA_HOME parameter in the nifi/bin/nifi-env.sh file and seeing whether the path and/or value of the JDK corresponds to yours. (Note: the JAVA_HOME parameter in this file is not editable.)
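A hedged way to perform that check without modifying anything (the NiFi install path below is a placeholder):

```bash
# Compare any JAVA_HOME setting in nifi-env.sh with the JDK actually on the host.
grep -i 'JAVA_HOME' /path/to/nifi/bin/nifi-env.sh
readlink -f "$(which java)"   # resolves the java binary the shell would use
```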
01-11-2022
08:37 AM
@Neil_1992 I strongly recommend not setting your NiFi heap to 200GB. Java reserves the Xms space and grows to the Xmx space as space is requested. Java Garbage Collection (GC), which reclaims heap no longer being used, does not kick in until ~80% of the heap is used. That means GC in your case would kick in at around 160+ GB of used heap. All GC execution is a stop-the-world activity, which means your NiFi will stop doing anything until GC completes. This can lead to long pauses resulting in node disconnections, timeouts with dataflows to external services, etc.

When it comes to the flow.xml.gz file, you are correct that it is uncompressed and loaded into heap memory. The flow.xml.gz contains everything you add via the NiFi UI to the canvas (processors, RPGs, input/output ports, funnels, labels, PGs, controller services, reporting tasks, connections, etc.), including all the configuration for each of those components. NiFi templates are also stored in the flow.xml.gz, uncompressed and loaded into heap as well. Once a NiFi template is created, it should be downloaded, stored outside of NiFi, and the local copy of the template inside NiFi deleted.

As far as your specific flow.xml.gz, I see a closing tag "</property>" following the huge null-laced strings, which indicates that some component has a property (typically consisting of a "name" and a "value") with those strings in the value field. I'd scroll up to see which component this property belongs to and then check why this value was set. Maybe it was a copy/paste issue? Maybe it is just part of some template that was created with this large string for some purpose? Nothing here says with any certainty that there is a bug.

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt
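For reference, a hedged sketch of where the NiFi heap is normally set (the install path is a placeholder, the java.arg numbering can differ between versions, and the values shown are only an illustration of a far more modest heap than 200GB):

```bash
# Show the current heap settings in bootstrap.conf.
grep -E 'X(ms|mx)' /path/to/nifi/conf/bootstrap.conf
# Typical entries look like:
#   java.arg.2=-Xms8g
#   java.arg.3=-Xmx8g
```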
01-10-2022
01:31 PM
@techNerd I think your scenario may need a bit more detail to understand what you are doing, what the flow is currently doing, and what you want the flow to do.

The ListFile processor only lists information about the file(s) found in the target directory. It then generates one or more FlowFiles from the listing that was performed. A corresponding FetchFile processor would actually retrieve the content for each of the listed files. From the sound of your scenario, you have instituted a 20 sec delay somehow between the ListFile and FetchFile processors? Or you have configured the run schedule on the ListFile processor to "20 secs"?

Setting the run schedule only tells the processor how often it should request a thread from the NiFi controller that can be used to execute the processor code. Once the processor gets its thread, it will execute. The ListFile processor will list all files present in the target source directory based on the configured file and path filters, and for each file listed it will produce a FlowFile. The run schedule does not mean the processor executes for a full 20 seconds, continuously checking the input directory to see if new files arrive. The run schedule is also not impacted by how long it takes a listing to complete; the processor will request a thread every 20 seconds (00:00:20, 00:00:40, 00:01:00, etc.). The configured "concurrent tasks" setting controls whether the processor can execute multiple listings in parallel. Let's say the thread that started at 00:01:00 was still executing 20 seconds later; since that thread is still in use and the default is 1 concurrent task, ListFile would not be allowed to request another thread from the controller at that time. Since the run schedule is independent of the thread execution duration, there is no way to dynamically alter the schedule. There is also no way for a new file to get listed at the same time as a previous file (unless both were already present at the time of listing) within the same thread execution.

ListFile uses the configured "Listing Strategy" to control how it handles listing of files. A "tracking" strategy prevents the ListFile processor from listing the same file twice by recording some information in a state provider or a cache. If "No Tracking" is configured, ListFile will list all found files every time it executes. ListFile does not remove the source file from the directory; removal of the source file is a function optionally handled by the corresponding FetchFile processor.

If this is not clear, share more details around your use case and flow design specifics so I can provide more direct feedback. Here is the documentation around processor scheduling (it works the same no matter which processor is being used): https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#scheduling-tab

If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post. Thank you, Matt