Member since: 07-30-2019
Posts: 3391
Kudos Received: 1618
Solutions: 1000
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 281 | 11-05-2025 11:01 AM |
| | 165 | 11-05-2025 08:01 AM |
| | 502 | 10-20-2025 06:29 AM |
| | 642 | 10-10-2025 08:03 AM |
| | 405 | 10-08-2025 10:52 AM |
01-10-2024
12:57 PM
@pratschavan FetchFile is typically used in conjunction with ListFile so that it only fetches the content for the FlowFile it is passed. ListFile will only list a given file once. If you are using only the FetchFile processor, I am guessing you configured the "File to Fetch" property with the absolute path to your file. Used this way, the processor will fetch that same file every time it is scheduled to execute via the processor's "Scheduling" tab configuration. Can you share screenshots of how you have these two processors configured? If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
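As a rough sketch of the intended ListFile -> FetchFile pairing (the directory and file-filter values below are made-up examples, not your configuration), the properties you would set in the NiFi UI look roughly like this:

```python
# Illustrative sketch only: these dicts mirror the property names you would
# set on the standard ListFile and FetchFile processors, they are not an API.
list_file_properties = {
    "Input Directory": "/data/incoming",   # assumed example path
    "File Filter": ".*\\.csv",             # assumed example filter
    # ListFile keeps state, so each file is listed (and later fetched) once.
}

fetch_file_properties = {
    # Expression-language value resolved per incoming FlowFile from ListFile,
    # rather than a hard-coded absolute path that would be re-fetched on
    # every scheduled run of the processor.
    "File to Fetch": "${absolute.path}/${filename}",
}
```

With a hard-coded absolute path in "File to Fetch" and no upstream ListFile, the processor has nothing to vary between runs, which is why the same file is fetched on every scheduling interval.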
01-10-2024
12:45 PM
@FrankHaha Have you tried using the "Infer Schema" Schema Access Strategy in the JsonTreeReader 1.24.0 controller service instead of fetching the schema from the AvroSchemaRegistry? Another option would be to use the ExtractRecordSchema 1.24.0 processor along with a JsonTreeReader 1.24.0 controller service configured with the "Infer Schema" Schema Access Strategy to output the schema into the FlowFile attribute "avro.schema". You can then take the produced schema from that FlowFile attribute and add it to your AvroSchemaRegistry for future use. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
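To illustrate roughly what ends up in the "avro.schema" attribute (the sample record and field names below are invented for this example; the exact schema produced by inference depends on your data and NiFi's type detection):

```python
import json

# Hypothetical input record that JsonTreeReader with "Infer Schema" might read.
sample_record = {"id": 42, "name": "widget", "price": 9.99}

# A sketch of the kind of Avro schema inference tends to produce and that
# ExtractRecordSchema would write into the "avro.schema" FlowFile attribute.
# Inferred fields are commonly nullable unions, so treat this as illustrative.
inferred_avro_schema = {
    "type": "record",
    "name": "nifiRecord",
    "fields": [
        {"name": "id", "type": ["null", "long"]},
        {"name": "name", "type": ["null", "string"]},
        {"name": "price", "type": ["null", "double"]},
    ],
}

print(json.dumps(inferred_avro_schema, indent=2))
```

The JSON text printed above is the form you would copy into an AvroSchemaRegistry property for later reuse.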
01-10-2024
06:00 AM
Hello, Here is the thread dump from the last restart of node 2304. We took a thread dump every 5 minutes: threaddump. I notice that only "Cleanup Archive for contentX" seems to take more than 5 minutes for some of the content repositories. I don't know if this cleanup could be a blocking point, and maybe I am missing something in my interpretation of the thread dump. I also took some screenshots of the cluster view to check whether there is more usage on the 2 bad nodes (2304 and 2311). Those 2 nodes have 40GB more FlowFiles (6% usage instead of 5% for the others): Screen cluster. NiFi is clustered and we have three ZooKeeper server nodes dedicated to NiFi. Do you know how we can check the ZooKeeper activity, such as the election of the Cluster Coordinator and Primary Node roles? Thanks for your help. Best Regards
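A minimal sketch of one way to check each ZooKeeper server's current role (leader or follower) is the "srvr" four-letter admin command, assuming it is allowed via 4lw.commands.whitelist on those servers; the host names below are placeholders:

```python
import socket

# Placeholder hosts for the three dedicated ZooKeeper servers.
ZK_HOSTS = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]
ZK_PORT = 2181

def zk_role(host: str, port: int = ZK_PORT) -> str:
    """Send the 'srvr' four-letter command and return the reported Mode line."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"srvr")
        data = sock.recv(4096).decode(errors="replace")
    for line in data.splitlines():
        if line.startswith("Mode:"):
            return line  # e.g. "Mode: leader" or "Mode: follower"
    return "Mode: unknown (is 'srvr' whitelisted in 4lw.commands.whitelist?)"

if __name__ == "__main__":
    for host in ZK_HOSTS:
        print(host, "->", zk_role(host))
```

NiFi's own Cluster Coordinator and Primary Node assignments are visible in the NiFi UI cluster summary, so this sketch only covers the ZooKeeper side.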
01-10-2024
02:48 AM
Thanks @MattWho . Well... I have taken over ownership of the NiFi platform, so I don't have the information regarding the initial installation. I will check the ExecStop configuration and report back. Thanks a lot!
01-08-2024
05:32 PM
@elemenop Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
01-05-2024
09:45 AM
Welcome to the community @JamesZhang
As this is an older post, we recommend starting a new thread. A new thread will give you the opportunity to provide details specific to your environment that could aid others in providing a more accurate answer to your question.
01-04-2024
07:26 AM
@arutkwccu The Apache NiFi 2.0.0-M1 release notes have now been updated with a list of the NARs that have been moved to the Optional Build Profiles. https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version2.0.0-M1 Thank you, Matt
01-03-2024
08:15 AM
@PriyankaMondal What is being logged in the nifi-user.log when the issue happens? Have you tried using your browser's developer tools to look at the data being exchanged in the requests with the NiFi cluster? It feels like the site cookies may not be getting sent to the NiFi node after successful authentication, resulting in the exception being seen. Thanks, Matt
01-02-2024
06:33 AM
@benimaru It is important to understand that NiFi does not replicate active FlowFiles (objects queued in connections between NiFi processor components) across multiple nodes. So in a five node NiFi cluster where you are load balancing FlowFiles across all nodes, each node has a unique subset of the full data received. Thus, if node 1 goes down, the FlowFiles on node 1 will not be processed until node 1 is back up. I 100% agree with @joseomjr that placing an external load balancer in front of the ListenUDP endpoint is the correct solution to ensure high availability of that endpoint across all your NiFi nodes. If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
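As a small illustrative sketch of that layout (the host name and port below are placeholders, not a real endpoint), sending systems would target the load balancer's address rather than the ListenUDP port on any single NiFi node:

```python
import socket

# Placeholder address of the external load balancer that fronts the
# ListenUDP port on every NiFi node; clients never target one node directly.
LB_HOST = "nifi-udp-lb.example.com"
LB_PORT = 5140

def send_event(payload: bytes, host: str = LB_HOST, port: int = LB_PORT) -> None:
    """Send one UDP datagram to the load-balanced endpoint."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    # If one NiFi node goes down, the load balancer stops routing to it, so
    # the endpoint stays reachable even while that node's already-queued
    # FlowFiles wait for the node to come back.
    send_event(b"test event 1")
```

The load balancer keeps the ingest endpoint highly available; it does not change the fact that FlowFiles already queued on a down node wait for that node to return.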