Member since: 06-14-2023
Posts: 90
Kudos Received: 27
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3380 | 12-29-2023 09:36 AM |
| | 4617 | 12-28-2023 01:01 PM |
| | 978 | 12-27-2023 12:14 PM |
| | 465 | 12-08-2023 12:47 PM |
| | 1466 | 11-21-2023 10:56 PM |
02-21-2024
01:45 PM
I had a need for multiple lookups, so I built a custom Groovy processor with several lookup services as part of it. Consolidating the lookups into that one processor let me route accordingly, and it performed faster.
02-02-2024
10:39 AM
1 Kudo
Yeah, I saw that post and finally got it to work by making sure I ran this command on Ubuntu to install venv: `sudo apt install python3.11-venv`. After I ran that command, everything started up and stayed up normally for NiFi 2.0.0-M2.
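Once the Python runtime pieces are in place, a minimal Python processor is an easy way to confirm the integration actually loads. The sketch below is hedged: the `nifiapi` module path, the `FlowFileTransform` contract, and the processor name are taken from the NiFi 2.0 Python developer guide as I remember it for the M2 milestone, so treat the exact API surface as an assumption.

```python
# Minimal sketch of a NiFi 2.0 Python processor, modeled on the examples in the
# NiFi Python developer guide. Drop it into the python extensions directory and
# check that it appears in the processor list after startup.
from nifiapi.flowfiletransform import FlowFileTransform, FlowFileTransformResult


class HelloTransform(FlowFileTransform):
    class Java:
        implements = ['org.apache.nifi.python.processor.FlowFileTransform']

    class ProcessorDetails:
        version = '2.0.0-M2'
        description = 'Tags each FlowFile with a greeting attribute (smoke test).'

    def __init__(self, **kwargs):
        pass

    def transform(self, context, flowfile):
        # Pass the content through unchanged and add an attribute so the
        # result is visible in data provenance.
        return FlowFileTransformResult(
            relationship='success',
            attributes={'greeting': 'hello from python'}
        )
```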
01-26-2024
02:09 AM
2 Kudos
Hi @SandyClouds , I ran into this issue before, and after some research I found that when you run ConvertJsonToSQL, NiFi assigns the timestamp data type (value = 93 in the sql.args.[n].type attribute). When PutSQL runs the generated SQL statement, it parses each value according to the assigned type and formats it accordingly. For timestamps, however, it expects the value to be in the format "yyyy-MM-dd HH:mm:ss.SSS", so if the milliseconds are missing from the original datetime value, it fails with the error message you are seeing.

To resolve the issue, make sure to append 000 milliseconds to your datetime value before running the PutSQL processor. You can do that in the source JSON itself before the conversion to SQL, or after the conversion using UpdateAttribute; with the latter option you have to know which sql.args.[n].value holds the datetime and use Expression Language to reformat it. If that helps, please accept the solution. Thanks
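To illustrate why padding the milliseconds matters, here is a small Python sketch (the field names and sample values are hypothetical) that mimics a millisecond-strict parse like "yyyy-MM-dd HH:mm:ss.SSS" and shows the fix of appending ".000" in the source JSON before ConvertJsonToSQL:

```python
# Sketch: why a value without milliseconds fails a millisecond-strict parse,
# and how appending ".000" in the source JSON avoids it. Field names are hypothetical.
import json
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"  # Python analogue of yyyy-MM-dd HH:mm:ss.SSS

record = {"id": 1, "created_at": "2024-01-26 02:09:00"}  # no milliseconds

try:
    datetime.strptime(record["created_at"], FMT)
except ValueError as err:
    print(f"parse fails without milliseconds: {err}")

# Pad the value before handing the JSON to ConvertJsonToSQL.
if "." not in record["created_at"]:
    record["created_at"] += ".000"

print(datetime.strptime(record["created_at"], FMT))  # now parses cleanly
print(json.dumps(record))
```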
01-22-2024
12:30 PM
Do you have a sample? I'm not sure NiFi can do this natively, but I have recently done some PDF parsing inside NiFi with a custom Groovy processor.
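For reference, the parsing step itself can be prototyped outside NiFi first. This is a rough sketch using the pypdf library rather than the Groovy approach mentioned above, and the input file path is hypothetical:

```python
# Rough sketch of plain-text extraction from a PDF using pypdf (pip install pypdf).
# This only prototypes the parsing step; wiring it into a NiFi processor is separate.
from pypdf import PdfReader

reader = PdfReader("sample.pdf")  # hypothetical input file
for page_number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    print(f"--- page {page_number} ---")
    print(text)
```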
01-13-2024
05:51 AM
Oh, I successfully managed to integrate and run NiFi 2.0 with Python on Windows using the method you suggested. Thank you so much!
01-10-2024
12:57 PM
@pratschavan FetchFile is typically used in conjunction with ListFile so that it only fetches the content for the FlowFiles it is passed. ListFile would only list the file once. If you are using only the FetchFile processor, I am guessing you configured the "File to Fetch" property with the absolute path to your file. Used this way, the processor will fetch that same file every time it is scheduled to execute via the processor's "Scheduling" tab configuration. Can you share screenshots of how you have these two processors configured?

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
01-10-2024
06:00 AM
Hello, Here is the thread dump from the last restart of node 2304. We took a thread dump every 5 minutes: threaddump

The only thing I notice is "Cleanup Archive for contentX", which seems to take more than 5 minutes for some content repositories. I don't know whether this cleanup can be a blocking point, and maybe I'm missing something in my interpretation of the thread dump. I also took some screenshots of the cluster view to check whether the two bad nodes (2304 and 2311) are more heavily used. Those two nodes hold 40 GB more FlowFiles (6% usage instead of 5% for the others): Screen cluster

NiFi is clustered and we have three ZooKeeper server nodes dedicated to NiFi. Do you know how we can check ZooKeeper actions, such as the election of the Cluster Coordinator and Primary Node roles? Thanks for your help. Best Regards
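One hedged way to see which ZooKeeper server currently holds the leader role is to query each server's four-letter-word admin commands (e.g. `srvr`); note that on ZooKeeper 3.5+ these commands may need to be enabled via `4lw.commands.whitelist`. The hostnames below are hypothetical. For the NiFi side, the currently elected Cluster Coordinator and Primary Node are shown in the NiFi Cluster summary dialog.

```python
# Sketch: ask each ZooKeeper server for its "srvr" status, which reports
# "Mode: leader" or "Mode: follower". Requires the four-letter-word commands
# to be whitelisted (4lw.commands.whitelist) on ZooKeeper 3.5+.
import socket

ZK_SERVERS = ["zk1.example.com", "zk2.example.com", "zk3.example.com"]  # hypothetical
ZK_PORT = 2181


def four_letter_word(host: str, port: int, command: bytes = b"srvr") -> str:
    """Send a ZooKeeper four-letter-word command and return the raw response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")


for server in ZK_SERVERS:
    reply = four_letter_word(server, ZK_PORT)
    mode = next((line for line in reply.splitlines() if line.startswith("Mode:")), "Mode: unknown")
    print(f"{server}: {mode}")
```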
01-04-2024
07:26 AM
@arutkwccu The Apache NiFi 2.0.0-M1 release notes have now been updated with a list of NARs that have been moved to the Optional Build Profiles. https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version2.0.0-M1 Thank you, Matt
01-02-2024
06:33 AM
@benimaru It is important to understand that NiFi does not replicate active FlowFiles (objects queued in connections between NiFi processor components) across multiple nodes. So in a five node NiFi cluster where you are load balancing FlowFiles across all nodes, each node has a unique subset of the full data received. Thus, if node 1 goes down, the FlowFiles on node 1 will not be processed until node 1 is back up.

I 100% agree with @joseomjr that placing an external load balancer in front of the ListenUDP endpoint is the correct solution to ensure high availability of that endpoint across all your NiFi nodes.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
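If it helps to verify the setup, a quick way to confirm that datagrams sent to the external load balancer actually reach whichever node's ListenUDP is behind it is a tiny test sender; this is only a sketch, and the load balancer address and port below are hypothetical:

```python
# Sketch: send a few test datagrams at the load balancer's UDP endpoint, then
# check the queues after the ListenUDP processors on each node to see where they landed.
import socket

LB_HOST = "nifi-lb.example.com"  # hypothetical external load balancer address
LB_PORT = 5140                   # hypothetical port that ListenUDP listens on

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    for i in range(5):
        message = f"test datagram {i}".encode("utf-8")
        sock.sendto(message, (LB_HOST, LB_PORT))

print("sent 5 test datagrams; check the connections after ListenUDP on each node")
```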