Member since: 07-30-2019
Posts: 3420
Kudos Received: 1624
Solutions: 1009

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 122 | 01-09-2026 06:58 AM |
|  | 481 | 12-17-2025 05:55 AM |
|  | 542 | 12-15-2025 01:29 PM |
|  | 552 | 12-15-2025 06:50 AM |
|  | 405 | 12-05-2025 08:25 AM |
01-17-2024
01:37 PM
@MPHSpeed Rather than using the RouteText processor, which routes individual lines of a text file, you could use the RouteOnContent processor, which routes the entire FlowFile to a dynamic relationship when its content matches.

What I would do is extract the data type (TDRS 3, AMSC 1, SKYNET 4C, etc.) to a FlowFile attribute using the ExtractText processor. You then have that type associated with the FlowFile throughout your entire flow, making it easy to do things like merge FlowFiles of the same type together (MergeContent with "Correlation Attribute Name"), route FlowFiles of a specific type using RouteOnAttribute, and so on. You also have options using the many record-based processors if you can define a schema for your data that defines your record as those three lines.

Example: SplitText (splits relationship) --> ExtractText (matched relationship) --> RouteOnAttribute. A sketch of this configuration follows below.

RouteOnAttribute with the above configuration will have three dynamically created relationships for the data types you want to keep. Connect each to the unique dataflow path for processing that data type.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
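A minimal sketch of the properties involved, assuming the type always appears at the start of a line; the attribute name `datatype` and the regex are hypothetical and would need to match your actual data:

```
ExtractText (add a dynamic property):
  datatype = (TDRS \d+|AMSC \d+|SKYNET \w+)

RouteOnAttribute (Routing Strategy = Route to Property name; one dynamic property per type):
  tdrs   = ${datatype:startsWith('TDRS')}
  amsc   = ${datatype:startsWith('AMSC')}
  skynet = ${datatype:startsWith('SKYNET')}
```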
01-17-2024
07:32 AM
@pratschavan Based on the exception shared, it is telling you that the ingested msg files you have do not follow the RFC-2822 specification that the ExtractEmailHeaders processor requires. It states that the particular msg it tried to process was missing the sender. You may need to write your own custom reader for your formatted msg files, and there, unfortunately, I will not be much help, as it is outside my area of knowledge.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
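As a hedged illustration only: if the files are RFC-2822 text (.eml-style) rather than Outlook's binary .msg format, one way to spot the offenders before they hit NiFi is to parse each one with Jakarta Mail and check for a sender. The class name and file-path handling here are hypothetical, and the Jakarta Mail dependency is assumed to be on the classpath:

```java
import jakarta.mail.Session;
import jakarta.mail.internet.MimeMessage;
import java.io.FileInputStream;
import java.util.Properties;

// Hypothetical pre-check: flag messages missing the From/Sender header
// that ExtractEmailHeaders complains about.
public class SenderCheck {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream(args[0])) {
            MimeMessage msg = new MimeMessage(Session.getInstance(new Properties()), in);
            // getFrom() returns null when neither From nor Sender is present
            System.out.println(args[0] + " -> "
                    + (msg.getFrom() == null ? "sender MISSING" : msg.getFrom()[0]));
        }
    }
}
```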
01-17-2024
07:28 AM
@Alexy I have never used cleanHistoryOnStart, and the docs are not super clear. Archive removal normally happens at the time of log rotation. This is why older logs beyond the window in which they should be retained linger around when the application is down and then started up.

I came across this post that has some interesting responses: https://stackoverflow.com/questions/54680434/the-old-log-files-didnt-get-removed-when-using-logback-rollingfileappender

Implied from that post is that logback is not removing the files because of your rolling policy %d{yyyy-MM-dd_HH}. If what they are saying is correct, it will not remove your old logs because they are from a different day, and your log rotation is based on the hour within the current day.

I more commonly use "totalSizeCap", setting a value here larger than what I expect to see retained in my desired history time window, or even larger based on available disk space. This has worked for me to clean old stuff up so it is not around forever.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
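A minimal sketch of the totalSizeCap approach, assuming a logback.xml appender using SizeAndTimeBasedRollingPolicy; the file paths and size values are placeholders to adjust for your environment:

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- %i is required when maxFileSize is set -->
        <fileNamePattern>logs/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>24</maxHistory>
        <!-- hard ceiling on total archived size; oldest archives are deleted first -->
        <totalSizeCap>10GB</totalSizeCap>
        <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
```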
01-17-2024
06:58 AM
@Sartha Your MiNiFi needs a simple dataflow that consists of a TailFile processor and then either a Remote Process Group (RPG) (recommended if your NiFi is a multi-node cluster, but it can be used with a single-instance NiFi as well) or a PostHTTP processor (use if NiFi is a single instance and not a cluster).

Neither the RPG nor the PostHTTP can be configured with a target URL of "localhost"; localhost would resolve to the MiNiFi server itself. It needs to be the hostname of the server where your NiFi is running. Make sure, if using an RPG, that you have configured the Site-to-Site properties in the nifi.properties file (see the sketch below).

Your NiFi would need a Remote Input Port (this is what your MiNiFi RPG will transmit FlowFiles to) or a ListenHTTP processor (if you used PostHTTP on your MiNiFi, this is what it can be configured to send to). The outbound connection from either of these components needs to feed whatever downstream processors you need within NiFi.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
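A hedged sketch of the Site-to-Site block in nifi.properties on the NiFi side; the hostname and port shown are placeholders for your environment:

```
# Site-to-Site properties (nifi.properties on the NiFi server, not the MiNiFi host)
nifi.remote.input.host=nifi-host.example.com
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
```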
01-17-2024
06:37 AM
1 Kudo
@glad1 No, not necessary. I suggested it because I was still unclear how often your initial ExecuteSQL was producing a source file. The PG makes it easy to throttle per-source-FlowFile processing, so you would get one merged FlowFile for each produced FlowFile. Thanks, Matt
01-12-2024
07:00 AM
@manishg I am not clear on what you are trying to accomplish here. What is the use case? What is your NiFi version? What is your OS? NiFi does not have a "start.sh" script. Are you talking about the "nifi.sh" script? Perhaps there are just some important details I am missing here.

I am also not sure why you would want to change the nifi.web.http.port configuration property in the nifi.properties file to a variable. These properties are all read during startup of NiFi, and NiFi variables are not evaluated during NiFi startup. Nor does NiFi support defining NiFi variables in the nifi.properties file.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-12-2024
06:46 AM
@pratschavan You may find the guidance provided below useful: https://stackoverflow.com/questions/47200178/read-message-body-of-an-email-using-apache-nifi

Instead of using the ConsumePOP3 processor to get your msg files directly from an email server, you would simply ingest those files from your msg storage folder(s). As far as interacting with your SQL DB, there are numerous documented SQL processors (for example, ExecuteSQL, PutSQL, and PutDatabaseRecord): https://nifi.apache.org/docs/nifi-docs/

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
01-12-2024
06:09 AM
1 Kudo
@Madhav_VD What about the "Relationships" tab? My guess here is that you have checked the "retry" box on the success relationship of the PutSQL processor. If that is the case, unchecking "retry" on the success relationship should resolve your FlowFile penalization issue.

When this processor is running, does it produce any bulletins or exceptions in the log output? If it is producing bulletins, warnings, or error logs, it is likely failing to write to your SQL DB. The FlowFile would then be routed to the retry or failure relationship depending on the exception, where "retry", if checked, would be applied based on the retry property configurations.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
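For reference, a rough sketch of the relationship-level retry settings in question, as they appear on the processor's Relationships tab in NiFi 1.16+; the values shown are illustrative defaults:

```
Relationship: success
  [ ] retry                            <-- uncheck to stop penalizing successful FlowFiles
  Number of Retries:               10
  Retry Back Off Policy:           Penalize
  Retry Maximum Back Off Period:   10 mins
```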
01-12-2024
06:07 AM
@LKB I recommend creating a new community question with the details of your setup and any exceptions you may be seeing. You are more likely to get better traction on a community question that does not already have an accepted solution. Thank you, Matt
01-11-2024
02:09 PM
@Madhav_VD How has your PutSQL processor been configured (all tabs)? For a FlowFile to be penalized, a processor needs to apply that penalty. That could be applied by the PutSQL if you configure retry on a relationship, or it could be applied by the processor feeding the connection. Looking at your attached dataflow, I don't believe the EvaluateJsonPath processor is applying any such penalty.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt