Created 03-22-2018 08:36 PM
For error handling purposes, I need a dummy processor to queue up FlowFiles for reprocessing. Suppose I have a PutMongo processor that writes to MongoDB, but persistence fails due to a network or disk issue. In that case, the FlowFiles from the failure relationship are routed to a PutMail processor which sends notification messages. Now I want to route all FlowFiles from the PutMail processor to a dummy processor, and the dummy processor is routed back to the PutMongo processor. The dummy processor stays stopped so as to queue up all the FlowFiles. After the email is received, the operations team will fix the MongoDB issue. At that time, the dummy processor will be restarted to route the messages back to the PutMongo processor.
Does NiFi have a built-in dummy processor?
Thanks,
Mark
Created on 03-23-2018 12:35 PM - edited 08-17-2019 11:20 PM
Your PutMongo processor could route to failure for many reasons (it may not even be an issue with MongoDB itself), for example a network outage or network issue during transfer. With a stopped dummy processor, you end up stalling delivery of files that would otherwise be successful on a retry.
I suggest using a slightly more involved failure-loop flow design: one where you retry the FlowFiles x number of times before triggering an email or directing the FlowFiles to a holding queue.
Inside the "Retry Check Loop" process group I have the following flow:
Simply leave the "Reset retry counter" UpdateAttribute processor stopped so that FlowFiles will queue in front of it after 3 delivery attempts have been made. Running that processor will reset the counter to zero and pass those FlowFiles back out to the PutMongo processor again.
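The counter logic inside a retry loop like this can be sketched with NiFi Expression Language. The attribute and relationship names below are illustrative assumptions, not necessarily the ones used in the template:

```
# UpdateAttribute "Increment retry counter" (on the PutMongo failure path):
#   adds 1 to the counter, treating a missing attribute as 0
retry.count = ${retry.count:replaceNull(0):plus(1)}

# RouteOnAttribute "Retry Check":
#   FlowFiles matching this property route to the email / holding-queue path
over.limit = ${retry.count:ge(3)}

# UpdateAttribute "Reset retry counter" (left stopped until the issue is fixed):
retry.count = 0
```

FlowFiles matching `over.limit` go to the notification/holding path; the `unmatched` relationship loops back to the PutMongo processor for another attempt.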
Here is a template of the above "Retry Check Loop" process group.
Hope this helps,
Matt
Created on 03-22-2018 08:47 PM - edited 08-17-2019 11:20 PM
Hi @Mark Lin
When I develop, I use a funnel to see what's happening in my flow. You can also use an UpdateAttribute processor that adds an arbitrary attribute. I think neither of these has much impact on resource usage.
Funnel: A funnel is a NiFi component that is used to combine the data from several Connections into a single Connection.
Created on 03-23-2018 03:45 AM - edited 08-17-2019 11:20 PM
Something like this will help you.
In this case, I am trying to write the data to S3, and if it fails, redirect it through an UpdateAttribute processor back to the parent processor again.
For your scenario, you can fit in the e-mail logic and of course can stop the processor for some later action 🙂
Hope that helps!
Created on 03-23-2018 02:31 PM - edited 08-17-2019 11:20 PM
Did you look at the MonitorActivity processor?
You can set a threshold where it will send out an inactive email message (example message: "Have not seen any failed FlowFiles for x amount of time").
Then later, when data starts failing, it will trigger an activity.restored email message (example message: "Seeing failed FlowFiles now").
This processor can be configured to create the above messages only once.
It could fit into the failure loop as shown above.
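A minimal MonitorActivity configuration for this use case might look like the following. The property names come from the standard MonitorActivity processor; the durations and message text are illustrative assumptions, so verify against the processor's usage documentation:

```
# MonitorActivity processor (fed by the failure path)
Threshold Duration        = 5 min
Continually Send Messages = false
Inactivity Message        = Have not seen any failed FlowFiles for 5 minutes
Activity Restored Message = Seeing failed FlowFiles now
```

The `inactive` and `activity.restored` relationships would then feed the processor that sends the email, so each state change produces a single notification rather than one per failed FlowFile.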
Thanks,
Matt
Created 03-23-2018 01:20 PM
Thanks Matt, Rahul, and Abdelkrim. Our design has changed a little to prevent duplicate emails from being sent out. Failed FlowFiles are routed to a Kafka publisher and then routed again to an UpdateAttribute processor, which serves as a holding queue. A separate process group, composed of a Kafka consumer and a customized PutMail processor, sends out the same type of error message only once within a configured time period. We need this customized PutMail processor because network or disk issues take time to get fixed, and without it we would receive too many duplicate emails.
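The suppression behavior of such a customized PutMail processor can be sketched in a few lines of Python. This is only an illustration of the "same error type at most once per period" rule; the class and attribute names are hypothetical, not the actual custom processor code:

```python
import time


class ErrorThrottle:
    """Suppress duplicate error notifications within a time window.

    should_send() returns True the first time an error type is seen,
    and again only after `period_seconds` have elapsed since the last
    notification for that same error type.
    """

    def __init__(self, period_seconds, clock=time.monotonic):
        self.period = period_seconds
        self.clock = clock          # injectable clock, handy for testing
        self.last_sent = {}         # error type -> time of last notification

    def should_send(self, error_type):
        now = self.clock()
        last = self.last_sent.get(error_type)
        if last is None or now - last >= self.period:
            self.last_sent[error_type] = now
            return True
        return False
```

With a 10-minute period, repeated "MongoDB unreachable" failures would trigger one email immediately and then stay silent until the window elapses, while a different error type still alerts right away.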
Thanks,
Mark
Created 03-23-2018 01:36 PM
Hi @Mark Lin
Another way to manage duplicates is to use a ControlRate processor with a FlowFile expiration duration set on the connection before it. This way, only one FlowFile goes through every X amount of time, while the other FlowFiles held up in the connection expire and are deleted automatically. However, for this to work, you should separate your messages by type beforehand rather than routing all events to the same ControlRate; otherwise you will get a single notification regardless of which issue occurred.
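As a sketch, this combines a connection setting with the ControlRate processor's properties. The durations are illustrative, and the `error.type` grouping attribute is an assumed attribute that distinguishes the different failure kinds:

```
# Connection feeding ControlRate
FlowFile Expiration   = 10 min

# ControlRate processor
Rate Control Criteria = flowfile count
Maximum Rate          = 1
Time Duration         = 10 min
Grouping Attribute    = error.type
```

Using the Grouping Attribute makes ControlRate apply the one-per-window limit per distinct attribute value, which addresses the point about separating message types: each issue gets its own notification instead of all of them collapsing into one.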
I hope this helps
Thanks
Created 03-23-2018 06:51 PM
Hi Matt,
The MonitorActivity processor is exceptionally useful. I will use it to monitor the overall health of my NiFi process groups.
Thanks,
Mark