
NiFi: Implement Sleep Mechanism in NiFi without ExecuteScript

Contributor

Hi Team,

I am trying to implement a scenario where the flow should sleep for 5 minutes so that all the flow files can queue up, then check the count of the queued files: if it is greater than 10, route to failure, else route to success.

I did this in an ExecuteScript processor as below. However, I am trying to avoid ExecuteScript for this and use native NiFi processors instead.

====== ExecuteScript (Groovy) ======

// Sleep for 5 minutes so that upstream FlowFiles have time to queue up
Thread.sleep(300000)

// Pull up to 100 of the queued FlowFiles in one batch
def flowFiles = session.get(100)

if (!flowFiles || flowFiles.size() <= 10) {
    // 10 or fewer FlowFiles queued: route the batch to success
    session.transfer(flowFiles, REL_SUCCESS)
} else {
    // more than 10 FlowFiles queued: route the batch to failure
    session.transfer(flowFiles, REL_FAILURE)
}

 

 


4 REPLIES

Master Mentor

@rajivswe_2k7 

What is the use case for wanting to hold downstream processing of FlowFiles until a minimum of 10 are queued? This is not a typical usage pattern for NiFi. While I am sure it could be done without using a scripting processor, I don't think it would be as efficient in terms of resources.

Creative use of the MergeContent processor comes to mind here.
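As a rough sketch only (the property values are illustrative and the RouteOnAttribute property name below is made up, so this is not a tested configuration), MergeContent could be set to bin whatever arrives within 5 minutes, and the merge.count attribute it writes could then be checked with RouteOnAttribute:

MergeContent
    Merge Strategy            = Bin-Packing Algorithm
    Minimum Number of Entries = 10000   (high enough that the bin never closes on count)
    Maximum Number of Entries = 10000
    Max Bin Age               = 5 min   (forces the bin out after 5 minutes)

RouteOnAttribute (dynamic property)
    too.many.files = ${merge.count:gt(10)}

Keep in mind that MergeContent combines the content of the binned FlowFiles into a single FlowFile, so if the individual files still need to flow downstream separately they would have to be split back out afterwards.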

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt

Contributor

Thank you @MattWho,

ListS3 -> FetchS3 -> compress the files (4 or 5 files usually) -> PutS3 (target) -> FetchS3 again -> PutS3 (archive folder) -> DeleteS3 (remove the original file) -> check that all 5 files were processed through the delete step; if any one file is missing, route all 5 to failure, else route all 5 to success. For this last step I have used ExecuteScript, and now I am looking for native processors.
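For reference, one native pattern I am evaluating for that last check is Wait/Notify. This is only a rough sketch; the batch.id and batch.size attributes are placeholders I would have to set upstream myself (for example with UpdateAttribute), and both processors need a shared DistributedMapCacheClientService configured:

Notify (placed after DeleteS3, one signal per file)
    Release Signal Identifier = ${batch.id}
    Distributed Cache Service = the shared map cache service

Wait (holds one marker FlowFile per batch)
    Release Signal Identifier = ${batch.id}
    Target Signal Count       = ${batch.size}   (5 in this example)
    Expiration Duration       = 10 min
    Distributed Cache Service = the shared map cache service

With that, the Wait processor's success relationship would mean all files made it through DeleteS3, and its expired relationship would be the failure path when at least one file never signalled.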

Kindly let me know if you need more information on this. 

Master Mentor

@rajivswe_2k7 

Why are you fetching the same files twice?

I don't follow the "fail all 5 if any one of them fails" approach. You successfully wrote some of them to their destinations, so what action are you taking when a partial failure happens (for example, only 1 of 5 fails to write to the archive)?

Why not just build the dataflow around the failure relationships so that it notifies you of the specific files that failed?

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt





Contributor

Thank you @MattWho. Yes, initially I had designed it to fail all of the files on a partial failure. Now I have changed the design to capture only the failed flow files and send an alert on those. Thank you.