Support Questions


Need Merging Strategy to merge FlowFiles as a pair from two different processors.

Contributor

I have to send a POST request to an API, and that API needs a token to make the connection. Before sending the POST request I need to get the token, for which I in turn need to call another API.

My NiFi flow looks like this:

I have a GenerateFlowFile processor that gives me the payload for the POST API. It is connected to two processors, MergeContent and ReplaceText (the reason: I want to generate a token only when there is a FlowFile that needs to be sent to the POST API). ReplaceText replaces the FlowFile content with the payload needed by the token-generating API. Then an EvaluateJsonPath processor extracts the token from the response of the InvokeHTTP processor and adds it as an attribute on the FlowFile, which is then sent to the MergeContent processor.

The MergeContent processor has two input queues, one from GenerateFlowFile and one from EvaluateJsonPath. I want to pick one FlowFile from each queue and merge them, so that the output of MergeContent has the payload for the POST API in the FlowFile content and the token in an attribute.

Since a new token is generated for each FlowFile, I want MergeContent to treat its two inputs as a pair: pick one FlowFile from each queue and merge them.

How can I achieve this?

Attachment: NIfi_merge_content_as_pairs.PNG


1 REPLY

Master Mentor

@Anderosn 

So MergeContent does just that: it merges the content of all FlowFiles being merged. I am not sure how often your GenerateFlowFile processor executes, but when it does, it creates a FlowFile with a unique filename (unless you set the filename in the GenerateFlowFile processor via a dynamic property). The FlowFile produced by GenerateFlowFile is routed to one of the success relationships in your dataflow, and a clone is routed to the other success relationship (both FlowFiles have the same "filename" but different FlowFile UUIDs). That "filename" attribute can be used in the MergeContent processor's "Correlation Attribute Name" property. Then set "Minimum Number of Entries" to 2. This makes sure that both FlowFiles with the same value in the filename attribute get allocated to the same bin. The MergeContent property "Attribute Strategy" will need to be set to "Keep All Unique Attributes" so that the final merged FlowFile includes the new token attribute.
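As a concrete sketch, the MergeContent configuration described above would look roughly like this (property names as they appear in the NiFi UI; capping "Maximum Number of Entries" at 2 is my addition, to hold each bin to an exact pair):

```
Merge Strategy             = Bin-Packing Algorithm
Correlation Attribute Name = filename
Minimum Number of Entries  = 2
Maximum Number of Entries  = 2
Attribute Strategy         = Keep All Unique Attributes
```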

Now we have to deal with the content. We need to make sure that the FlowFile used to fetch the token has no content before it is routed to the MergeContent processor. For that you can use the ModifyBytes processor after your EvaluateJsonPath processor and set "Remove All Content" to "true". Removing the content does not remove the FlowFile metadata/attributes, so this now 0-byte FlowFile still has its filename value and its token attribute.
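A minimal sketch of that ModifyBytes configuration (when "Remove All Content" is true, the offset properties are effectively ignored):

```
Start Offset       = 0 B
End Offset         = 0 B
Remove All Content = true
```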

-------
Now with above suggestion for your existing dataflow as an option, there are probably many other dataflow designs to accomplish this.

Since you are using GenerateFlowFile to create the content needed for your final invokeHTTP, I'd go a different route that does not need a MergeContent processor.

GenerateFlowFile (custom content needed to fetch the token) --> InvokeHTTP (get token) --> EvaluateJsonPath (extract token from content to an attribute) --> ReplaceText ("Replacement Strategy" = "Always Replace", "Evaluation Mode" = "Entire text", "Replacement Value" = <content needed for your final rest-api call>) --> InvokeHTTP (your final rest-api endpoint request).
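If your final endpoint expects the token in a request header (an assumption about your API; adjust the header name to whatever it actually requires), one way to pass it is a dynamic property on the final InvokeHTTP, since dynamic properties on InvokeHTTP are sent as request headers and can use Expression Language:

```
Authorization = Bearer ${token}
```

Here ${token} resolves to the attribute written by your EvaluateJsonPath processor.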

The above removes the need for MergeContent and for dealing with multiple paths. You have a single process flow, where a failure anywhere along the path does not leave an orphaned FlowFile binned at your MergeContent processor.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt