Member since: 07-30-2019
Posts: 3135
Kudos Received: 1565
Solutions: 909
01-11-2017 01:29 PM
@Joshua Adeleke Another option might be to use the ReplaceText processor to find the first two lines and replace them with nothing. Glad to hear you got things working for you.
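For reference, a minimal sketch of that ReplaceText configuration, assuming Unix-style line endings (adjust the regex if your content uses \r\n):

```
ReplaceText
  Search Value:          ^(?:.*\n){2}    (regex matching the first two lines)
  Replacement Value:     (empty)
  Replacement Strategy:  Regex Replace
  Evaluation Mode:       Entire text
```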
01-10-2017 09:43 PM
6 Kudos
@Raj B Not all errors are equal. I would avoid lumping all failure relationships into the same error handling strategy. Some errors are no surprise, can be expected to occur on occasion, and may be a one-time thing that resolves itself. Let's use your example above...
The PutHDFS processor is likely to experience some failures over time due to events outside of NiFi's control. For example, let's say a file is in the middle of transferring to HDFS when the network connection is lost. NiFi would in turn route that FlowFile to failure. If that failure relationship had been routed back to the PutHDFS processor, the transfer would likely have succeeded on the subsequent attempt. A better error handling strategy in this case is to build a simple error handling flow that can be used when the type of failure might resolve itself.

Here, failed FlowFiles enter at "data" and are checked for a failure counter attribute; if one does not exist it is created and set to 1, and if it exists it is incremented by 1. The retry-count check will continue to pass the file to "retry" until the same file has been seen x number of times. "Retry" would be routed back to the source processor of the failure. After x attempts the counter is reset, an email is sent, and the file is placed in a local error directory for manual intervention. https://cwiki.apache.org/confluence/download/attachments/57904847/Retry_Count_Loop.xml?version=1&modificationDate=1433271239000&api=v2

The other scenario is where the type of failure is not likely to ever correct itself. Your MergeContent processor is a good example here. If the processor failed to merge some FlowFiles, the same failure is extremely likely to happen again, so there is little benefit in looping this failure relationship back on the processor like we did above. In this case you may want to route this processor's failure to a PutEmail processor to notify the end user of the failure and where it occurred in the dataflow. The success of the PutEmail processor may just feed another processor, such as an UpdateAttribute in a stopped/disabled state. This will hold the data in the dataflow until manual intervention can be taken to identify the issue and either reroute the data back into the flow once corrected or discard the data. If there is concern over available space in your NiFi content repository, I would add a processor to write the data out to a different error file location using PutFile, PutHDFS, PutSFTP, etc...

Hope this helps, Matt
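To make the counter logic concrete, here is a minimal sketch using UpdateAttribute and RouteOnAttribute; the attribute name retry.count and the limit of 3 are assumed values, not taken from the template:

```
UpdateAttribute  (create or increment the counter)
  retry.count = ${retry.count:replaceNull(0):plus(1)}

RouteOnAttribute  (Routing Strategy: Route to Property name)
  retry = ${retry.count:lt(3)}
```

FlowFiles matching "retry" loop back to the source processor; everything else exits via the unmatched relationship to the email/error-directory handling described above.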
01-10-2017 08:00 PM
3 Kudos
@Raj B Process groups can be nested inside process groups, and with the granular access controls NiFi provides, it may not be desirable for every user who has access to the NiFi UI to be able to access all processors or the specific data those processors are using. So in addition to your valid example above, you may want to create stovepipe dataflows based off different input ports, where only specific users are allowed to view and modify the stovepipe dataflow they are responsible for. While you can of course have FlowFiles from multiple upstream sources feed into a single input port and then use a routing type processor to split them back out to different dataflows, it can be easier just to have multiple input ports to achieve the same effect with less configuration. Matt
01-10-2017 07:03 PM
@Joshua Adeleke If you found this information helpful in guiding you with your dataflow design, please accept the answer.
01-10-2017 06:57 PM
@Avish Saha The more bins, the more of your NiFi JVM heap space that could be used. You just need to keep in mind that if all your bins have, let's say, 990 KB of data in them and the next file would put any of those bins over 1024 KB, then the oldest bin will still be merged at only 990 KB to make room for a new bin to hold the file that would not fit in any of the existing bins. More bins means more opportunities for a FlowFile to find a bin where it fits... Also keep in mind that as you have it configured, it is possible for a bin to hang around for an indefinite amount of time. A bin at 999 KB which never gets another qualifying FlowFile that puts its size between 1000 and 1024 KB will sit forever unless you set the max bin age. This property tells the MergeContent processor to merge a bin, no matter what its current state is, once it reaches this max age. I recommend you always set this value to the max amount of data latency you are willing to accept on this dataflow. If you found all this information helpful, please accept this answer. Matt
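As a concrete sketch, the relevant MergeContent properties might be set along these lines; the 5 minute age is an assumed value, so pick whatever latency you can tolerate:

```
MergeContent
  Minimum Group Size:      1000 KB   (bin becomes eligible to merge at this size)
  Maximum Group Size:      1024 KB   (a bin is never allowed to exceed this)
  Maximum number of Bins:  5
  Max Bin Age:             5 min     (force-merge any bin older than this)
```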
01-10-2017 06:46 PM
@Joshua Adeleke The SplitText processor simply splits the content of an incoming FlowFile into multiple FlowFiles. It gives you the ability to designate how many lines should be considered the header, but it does no extraction of content into FlowFile attributes. The ExtractText processor can be used to read parts of the content and assign those parts to different NiFi FlowFile attributes. It will not remove the header from the content; that would still be done during the SplitText processor operation. However, every FlowFile created by SplitText will inherit the FlowFile attributes from its parent FlowFile. Matt
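For example, a SplitText configuration along these lines (the values are assumptions for a two-line header with one record per split):

```
SplitText
  Line Split Count:   1   (one line of content per output FlowFile)
  Header Line Count:  2   (the first two lines are treated as the header)
```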
01-10-2017 02:22 PM
@Avish Saha Unless you know that your incoming FlowFiles' content can be combined to exactly 1 MB without going over by even a byte, there is little chance you will see files of exactly 1 MB in size. The MergeContent processor will not truncate the content of a FlowFile to make a 1 MB output FlowFile.
The more common use case is to set an acceptable merged size range (min 800 KB - max 1 MB, for example). FlowFiles larger than 1 MB will still pass through unchanged.
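In property terms, that range would look something like this (the numbers are just the example values above, not a recommendation):

```
MergeContent
  Minimum Group Size:  800 KB   (merge once a bin reaches at least this size)
  Maximum Group Size:  1 MB     (never let a bin grow past this)
```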
01-10-2017 02:07 PM
@Avish Saha In the case where you are seeing a merged FlowFile larger than 1 MB, I suspect the merge is a single FlowFile that was larger than the 1 MB max. When a FlowFile arrives that exceeds the configured max, it is passed to both the original and merged relationships unchanged. Decreasing the number of bins only impacts heap usage; it does not change this behavior.
01-10-2017 01:40 PM
1 Kudo
@Joshua Adeleke You could extract the header bits from the first two lines into FlowFile attributes before the SplitText processor. All the FlowFiles that come out of the SplitText processor will get these new FlowFile attributes as well. You can then use the FlowFile attributes in your PutSQL.
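A rough sketch of that extraction with ExtractText; the attribute names and the assumption that each header field sits on its own line are mine, not from your data:

```
ExtractText  (dynamic properties; each regex capture group becomes an attribute)
  header.line1 = ^(.*)\n         (first line lands in attribute header.line1.1)
  header.line2 = ^.*\n(.*)\n     (second line lands in attribute header.line2.1)
```

Downstream, the values can be referenced with Expression Language, e.g. ${header.line1.1}, when building the SQL statement that feeds PutSQL.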
01-10-2017 01:31 PM
1 Kudo
@Avish Saha The behavior you should be seeing here is that the MergeContent processor will take the first incoming FlowFile it sees and add it to bin 1. It will then continue to attempt to add additional FlowFiles to bin 1 until either 1000 total FlowFiles have been added or the min size has reached 1 MB. Now let's say bin 1 has grown to 1000 KB (just shy of 1 MB) and the next FlowFile would cause that bin to exceed the max group size of 1 MB. In this case that file would not be allowed to go into bin 1 and would instead become the first file to start bin 2. Bin 1 now hangs around because the min requirement of 1 MB has not been met, and neither max entries nor max group size has been met either. So you can see it is possible to fill all 5 of your bins without meeting your very tightly configured thresholds. So what happens when the next FlowFile will not fit in any of the 5 existing bins? The MergeContent processor will merge the oldest bin to free up room to start a new bin. So what I am assuming here is you are seeing few or no files that are exactly 1 MB. Thanks, Matt