Member since: 06-08-2017
Posts: 1049
Kudos Received: 518
Solutions: 312

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11236 | 04-15-2020 05:01 PM |
| | 7142 | 10-15-2019 08:12 PM |
| | 3124 | 10-12-2019 08:29 PM |
| | 11525 | 09-21-2019 10:04 AM |
| | 4351 | 09-19-2019 07:11 AM |
10-18-2018
01:11 AM
@Nisha Patel Could you please attach a screenshot of your flow, so we can understand how it is designed, along with the configuration of your MergeContent processor?
10-16-2018
09:45 PM
1 Kudo
@Gary Mullen-Schultz You have an array of JSON objects and are trying to extract the name value as a flowfile attribute. Configure the EvaluateJsonPath processor as below: thename → `$.[0].name` This path navigates into the array and extracts the name value from the first element. However, the more correct way of doing this is: 1. Use a SplitJson processor configured with `$.*`; this splits the JSON array into individual flowfiles. 2. Then use an EvaluateJsonPath processor and add a new property: thename → `$.name` If the answer helped to resolve your issue, click the Accept button below to accept the answer; that helps community users find solutions quickly for these kinds of issues.
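For reference, here is a plain-Python sketch of what the two approaches do (the sample records are hypothetical, and the loop only stands in for NiFi's split-then-extract behavior):

```python
import json

# Hypothetical sample content: a flowfile containing an array of JSON records.
content = '[{"name": "alice", "id": 1}, {"name": "bob", "id": 2}]'
records = json.loads(content)

# EvaluateJsonPath with $.[0].name: reach into the array and take only the
# first record's name.
print(records[0]["name"])   # alice

# SplitJson ($.*) splits the array into one flowfile per record;
# EvaluateJsonPath ($.name) then extracts the name from each one.
for record in records:      # each iteration stands in for one split flowfile
    print(record["name"])   # alice, then bob
```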
10-16-2018
12:48 PM
@Pepelu Rico Could you check the scheduling of the GetHDFSFileInfo processor? By default this processor is scheduled to run every 0 sec (always running), and I think that is what is causing the 10,000 flowfiles. GetHDFSFileInfo doesn't store state, so it lists out all the files in the directory on every run. Change the Run Schedule to something like 1 hr; the processor will then run once per hour and you will get only the number of files actually in the directory.
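To illustrate (a toy sketch with made-up file names, not NiFi code): with no stored state, each scheduled run re-emits the full listing, so the queue grows with every run.

```python
def list_directory():
    # Stand-in for GetHDFSFileInfo: stateless, always returns the full listing.
    return ["/tmp/f1.txt", "/tmp/f2.txt", "/tmp/f3.txt"]

queued = []
for run in range(5):        # Run Schedule = 0 sec: fires over and over
    queued.extend(list_directory())
print(len(queued))          # 15 queued flowfiles for only 3 files on disk

# Run Schedule = 1 hr: one run per hour, one flowfile per file on disk.
```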
10-15-2018
07:41 PM
@Nisha Patel Take off the Max Bin Age property value. You have configured it as 10 sec, so the MergeContent processor waits 10 seconds and then merges whatever flowfiles it has; but with the Defragment strategy the processor needs 38225 fragments and found only 10k, so it routes them to failure. In addition, use a record-oriented processor (UpdateRecord, etc.) and then a PublishKafkaRecord processor; record-oriented processors are intended to work on batches of records, so you don't have to use multiple split processors at all. Flow:
1. ListFile
2. FetchFile
3. UpdateRecord (use this processor instead of ReplaceText)
4. PublishKafkaRecord
-- other processing
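As a toy illustration of the Defragment failure (the numbers come from this thread; the logic is only sketched, not NiFi's actual implementation):

```python
# Defragment needs every declared fragment before it can reassemble; if the
# bin ages out first, the incomplete set is routed to failure.
fragment_count = 38225      # fragments expected (fragment.count attribute)
arrived = 10000             # fragments actually queued within 10 sec
max_bin_age_reached = True  # Max Bin Age = 10 sec has elapsed

if max_bin_age_reached and arrived < fragment_count:
    print("bin incomplete -> route all fragments to failure")
else:
    print("all fragments present -> merge into the original flowfile")
```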
10-15-2018
01:00 PM
@Pepelu Rico
Please check my `updated answer`. We don't need to run the command in the processor; it is designed so that you just configure the directory, and all the commands are run by the processor itself.
10-15-2018
12:16 PM
@Pepelu Rico
Use the GetHDFSFileInfo processor and configure the Full Path property value as `<directory>`. This processor is stateless, so it will list out all the files from the directory on every run. GetHDFSFileInfo configs: we are listing out all files in the /tmp directory recursively, with Destination configured as Attributes, so each flowfile will carry the processor's write attributes as flowfile attributes. This processor was added in NiFi 1.7; if you are using an earlier version of NiFi, you need to run a script that lists out all the files in the directory, then extract the path and use the extracted attribute in a FetchHDFS processor (a sketch of such a script is below).
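A minimal sketch of such a pre-1.7 listing script, assuming the `hdfs` CLI is available on the machine (e.g. invoked from an ExecuteStreamCommand or ExecuteProcess processor); it prints one absolute HDFS path per line, which can then be extracted and fed to FetchHDFS:

```python
import subprocess

# Recursively list everything under /tmp via the HDFS shell.
result = subprocess.run(
    ["hdfs", "dfs", "-ls", "-R", "/tmp"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    fields = line.split()
    # `hdfs dfs -ls` prints 8 columns per entry; skip directory rows
    # (their permission string starts with 'd').
    if len(fields) == 8 and not fields[0].startswith("d"):
        print(fields[7])  # the full HDFS path
```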
10-12-2018
08:01 PM
@David Sargrad Both ways are possible, and the easiest would be using a file. Approach 1: if you want to work with arrays, refer to section 2.2 of this link to loop through the array and process one string at a time. Approach 2: you can also keep all the URLs in a file with a newline as the row delimiter, then use a SplitText processor (split one line each) and an ExtractText processor to extract the content as an attribute, and use the attribute value to make the call. Refer to this and this link for more details regarding the second approach and serial data processing with NiFi.
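Outside of NiFi, the second approach boils down to something like this rough Python sketch (the file name urls.txt is hypothetical), which mirrors what SplitText + ExtractText + an HTTP call do in the flow:

```python
import urllib.request

# One URL per line, processed serially -- one call at a time.
with open("urls.txt") as f:
    for line in f:
        url = line.strip()
        if not url:
            continue  # skip blank lines
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status)
```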
10-10-2018
10:11 PM
1 Kudo
@Suresh Dendukuri The `GenerateFlowFile` processor creates flowfiles with random data or custom content, so it can output your custom text. Another way is to use a `ReplaceText` processor with Replacement Value set to your custom text and Replacement Strategy set to Always Replace. There is also a `DuplicateFlowFile` processor, intended for load testing, which duplicates the incoming flowfile into the number of flowfiles configured in the processor. Apart from these, I don't think there are any other processors intended for providing custom text in NiFi.
10-09-2018
10:21 PM
@Suresh Dendukuri You can use the NiFi REST API for this case and start all the processors in process group B based on the success of process group A. Feed the success relationship to an InvokeHTTP processor (or a shell script) that executes the API call to start process group B. Refer to this, this, and this link for more details regarding REST API calls and starting a NiFi process group.
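As a minimal sketch of the call itself (host, port, and the process-group UUID are placeholders to substitute with your own; assumes an unsecured NiFi 1.x instance, where the UUID is visible in the NiFi UI URL):

```python
import json
import urllib.request

nifi = "http://localhost:8080"                     # placeholder host/port
group_id = "REPLACE-WITH-PROCESS-GROUP-B-UUID"     # placeholder UUID

# PUT /nifi-api/flow/process-groups/{id} with state RUNNING schedules
# every component in the process group to start.
body = json.dumps({"id": group_id, "state": "RUNNING"}).encode()
req = urllib.request.Request(
    f"{nifi}/nifi-api/flow/process-groups/{group_id}",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the start request was accepted
```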
10-09-2018
10:14 PM
@Suresh Dendukuri If your flowfile has only one key/value pair, use a ReplaceText processor and configure it as shown below. We search for the literal value project and replace it with _id.project by using Replacement Strategy Literal Replace.
Input: `{"project" :"ABC-11"}`
Output: `{"_id.project":"ABC-11"}`
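In plain Python terms, Literal Replace is just a string substitution on the flowfile content, with no regex involved (a toy demo, not NiFi code):

```python
content = '{"project" :"ABC-11"}'
# Search Value "project", Replacement Value "_id.project": only the matched
# literal text is swapped; the rest of the content is left untouched.
print(content.replace("project", "_id.project"))
# {"_id.project" :"ABC-11"}
```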