Member since: 07-29-2020
Posts: 574
Kudos Received: 323
Solutions: 176

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1972 | 12-20-2024 05:49 AM |
| | 2198 | 12-19-2024 08:33 PM |
| | 2018 | 12-19-2024 06:48 AM |
| | 1324 | 12-17-2024 12:56 PM |
| | 1884 | 12-16-2024 04:38 AM |
09-27-2022 11:26 AM

Brilliant! Exactly what I was looking for. Although it seems a little peculiar to me that we need to rely on a Jolt transform for this operation and not the UpdateRecord processor, particularly since NiFi makes a point of discussing arrays and maps in the documentation. Thanks for the Jolt transform; I spent a lot of time trying to get it to work and couldn't quite figure it out. Now I see what I was doing wrong.
09-26-2022 07:08 AM

Hello, you can configure a Process Group so that only one FlowFile at a time is handled inside it. There is a config option on the Process Group called Process Group FlowFile Concurrency, for which you can set the value Single FlowFile per Node. After PutKudu you either destroy the FlowFile or route it out of the Process Group; only then will the next FlowFile be released to enter the Process Group. In your case the flow would look like: ListFile -> Process Group (handles FetchFile, data transformation, and PutKudu)
09-23-2022 10:51 AM

I don't think there is an out-of-the-box processor that can do such a thing. However, as a workaround you can use the ExecuteSQL processor instead, since it lets you return the stored procedure output in Avro format in a new flowfile, based on whatever your select statement is in the ExecuteSQL SQL Select Query property. Since this generates a new flowfile, the assumption here is that you don't care about the original flowfile. Before going further and giving you an example of how to do it: do you want to preserve the original flowfile, and were you thinking of adding the stored procedure output as an attribute?
09-20-2022 10:21 AM

Hi, please try the following spec:

[
{
"operation": "shift",
"spec": {
"timestamp": {
"*": {
"@(2,resourceid)": "[&1].resourceid",
"@": "[&1].timestamp"
}
},
"Key": {
"*": {
"@": "[&1].key"
}
},
"data": {
"*": {
"@": "[&1].data"
}
}
}
}
]

If you find this helpful, please accept the solution. Thanks
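To make the spec concrete, here is a minimal sketch of the input shape it assumes (parallel timestamp/Key/data arrays plus a top-level resourceid; the values are hypothetical) and the output it should produce:

Input:

{
  "resourceid": "res-1",
  "timestamp": ["2022-09-20T10:00:00", "2022-09-20T11:00:00"],
  "Key": ["k0", "k1"],
  "data": ["d0", "d1"]
}

Output:

[
  {
    "resourceid": "res-1",
    "timestamp": "2022-09-20T10:00:00",
    "key": "k0",
    "data": "d0"
  },
  {
    "resourceid": "res-1",
    "timestamp": "2022-09-20T11:00:00",
    "key": "k1",
    "data": "d1"
  }
]

The "*" match on each array index becomes [&1] in the output path, which is what zips the three parallel arrays element by element into one array of records.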
09-19-2022 10:03 PM

Thanks @SAMSAL and @araujo for the responses. RouteOnAttribute is what I am using presently, but it gets unwieldy after just a couple of route options. Looks like I'm just going to need to build a custom validator using the ExecuteScript processor. Hopefully that scales.
09-19-2022 08:05 AM

Hi @SAMSAL, thanks, I have tried this out and it works. I thought you could directly edit the value in the JSON file. Now I have to merge all the split JSON files back into one.
09-19-2022 06:58 AM

Hi, is your input in JSON format? If it is JSON, do you need to completely replace it with the output you specified? If that is the case, then you don't need a Jolt transformation; instead you can do the following (see the sketch after this list):

1. Place the Gallery_Ids attribute as the flowfile content. You can do that using the ReplaceText processor, where the Replacement Value is the attribute ${GALLERY_IDS}.
2. Use SplitJson to split the array into different flowfiles; this should give 7 flowfiles with the values 1, 2, 3, 4, 5, 6 & 7.
3. Add another ReplaceText processor that will capture each split from above and replace the flowfile content with the following template in the Replacement Value: {"GALLERY_ID":"$1","PERSON_ID":"test$1"}. Leave the Search Value as (?s)(^.*$).

This should again give you 7 flowfiles with the expected output format. Hope that helps; if it does, please accept the solution. Thanks
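A minimal sketch of how the flowfile content evolves through those three steps, assuming the GALLERY_IDS attribute holds the hypothetical value [1,2,3,4,5,6,7]:

After step 1 (ReplaceText), the content is:

[1,2,3,4,5,6,7]

After step 2 (SplitJson, with a JsonPath Expression such as $.* to split the top-level array), you get 7 flowfiles; the first one contains:

1

After step 3 (ReplaceText with the template above), the first flowfile becomes:

{"GALLERY_ID":"1","PERSON_ID":"test1"}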
09-18-2022 10:58 AM

Hi, it doesn't seem like the "Stuff" function is a recognized function in the QueryRecord SQL. I think you have two options:

1. If you are dumping this data into a database where you can use the Stuff function, then delegate this to SQL before storing/processing the data.
2. Instead of trying to use the QueryRecord processor, I would try the JoltTransformJson processor with the following spec:

[
{
"operation": "modify-overwrite-beta",
"spec": {
"*": {
"tempStart": "=split('', @(1,start_time))",
"tempEnd": "=split('', @(1,start_end))",
"start_time": "=concat(@(1,tempStart[0]),@(1,tempStart[1]),':',@(1,tempStart[2]),@(1,tempStart[3]))",
"start_end": "=concat(@(1,tempEnd[0]),@(1,tempEnd[1]),':',@(1,tempEnd[2]),@(1,tempEnd[3]))"
}
}
},
{
"operation": "remove",
"spec": {
"*": {
"temp*": ""
}
}
}
]

Not sure how this will perform with a large dataset, but it's worth testing. Hope that helps; if it does, please accept the solution. Thanks
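As a rough illustration, assuming each record carries 4-digit HHmm strings (the values below are hypothetical), the split/concat pair inserts the colon much like the SQL Stuff call would:

Input:

[
  { "start_time": "0912", "start_end": "1715" }
]

Output:

[
  { "start_time": "09:12", "start_end": "17:15" }
]

=split('', ...) breaks each string into single characters, concat reassembles them with a ':' after the first two characters, and the second operation removes the temporary temp* fields.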
09-14-2022 07:56 AM

Actually, it looks like this kind of regex does not work. Though, regarding my question: I'm using NiFi 11.4... Thank you for your feedback 🙂
09-13-2022 03:23 AM

@SAMSAL thank you for the solution provided. While testing it, I noticed that sometimes the field values (strings) are different: sometimes they are in the form 20220807091252, which also includes hours and minutes, and sometimes 202209080. So I need a conversion step beforehand in order to convert them into a datetime. Am I right? Have you come across this? Many thanks