Member since: 07-29-2020
Posts: 574
Kudos Received: 323
Solutions: 176

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1945 | 12-20-2024 05:49 AM |
 | 2184 | 12-19-2024 08:33 PM |
 | 1991 | 12-19-2024 06:48 AM |
 | 1323 | 12-17-2024 12:56 PM |
 | 1845 | 12-16-2024 04:38 AM |
11-25-2024
09:21 AM
Hi, I don't see a toNumber function in the RecordPath syntax, so I'm not sure how you came up with that. It would be helpful next time if you provide the following information: 1) the input format, and 2) a screenshot of the processor configuration causing the error.

As for your problem, the easiest and more efficient way I can think of (rather than splitting the records) is to use the QueryRecord processor. Let's assume you have the following CSV input:

id,date_time
1234,2024-11-24 19:43:17
5678,2024-11-24 01:10:10

You can pass the input to the QueryRecord processor and add the query below as a dynamic property. The dynamic property exposes a new relationship with the property name, which you can use to get the desired output. The query syntax is the following:

select id, TIMESTAMPADD(HOUR, -3, date_time) as date_time from flowfile

The trick for this to work is how you configure the CSV Reader and Writer, so the reader and writer services know how to parse and write the datetime field (the CSVReader and CSVRecordSetWriter configuration screenshots are not reproduced here).

Output through the Result relationship:

id,date_time
1234,2024-11-24 16:43:17
5678,2024-11-23 22:10:10

Hope that helps. If it does, please accept the solution. Thanks.
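For reference, a minimal sketch of the reader, writer, and processor settings described above; since the original screenshots are not included, the exact values are assumptions and may need adjusting to your NiFi version and schema strategy:

CSVReader
    Schema Access Strategy: Infer Schema
    Timestamp Format: yyyy-MM-dd HH:mm:ss

CSVRecordSetWriter
    Schema Access Strategy: Inherit Record Schema
    Timestamp Format: yyyy-MM-dd HH:mm:ss

QueryRecord
    Record Reader: CSVReader
    Record Writer: CSVRecordSetWriter
    Result (dynamic property): select id, TIMESTAMPADD(HOUR, -3, date_time) as date_time from flowfile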
11-22-2024
05:44 AM
1 Kudo
@SAMSAL Jeez... I should not have prepared that flow at the end of a 12-hour workday... Of course it works now. Sorry for the trouble, and thanks for the quick support.
11-20-2024
09:03 AM
1 Kudo
Hi, I'm unable to replicate the error. Can you provide more details about your flow, including the processor configurations? Here is what I tried and it worked; the GenerateFlowFile and EvaluateXPath configurations and the resulting flowfile attributes were shown as screenshots, which are not reproduced here.
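For illustration, a minimal sketch of the kind of setup that worked; the XML payload, attribute name, and XPath below are made up for this example, so substitute your actual data:

GenerateFlowFile
    Custom Text: <catalog><book><title>NiFi in Action</title></book></catalog>

EvaluateXPath
    Destination: flowfile-attribute
    Return Type: string
    book_title (dynamic property): /catalog/book/title/text()

Resulting flowfile attribute: book_title = NiFi in Action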
11-19-2024
09:31 AM
1 Kudo
Hi, can you provide a sample of this data, highlighting what the problem is and how you expect to solve it? Also, what kind of processor or service are you trying to use to parse this data?
11-06-2024
07:38 AM
@fisblack Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
10-31-2024
04:55 PM
2 Kudos
Hi @Syed0000, sorry for the delay. Your JSON is quite complex and deeply nested, which makes the Jolt spec very hard to write. You probably know your data well, so my recommendation before writing Jolt for such JSON is to simplify it first: strip the unneeded blocks and/or flatten the structure wherever the transformation is not going to stay on the same nesting level. I honestly tried, but once the Jolt got that nested it became hard to reference the upper-level fields and array indexes in order to keep the same array positions and then maintain the required grouping of fields. Maybe that is the drawback of using a Jolt spec in these scenarios (without prior simplification, of course :).

This leads me to the second option, which I think I mentioned to you in a prior post: using JSLT Transformation instead. JSLT is the better option in these cases because you can traverse the structure much more easily, and since you have conditions on how to set fields and values, that is also easier to express with if-else expressions. For example, if we take the transformation that creates the catalogCondition structure, which seems to be the most complex part, here is how the JSLT looks:

let paths = {
  "paths": [for (.tiers)
    {
      for (.conditions)
      "catalogCondition":
        {
          "category":
            [for (.conditionsMerch)
              {
                "include": if (.excludeInd == 0) "yes" else "no",
                "categoryList": [.webCategoryId]
              }
              if (.merchandiseLevel != 5 and not(.keyItemNum))]
        }
        +
        {
          "products": [for (.conditionsMerch)
            {
              "include": if (.excludeInd == 0) "yes" else "no",
              "productList": [.keyItemNum]
            }
            if (not(.webCategoryId))]
        }
    }
  ]
}

{
  "pricePaths": $paths
}

I know this might look a little intimidating at first, but it is not nearly as bad as trying to do the same with Jolt. I understand JSLT is a bit of a learning curve, but I believe it will save you tons of time in the long term, especially when dealing with long, complex, nested JSON transformations. I know this is not the answer you were looking for, but hopefully it helps when you face other JSON transformation challenges in the future. Good luck.
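If the full spec looks dense, here is the core pattern in isolation: a tiny, self-contained JSLT using the same array-for plus if-else construct as above, with a made-up input just for illustration.

Input:
{ "conditionsMerch": [ { "excludeInd": 0, "webCategoryId": 10 }, { "excludeInd": 1, "webCategoryId": 20 } ] }

JSLT:
{
  "category": [for (.conditionsMerch)
    {
      "include": if (.excludeInd == 0) "yes" else "no",
      "categoryList": [.webCategoryId]
    }]
}

Output:
{ "category": [ { "include": "yes", "categoryList": [10] }, { "include": "no", "categoryList": [20] } ] }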
10-30-2024
08:10 AM
For reference, I tried solution #1 where I added an UpdateAttribute processor and deleted any references to SQL attributes. That did it. Thanks!
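For anyone finding this later, a minimal sketch of how attributes can be dropped with UpdateAttribute; the sql\..* pattern is only an assumption about which attributes were being removed here:

UpdateAttribute
    Delete Attributes Expression: sql\..*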
10-30-2024
01:14 AM
1 Kudo
Thank you @SAMSAL. We used ExecuteProcess with a curl script file; when writing the script into ExecuteProcess itself, the body still did not go through. Your message helped solve the problem 👏, but unfortunately I don't have a button to mark it as a solution. 😪
10-20-2024
01:45 AM
@drewski7 wrote: "@AndreyDE Is one flowfile going into the SplitText processor and outputting 10000 flowfiles?"
Yes, one flowfile.

"How big is the flowfile going into the SplitText processor?"
About 30 KB.

"Or is the source of the pipeline recursively getting all objects in your S3 bucket?"
Yes, it searches all objects recursively.
10-16-2024
02:59 PM
1 Kudo
@SAMSAL thank you so much for your detailed answer, I really appreciate it (and sorry for not being able to reply earlier).