Member since: 07-29-2020
Posts: 558
Kudos Received: 307
Solutions: 167
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 112 | 11-28-2024 06:07 AM |
| | 78 | 11-25-2024 09:21 AM |
| | 213 | 11-22-2024 03:12 AM |
| | 116 | 11-20-2024 09:03 AM |
| | 314 | 10-29-2024 03:05 AM |
11-26-2024
11:37 AM
1 Kudo
Can you provide more information on your dataflow? Let's say you are using GenerateFlowFile to create the JSON Kafka output; what happens next? How are you enriching the data, and in which processor are you using the JSON reader/writer service that is causing the error? I need to see the full picture here, because when I use the same JSON you provided in a GenerateFlowFile processor and then pass it to QueryRecord with the same JSON reader/writer service configuration, it seems to be working!
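For reference, a minimal sanity check along those lines is a pass-through query added to QueryRecord as a dynamic property (the property name here, `output`, is arbitrary; it just becomes the name of the outgoing relationship):

```sql
SELECT * FROM FLOWFILE
```

If the reader/writer services are misconfigured, even this pass-through query should surface the parsing error.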
11-26-2024
10:57 AM
1 Kudo
Hi @PradNiFi1236 , How are you adding the new fields? Your JSON appears to be invalid as provided.
11-26-2024
10:52 AM
1 Kudo
Hi, Can you provide more explanation/screenshots of your dataflow and the configuration set on each processor/controller service? Also, if you can provide sample data that can be converted to Parquet and reproduces the error, that would be helpful as well. Thanks
11-25-2024
09:21 AM
Hi, I don't see a toNumber function in the record path syntax, so I'm not sure where that came from. It would be helpful next time if you provide the following information: 1. The input format. 2. A screenshot of the configuration of the processor causing the error. As for your problem, the easiest and most efficient way I can think of (more efficient than splitting records) is to use the QueryRecord processor. Let's assume you have the following CSV input:

```
id,date_time
1234,2024-11-24 19:43:17
5678,2024-11-24 01:10:10
```

You can pass the input to the QueryRecord processor, where the query below is added as a dynamic property. That exposes a new relationship, named after the property, which you can use to get the desired output. The query syntax is the following:

```sql
select id, TIMESTAMPADD(HOUR, -3, date_time) as date_time from flowfile
```

The trick for this to work is how you configure the CSV Reader and Writer, since they set the expectation for how datetime fields are parsed and written. For the CSVReader and CSVRecordSetWriter, make sure the timestamp format matches the input (a sketch of plausible settings follows this post). Output through the Result relationship:

```
id,date_time
1234,2024-11-24 16:43:17
5678,2024-11-23 22:10:10
```

Hope that helps. If it does, please accept the solution. Thanks
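Since the exact service settings aren't shown above, here is a plausible configuration for the two controller services; treat the property values as assumptions rather than the exact settings used in the original flow:

```
# CSVReader (controller service)
Schema Access Strategy : Use String Fields From Header
Timestamp Format       : yyyy-MM-dd HH:mm:ss

# CSVRecordSetWriter (controller service)
Schema Write Strategy  : Do Not Write Schema
Timestamp Format       : yyyy-MM-dd HH:mm:ss
```

The Timestamp Format on both services is what tells the record framework how to parse date_time on the way in and how to render it on the way out.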
11-22-2024
03:12 AM
3 Kudos
Hi, It seems like the URL you have in InvokeHTTP is invalid compared to the one in Postman:

InvokeHTTP: https://tolingo-portal-test.s.xtrf.eu/home-api/quotes/ZQ6BPIPYHVE2FCVQ4HASNRIU3I/status
Postman: https://tolingo-portal-test.s.xtrf.eu/home-api/v2/quotes/ZQ6BPIPYHVE2FCVQ4HASNRIU3I/status

You are missing the v2 segment in InvokeHTTP. When I try the correct URL with the same configuration it works, although of course I don't have access 🙂
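For a quick check outside NiFi, the corrected endpoint can also be exercised with curl; the Authorization header below is an assumption, since I don't know how this API authenticates:

```
curl -X GET \
  -H "Authorization: Bearer <token>" \
  "https://tolingo-portal-test.s.xtrf.eu/home-api/v2/quotes/ZQ6BPIPYHVE2FCVQ4HASNRIU3I/status"
```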
11-21-2024
01:41 PM
Hi, Can you elaborate more on the API call and which HTTP method it uses? If you can also show how the API works using Postman or curl, that would be helpful too. Just FYI, if you are trying to send a JSON body with the GET method, it's not going to work with this processor; you would have to write custom code using ExecuteScript to work around it (see the sketch below).
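As an illustration of that workaround, here is a minimal ExecuteScript (Groovy) sketch. The endpoint is hypothetical, and it assumes NiFi runs on Java 11+ so that java.net.http.HttpClient is available; unlike HttpURLConnection, it permits a body on a GET request:

```groovy
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.charset.StandardCharsets
import org.apache.nifi.processor.io.InputStreamCallback
import org.apache.nifi.processor.io.OutputStreamCallback

def flowFile = session.get()
if (flowFile == null) return

// Read the JSON body to send from the incoming flowfile content
def jsonBody = ''
session.read(flowFile, { inputStream ->
    jsonBody = inputStream.getText(StandardCharsets.UTF_8.name())
} as InputStreamCallback)

try {
    // Build a GET request that carries a body (endpoint is hypothetical)
    def request = HttpRequest.newBuilder()
            .uri(URI.create('https://api.example.com/search'))
            .header('Content-Type', 'application/json')
            .method('GET', HttpRequest.BodyPublishers.ofString(jsonBody))
            .build()
    def response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())

    // Replace the flowfile content with the response body
    flowFile = session.write(flowFile, { outputStream ->
        outputStream.write(response.body().getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (Exception e) {
    log.error('GET-with-body request failed', e)
    session.transfer(flowFile, REL_FAILURE)
}
```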
11-20-2024
09:03 AM
1 Kudo
Hi, I'm unable to replicate the error. Can you provide more details about your flow, including the processor configurations? Here is what I tried, and it worked: [screenshots: GenerateFlowFile configuration, EvaluateXPath configuration, output flowfile attributes]
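Since those screenshots aren't available, a comparable minimal setup would be: GenerateFlowFile emitting a small XML document (hypothetical content below), followed by EvaluateXPath with Destination set to flowfile-attribute and a dynamic property such as order.id = /order/id/text(), which writes the extracted value into the order.id attribute:

```xml
<!-- Hypothetical GenerateFlowFile content -->
<order>
  <id>1234</id>
</order>
```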
11-19-2024
09:31 AM
1 Kudo
Hi, Can you provide a sample of this data, highlighting what the problem is and how you expect it to be solved? Also, what kind of processor/service are you trying to use to parse this data?
10-31-2024
04:55 PM
2 Kudos
Hi @Syed0000 , Sorry for the delay. Your JSON is quite complex and deeply nested, which makes the Jolt very hard to write. You probably know your data very well, and my recommendation before writing Jolt for such JSON is to simplify it first by stripping unneeded blocks and/or flattening the structure where the transformation is not going to happen on the same nested level. I honestly tried, but once the Jolt got so nested it became harder to reference the upper fields and upper array indexes to maintain the same array position and then maintain the required grouping of fields; maybe that is the drawback of using a Jolt spec in these scenarios (without prior simplification, of course :). This leads me to the second option, which I think I mentioned to you in a prior post: using a JSLT transformation instead. JSLT is the better option in these cases because you can traverse the structure much more easily, and since you have conditions on how to set fields and values, that is also easier to achieve with expressions like if-else. For example, if we take the transformation that creates the catalogCondition structure, which seems to be the most complex, here is how the JSLT looks:

```
let paths = {
  "paths": [for (.tiers)
    {
      for (.conditions)
      "catalogCondition":
        {
          "category":
            [for (.conditionsMerch)
              {
                "include": if (.excludeInd == 0) "yes" else "no",
                "categoryList": [.webCategoryId]
              }
              if (.merchandiseLevel != 5 and not(.keyItemNum))]
        }
        +
        {
          "products": [for (.conditionsMerch)
            {
              "include": if (.excludeInd == 0) "yes" else "no",
              "productList": [.keyItemNum]
            }
            if (not(.webCategoryId))]
        }
    }
  ]
}

{
  "pricePaths": $paths
}
```

I know this might look a little intimidating at first, but it's not nearly as bad as when you try to do it with Jolt. I understand JSLT is a bit of a learning curve, but I believe it would save you tons of time long term, especially when dealing with long, complex, and nested JSON transformations. I know this is not the answer you were looking for, but hopefully it can help you when facing other JSON transformation challenges in the future. Good luck.
10-30-2024
04:41 AM
1 Kudo
Hi @fisblack , Welcome to the community. Your transformation is kind of odd: usually a transformation is driven by existing values or structure, even when generating new fields with defaults. I have never seen a case where newly generated values drive the transformation of existing ones. I assume you are getting the list of courseSubCategory values from somewhere and that the script is dynamic, utilizing expression language and attributes. If you provide some context on how the list comes about and what you are trying to achieve, maybe we can give you a more accurate solution. There are multiple ways of solving your problem as is (assuming you always want to replicate the existing field(s) against a newly generated three-element array with default values of 1, 2 and 3); here is one of them:

```json
[
  {
    "operation": "shift",
    "spec": {
      "*": "existing.&",
      "#1|#2|#3": "courseSubCategory"
    }
  },
  {
    "operation": "shift",
    "spec": {
      "courseSubCategory": {
        "*": {
          "@(2,existing)": "[&1]",
          "@": "[&1].courseSubCategory"
        }
      }
    }
  }
]
```

If you want to make the number of courseSubCategory values more dynamic, you can construct the array assignment string "#1|#2|#3" in an upstream processor and store the value as a flowfile attribute (let's say it's called the same). Then, in the first shift, the assignment of courseSubCategory uses the flowfile attribute instead: "${courseSubCategory}": "courseSubCategory" (see the snippet below). Hope that helps. If it does, please accept the solution. Thanks
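For illustration, the first shift with the attribute substitution in place would look like this (the attribute name courseSubCategory is assumed here):

```json
{
  "operation": "shift",
  "spec": {
    "*": "existing.&",
    "${courseSubCategory}": "courseSubCategory"
  }
}
```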