Member since: 07-29-2020
Posts: 452
Kudos Received: 184
Solutions: 136
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 280 | 05-06-2024 02:21 PM
 | 145 | 04-21-2024 05:53 AM
 | 146 | 04-19-2024 08:30 PM
 | 173 | 03-20-2024 02:58 AM
 | 279 | 03-20-2024 02:07 AM
05-08-2023
09:47 AM
If what you provided is giving you the expected output, then you are good to go :).
05-08-2023
07:58 AM
Hi, See if this helps: [
{
"operation": "shift",
"spec": {
"header": {
"timeStamp": "records.inv_activity_ts",
"activityId": "records.inv_activity_id",
"action": "records.action"
},
"resource": {
"drniId": "records.inv_id",
"subtype": "records.inv_subtype",
"name": "records.inv_name",
"resourceCharacteristic": {
"*": {
"name": {
"status": {
"$": "records.status",
"@(2,value)": "records.matchingStatus_value"
},
"installDate": {
"@(2,value)": "records.installDate"
}
}
}
}
}
}
}
] If that helps, please accept the solution. Thanks.
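As a rough illustration of what the shift spec above produces, here is a Python sketch. The sample input shape is an assumption inferred from the field names in the spec; the real transformation runs in the Jolt engine (a Java library), so this is only a model of the mapping, not an implementation.

```python
# Rough model of the Jolt shift spec above. The input shape is an assumption
# based on the field names in the spec; Jolt itself handles far more cases.

def shift(payload):
    header = payload["header"]
    resource = payload["resource"]
    records = {
        "inv_activity_ts": header["timeStamp"],
        "inv_activity_id": header["activityId"],
        "action": header["action"],
        "inv_id": resource["drniId"],
        "inv_subtype": resource["subtype"],
        "inv_name": resource["name"],
    }
    for ch in resource["resourceCharacteristic"]:
        if ch["name"] == "status":
            # "$" copies the matched literal key; "@(2,value)" walks two levels
            # up the match tree and grabs the sibling "value" field.
            records["status"] = ch["name"]
            records["matchingStatus_value"] = ch["value"]
        elif ch["name"] == "installDate":
            records["installDate"] = ch["value"]
    return {"records": records}

sample = {
    "header": {"timeStamp": "2023-05-08T07:00:00Z",
               "activityId": "A1", "action": "create"},
    "resource": {
        "drniId": "D1", "subtype": "S1", "name": "N1",
        "resourceCharacteristic": [
            {"name": "status", "value": "MATCHED"},
            {"name": "installDate", "value": "2023-01-01"},
        ],
    },
}
print(shift(sample)["records"]["matchingStatus_value"])  # MATCHED
```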
05-07-2023
05:05 PM
1 Kudo
@ThienTrinh, As a workaround, you can follow the scenario described here: https://stackoverflow.com/questions/55118389/i-have-two-json-payload-i-want-to-merge-them-in-a-single-json-object/55124212#55124212
05-06-2023
09:06 AM
1 Kudo
I can confirm that there is a behavior change in the Fork\Join Enrichment processors between 1.16 and 1.20. I uploaded the same template you sent to 1.16 and it worked with the 1.16 JsonTreeReader\Writer; however, the same template did not work on 1.20 using the 1.20 processors and services. @MattWho, @steven-matison Is this a bug, or a misunderstanding of how these processors and services should work in the later versions? Please advise. Thanks.
05-05-2023
01:39 PM
2 Kudos
Hi, One of the main differences I can see is that QueryDatabaseTableRecord has a Record Writer, which lets you decide the format of the output (JSON, XML, CSV, Parquet, etc.); a record writer service needs to be set up for the chosen format. QueryDatabaseTable, by contrast, only produces Avro output, with no record writer service to set up. This is similar to the case of ExecuteSQL vs. ExecuteSQLRecord. Another important difference is that QueryDatabaseTable has a "Transaction Isolation Level" property while the other doesn't. If that helps, please accept the solution. Thanks.
05-05-2023
06:37 AM
Hi, I'm not seeing any difference in the JsonTreeReader\Writer, so I'm not sure why you are not getting the expected output. The only difference I see is that I'm on version 1.16 and you are on 1.20; I'm not certain the behavior differs between the two versions, though I suspect it does. To troubleshoot, try to recreate the flow I built and use GenerateFlowFile and ReplaceText to generate outputs A & B from your post above. If that works, then I would double-check your original flow and make sure that InvokeHTTP is generating the desired output and that the same response output is reaching the ForkEnrichment\JoinEnrichment as designed. Let me know how that goes.
05-04-2023
10:48 AM
1 Kudo
Hi, I don't think there is a bug here. It all depends on the value you set for the ReplaceText "Evaluation Mode" property. My guess is that you have it set to the default, "Line-by-Line", which explains the behavior in this case because your content has no lines. To make this work, set it to "Entire Text" instead. If you want to check whether the file coming from ListFile has content and route accordingly, you can use RouteOnAttribute against the built-in flow file attribute "fileSize" with the expression language: ${fileSize:equals(0)}. If that helps, please accept the solution. Thanks.
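The difference between the two evaluation modes can be sketched in Python. This is a simplified model of the behavior, not NiFi's actual implementation: Line-by-Line applies the regex to each line, so content with zero lines is never touched, while Entire Text applies the regex once to the whole content, even when it is empty.

```python
import re

# Simplified model of ReplaceText's two evaluation modes (illustration only,
# not NiFi's actual code).

def replace_line_by_line(content, pattern, replacement):
    # No lines -> nothing to iterate over -> content passes through unchanged.
    return "".join(
        re.sub(pattern, replacement, line)
        for line in content.splitlines(keepends=True)
    )

def replace_entire_text(content, pattern, replacement):
    # The regex sees the whole content, even if it is empty.
    return re.sub(pattern, replacement, content, flags=re.DOTALL)

empty = ""
print(repr(replace_line_by_line(empty, r"^.*$", "DEFAULT")))  # '' (unchanged)
print(repr(replace_entire_text(empty, r"^.*$", "DEFAULT")))   # 'DEFAULT'
```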
05-04-2023
10:31 AM
Hi, I'm not sure which Enrichment Strategy you are using, but this can be accomplished with the Join Strategy "Insert Enrichment Fields" and Insertion Record Path "/". Here is my data flow: I'm using GenerateFlowFile and ReplaceText to simulate generating Flow Files A & B respectively. The output of JoinEnrichment:
[
  {
"id" : "CAT_c6848199-3594-453b-a733-c270bb837933",
"categoryId" : "category-external-id-456",
"updatedDate" : "2023-04-25T09:25:07.559Z",
"title" : {
"en-US" : "Industrial",
"vi-VN" : "Research and Development"
},
"parentCategory" : null,
"order" : 2,
"description" : {
"en-US" : "Aenean fermentum. Donec ut mauris eget massa tempor convallis. Nulla neque libero, convallis eget, eleifend luctus, ultricies eu, nibh.",
"vi-VN" : "Nulla mollis molestie lorem. Quisque ut erat."
},
"mediaUrls" : [ "http://dummyimage.com/239x100.png/5fa2dd/ffffff", "http://dummyimage.com/104x100.png/ff4444/ffffff" ],
"flexibleAttributes" : { },
"seoTitle" : {
"en-US" : "Books",
"vi-VN" : "Support"
},
"seoKeywords" : {
"en-US" : [ "augue", "lectus in quam fringilla rhoncus" ],
"vi-VN" : [ "nec dui luctus", "nulla sed" ]
},
"seoDescription" : {
"vi-VN" : "Aenean auctor gravida sem. Praesent id massa id nisl venenatis lacinia.",
"en-US" : "Nulla neque libero, convallis eget, eleifend luctus, ultricies eu, nibh. Quisque id justo sit amet sapien dignissim vestibulum. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Nulla dapibus dolor vel est."
},
"updatedDateExternal" : null,
"hashValue" : "e46e4c499f9a738c",
"channelId" : "bfc12c9e-821d-4351-93db-4092508f5a6e",
"output_channels" : [ "24194622-e1b1-11ed-860f-325096b39f47", "241947f8-e1b1-11ed-847a-325096b39f47", "24194866-e1b1-11ed-98dc-325096b39f47", "241948c0-e1b1-11ed-9370-325096b39f47", "24194906-e1b1-11ed-95f0-325096b39f47", "2419494c-e1b1-11ed-a8f9-325096b39f47", "24194992-e1b1-11ed-8da8-325096b39f47", "241949ce-e1b1-11ed-9f22-325096b39f47", "24194a14-e1b1-11ed-a030-325096b39f47", "24194b9a-e1b1-11ed-bfd2-325096b39f47" ]
}
] If that helps, please accept the solution. Thanks.
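Conceptually, the "Insert Enrichment Fields" strategy with Insertion Record Path "/" merges each enrichment record's fields into the matching original record at its root. A minimal Python sketch of that merge (my own model of the behavior, not NiFi's implementation; the sample records are placeholders):

```python
# Minimal model of JoinEnrichment's "Insert Enrichment Fields" strategy with
# Insertion Record Path "/": each enrichment record's fields are merged into
# the corresponding original record at its root. Illustration only.

def insert_enrichment_fields(originals, enrichments):
    joined = []
    for original, enrichment in zip(originals, enrichments):
        merged = dict(original)    # copy the original record
        merged.update(enrichment)  # insert enrichment fields at "/"
        joined.append(merged)
    return joined

flow_file_a = [{"id": "CAT_1", "title": {"en-US": "Industrial"}}]
flow_file_b = [{"output_channels": ["chan-1", "chan-2"]}]
print(insert_enrichment_fields(flow_file_a, flow_file_b))
```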
05-03-2023
06:52 PM
1 Kudo
Hi, It seems like you are generating your SQL from a JSONToSQL-type processor and then using PutSQL to execute the generated statement, is that correct? If so, I don't think there is an easy way to capture the actual values in the SQL statement, since they are expected to be carried as flow file attributes in the format "sql.args.N.value", per the PutSQL documentation. The only suggestion I have is to write custom code in an ExecuteScript processor after the "retries-exceeded" relationship to replace the placeholders (?, ?, ? ...) in the flow file content with the sql.args.N.value attributes, where N = placeholder index + 1. The logic would be: extract the placeholder list into a variable, split it on commas, loop through the array of "?", build a replacement string from sql.args.[i+1].value, and when the loop finishes, replace the placeholder string with the value string, store the result as the new flow file content, and route it to success. For more info on writing custom scripts with ExecuteScript: https://community.cloudera.com/t5/Community-Articles/ExecuteScript-Cookbook-part-1/ta-p/248922 https://community.cloudera.com/t5/Community-Articles/ExecuteScript-Cookbook-part-2/ta-p/249018 If anyone has a better idea, please feel free to provide your input. If that helps, please accept the solution. Thanks.
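The substitution logic described above can be sketched like this. This is plain Python for illustration; inside ExecuteScript you would read the content and attributes from the NiFi session (e.g. in Groovy or Jython), and the quoting of substituted values is my own assumption.

```python
import re

# Sketch of the placeholder-substitution logic described above: replace the
# i-th "?" in the failed SQL statement with the flow file attribute
# "sql.args.<i+1>.value". Illustration only; in a real ExecuteScript you would
# read content and attributes from the NiFi session instead.

def fill_placeholders(sql, attributes):
    counter = {"i": 0}

    def next_value(_match):
        counter["i"] += 1
        # Quote the value as a SQL string literal (simplistic; assumes text).
        return "'" + attributes[f"sql.args.{counter['i']}.value"] + "'"

    return re.sub(r"\?", next_value, sql)

sql = "INSERT INTO users (id, name) VALUES (?, ?)"
attrs = {"sql.args.1.value": "42", "sql.args.2.value": "alice"}
print(fill_placeholders(sql, attrs))
# INSERT INTO users (id, name) VALUES ('42', 'alice')
```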
05-02-2023
09:48 AM
2 Kudos
Hi, This one was a little trickier than the first post, but it seems there is nothing you can't do with Jolt 🙂 . Please try the following spec: [
{
// combine all resourceRelationshipCharacteristic entries under one group
// and assign each element under the group a unique key based on
// its index location, starting from the first array under resourceRelationship (&3)
// and ending with the nested array resourceRelationshipCharacteristic (&1), so
// each element gets a unique name: 00, 01, 10, 11, ...
"operation": "shift",
"spec": {
"resource": {
"resourceRelationship": {
"*": {
"resourceRelationshipCharacteristic": {
"*": {
"@(6,header.action)": "&3&1.action",
"@(6,header.timeStamp)": "&3&1.timeStamp",
"@(2,relationDrniId)": "&3&1.relationDrniId",
"*": "&3&1.&"
}
}
}
}
}
}
},
{
// bucket each element (00, 01, 10, 11, ...) value into a new "records" array
"operation": "shift",
"spec": {
"*": "records.[#1]"
}
}
] Hope that helps. I wonder if there is a better\cleaner way. @araujo @cotopaul @steven-matison
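As a rough illustration of what the two shift operations above do, here is a Python sketch. The input shape is an assumption inferred from the field names in the spec; the real transformation runs in the Jolt engine, so this is only a model of the mapping.

```python
# Rough model of the two-step Jolt spec above. Step 1 keys each characteristic
# by its two array indices ("00", "01", "10", ...) and pulls in header fields
# plus the parent relationship's relationDrniId; step 2 buckets the grouped
# values into a "records" array. Input shape is assumed from the spec.

def group_and_bucket(payload):
    header = payload["header"]
    groups = {}
    # Step 1: first shift operation.
    for i, rel in enumerate(payload["resource"]["resourceRelationship"]):
        for j, ch in enumerate(rel["resourceRelationshipCharacteristic"]):
            entry = dict(ch)
            entry["action"] = header["action"]
            entry["timeStamp"] = header["timeStamp"]
            entry["relationDrniId"] = rel["relationDrniId"]
            groups[f"{i}{j}"] = entry
    # Step 2: second shift operation buckets each group into "records".
    return {"records": list(groups.values())}

sample = {
    "header": {"action": "create", "timeStamp": "2023-05-02T09:00:00Z"},
    "resource": {
        "resourceRelationship": [
            {
                "relationDrniId": "R1",
                "resourceRelationshipCharacteristic": [
                    {"name": "c1", "value": "v1"},
                    {"name": "c2", "value": "v2"},
                ],
            }
        ]
    },
}
print(len(group_and_bucket(sample)["records"]))  # 2
```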