Member since: 07-29-2020
Posts: 574
Kudos Received: 323
Solutions: 176

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3580 | 12-20-2024 05:49 AM |
| | 3826 | 12-19-2024 08:33 PM |
| | 3626 | 12-19-2024 06:48 AM |
| | 2356 | 12-17-2024 12:56 PM |
| | 3115 | 12-16-2024 04:38 AM |
05-05-2023
01:39 PM
2 Kudos
Hi,

One of the main differences I can see is that QueryDatabaseTableRecord has a Record Writer property, which lets you decide the format of the output (JSON, XML, CSV, Parquet, etc.); a record writer controller service has to be set up for whichever format you choose. QueryDatabaseTable, on the other hand, only produces Avro output and needs no record writer service. This is similar to the relationship between ExecuteSQL and ExecuteSQLRecord.

Another important difference I see is that QueryDatabaseTable has a "Transaction Isolation Level" property while the other doesn't.

If that helps, please accept the solution. Thanks
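As a rough sketch of the configuration difference (property labels as they appear in the NiFi UI; the table and service names are placeholders, not from the original question):

```text
# QueryDatabaseTable – output is always Avro, no writer service needed
Database Connection Pooling Service : DBCPConnectionPool
Table Name                          : my_table

# QueryDatabaseTableRecord – output format comes from the writer service
Database Connection Pooling Service : DBCPConnectionPool
Table Name                          : my_table
Record Writer                       : JsonRecordSetWriter   # or a CSV/XML/Parquet writer
```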
05-04-2023
10:48 AM
1 Kudo
Hi,

I don't think there is a bug here. It all depends on the value you set for the Evaluation Mode property in ReplaceText. My guess is that it is set to the default, "Line-by-Line", which fails in this case because the flowfile has no lines. To make this work you need to set it to "Entire text" instead.

If you want to check whether the file coming from ListFile has content and route accordingly, you can use RouteOnAttribute against the built-in flowfile attribute "fileSize" with the expression language: ${fileSize:equals(0)}

If that helps, please accept the solution. Thanks
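That expression boils down to a simple numeric comparison on the attribute; here is a tiny Python sketch of the routing decision (the function name is mine, not a NiFi API):

```python
def route_on_file_size(attributes):
    # Mirrors RouteOnAttribute with ${fileSize:equals(0)}:
    # empty files go to the matched relationship, everything else is unmatched.
    return "matched" if int(attributes.get("fileSize", "0")) == 0 else "unmatched"
```

For example, a zero-byte file listed by ListFile carries `fileSize = "0"` and would route to the matched relationship.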
05-03-2023
06:52 PM
1 Kudo
Hi,

It sounds like you are generating your SQL with a ConvertJSONToSQL-type processor and then using PutSQL to execute the generated statement; is that correct? If so, I don't think there is an easy way to capture the actual values in the SQL statement, since they are carried as flowfile attributes of the form "sql.args.N.value", per the PutSQL documentation.

The only suggestion I have to overcome this is to write custom code in an ExecuteScript processor attached to the "retries-exceeded" relationship that replaces the placeholders (?, ?, ?, ...) in the flowfile content with the corresponding sql.args.N.value attributes, where N = placeholder index + 1. In other words: find each "?" in the statement, and for the i-th one substitute the value of the attribute sql.args.[i+1].value; when done, write the result to the flowfile content and route it to success.

For more info on writing custom scripts with ExecuteScript:
https://community.cloudera.com/t5/Community-Articles/ExecuteScript-Cookbook-part-1/ta-p/248922
https://community.cloudera.com/t5/Community-Articles/ExecuteScript-Cookbook-part-2/ta-p/249018

If anyone has a better idea, please feel free to provide your input. If that helps, please accept the solution. Thanks
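A minimal sketch of that replacement logic in plain Python (outside NiFi's scripting API; the attribute names follow the PutSQL convention, everything else is hypothetical and the quoting is naive, for debugging only):

```python
import re

def inline_sql_args(statement, attributes):
    """Replace each '?' placeholder with the matching sql.args.N.value attribute.

    The i-th placeholder (0-based) maps to attribute 'sql.args.{i+1}.value',
    which is how PutSQL numbers its arguments.
    """
    counter = {"i": 0}

    def substitute(_match):
        counter["i"] += 1
        value = attributes.get(f"sql.args.{counter['i']}.value", "NULL")
        return f"'{value}'"  # naive quoting, for logging/inspection only

    return re.sub(r"\?", substitute, statement)
```

For example, `inline_sql_args("INSERT INTO t (a, b) VALUES (?, ?)", {"sql.args.1.value": "x", "sql.args.2.value": "y"})` produces `INSERT INTO t (a, b) VALUES ('x', 'y')`.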
05-02-2023
09:48 AM
2 Kudos
Hi,

This one was a little trickier than the first post, but it seems there is nothing you can't do with Jolt 🙂 Please try the following spec:

```json
[
  {
    // Combine all resourceRelationshipCharacteristic entries under one group,
    // giving each element a unique key based on its index positions: starting
    // with the outer array under resourceRelationship (&3) and ending with the
    // nested resourceRelationshipCharacteristic array (&1), so each element
    // gets a unique name: 00, 01, 10, 11, ...
    "operation": "shift",
    "spec": {
      "resource": {
        "resourceRelationship": {
          "*": {
            "resourceRelationshipCharacteristic": {
              "*": {
                "@(6,header.action)": "&3&1.action",
                "@(6,header.timeStamp)": "&3&1.timeStamp",
                "@(2,relationDrniId)": "&3&1.relationDrniId",
                "*": "&3&1.&"
              }
            }
          }
        }
      }
    }
  },
  {
    // Bucket each element (00, 01, 10, 11) into a new entry of the records array
    "operation": "shift",
    "spec": {
      "*": "records.[#1]"
    }
  }
]
```

Hope that helps. I wonder if there is a better/cleaner way @araujo @cotopaul @steven-matison
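Roughly, the two shift operations amount to the following plain-Python sketch (under an assumed input shape; this is an illustration of the idea, not a guaranteed Jolt equivalent for every edge case):

```python
def flatten_relationships(doc):
    # One record per resourceRelationshipCharacteristic entry, carrying down
    # the header fields and the parent relationship's relationDrniId, then
    # collected into a flat "records" array -- the end shape the spec produces.
    header = doc.get("header", {})
    records = []
    for rel in doc["resource"]["resourceRelationship"]:
        for ch in rel.get("resourceRelationshipCharacteristic", []):
            rec = dict(ch)
            rec["action"] = header.get("action")
            rec["timeStamp"] = header.get("timeStamp")
            rec["relationDrniId"] = rel.get("relationDrniId")
            records.append(rec)
    return {"records": records}
```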
05-01-2023
06:30 AM
Hi,

Can you provide the expected JSON output? The JSON you provided is complex and nested, and I'm not sure what exactly you are expecting.
04-26-2023
10:58 AM
1 Kudo
Hi,

Try the following spec:

```json
[
  {
    // Remove the @ character from the @type property, because I could not
    // figure out how to match it in the second shift using the escape character '\'
    "operation": "shift",
    "spec": {
      "header": "&",
      "resource": {
        "\\@type": "resource.type",
        "*": "resource.&"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "resource": {
        "resourceCharacteristic": {
          "*": {
            "name": "records[&1].propname",
            "*": "records[&1].&",
            "@(2,name)": "records[&1].name",
            "@(2,type)": "records[&1].type",
            "@(2,subtype)": "records[&1].subtype",
            "@(2,drniId)": "records[&1].drniId",
            "@(3,header.activityId)": "records[&1].activityId",
            "@(3,header.timeStamp)": "records[&1].timeStamp"
          }
        }
      }
    }
  },
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "records": {
        "*": {
          "value": "=join(',', @(1,value))"
        }
      }
    }
  }
]
```

If that helps, please accept the solution. Thanks
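The last operation's =join is just a comma join over an array value; in plain Python the same step looks like this (the helper name is mine):

```python
def join_value(record):
    # Mirrors "=join(',', @(1,value))": if value is a list, collapse it into a
    # single comma-separated string; scalar values pass through unchanged.
    v = record.get("value")
    if isinstance(v, list):
        record["value"] = ",".join(str(x) for x in v)
    return record
```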
04-17-2023
01:18 PM
Hi,

Did you try the following?

${latitude:toDecimal():math("toRadians")}

or

${latitude:math("toRadians")}
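The math("toRadians") call delegates to java.lang.Math.toRadians; for reference, the same conversion in Python:

```python
import math

def to_radians(degrees):
    # Degrees to radians, as java.lang.Math.toRadians computes it: d * pi / 180
    return degrees * math.pi / 180.0
```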
04-17-2023
09:21 AM
Try the spec below. Again, I'm not sure this is the most efficient way; you might need to rethink your strategy if you are dealing with a lot of data:

```json
[
  {
    // Package all test* fields into an "outer" object
    "operation": "shift",
    "spec": {
      "test*": "outer.&",
      "Details": "Details"
    }
  },
  {
    // Insert the outer object into each Details element 0, 1, ...
    "operation": "shift",
    "spec": {
      "Details": {
        "*": {
          "*": "Details.&1.&",
          "@(3,outer)": "Details.&1.outer"
        }
      }
    }
  },
  {
    // Flatten the outer object (test1, test2, ...) into each Details element
    "operation": "shift",
    "spec": {
      "Details": {
        "*": {
          "*": "Details.&1.&",
          "outer": {
            "*": {
              "@": "Details.&3.&"
            }
          }
        }
      }
    }
  },
  {
    // Package each Details entry as a separate element of an array
    "operation": "shift",
    "spec": {
      "Details": {
        "*": "[#1].Details.&"
      }
    }
  }
]
```
04-16-2023
11:49 AM
I'm not sure what you specified makes sense, because you would have two keys with the same name "Details" but different values. I assume what you want is this:

```json
{
  "Details": {
    "0": {
      "test1": "test output",
      "test2": "test output",
      "id": "first",
      "name": "the first one"
    },
    "1": {
      "test1": "test output",
      "test2": "test output",
      "id": "second",
      "name": "the second one"
    }
  }
}
```

In this case the spec would look like this:

```json
[
  {
    // Package all test* fields into an "outer" object
    "operation": "shift",
    "spec": {
      "test*": "outer.&",
      "Details": "Details"
    }
  },
  {
    // Insert the outer object into each Details element 0, 1, ...
    "operation": "shift",
    "spec": {
      "Details": {
        "*": {
          "*": "Details.&1.&",
          "@(3,outer)": "Details.&1.outer"
        }
      }
    }
  },
  {
    // Flatten the outer object (test1, test2, ...) into each Details element
    "operation": "shift",
    "spec": {
      "Details": {
        "*": {
          "*": "Details.&1.&",
          "outer": {
            "*": {
              "@": "Details.&3.&"
            }
          }
        }
      }
    }
  }
]
```

Not sure if this is the best way; if someone knows a better one, please provide your suggestion. If that answers your question, please accept the solution. Thanks
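In plain Python the assumed transformation is just merging the top-level test* fields into each Details entry, keyed by its index (a sketch assuming Details is a list of objects, not a Jolt equivalent):

```python
def merge_details(doc):
    # Collect the top-level test* fields, then copy them into every
    # Details element, keyed by the element's index as a string.
    outer = {k: v for k, v in doc.items() if k.startswith("test")}
    return {"Details": {str(i): {**outer, **d}
                        for i, d in enumerate(doc["Details"])}}
```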
04-16-2023
08:19 AM
Hi,

You have not specified the desired output, but would the following give you what you are looking for?

```json
[
  {
    "operation": "shift",
    "spec": {
      "Details": {
        "*": {
          "*": "Details[#2].&"
        }
      },
      "*": "outer[].&"
    }
  },
  {
    "operation": "shift",
    "spec": {
      "Details": "Details",
      "outer": {
        "*": {
          "*": "Details[&1].&"
        }
      }
    }
  }
]
```

If that helps, please accept the solution. Thanks