Member since: 07-29-2020
Posts: 574
Kudos Received: 323
Solutions: 176
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3556 | 12-20-2024 05:49 AM |
| | 3812 | 12-19-2024 08:33 PM |
| | 3602 | 12-19-2024 06:48 AM |
| | 2345 | 12-17-2024 12:56 PM |
| | 3091 | 12-16-2024 04:38 AM |
05-17-2023
11:31 AM
In addition to the above logic: if the input payload comes in as below, with the number and the date combined in a single field inside the array:

Input:

```json
{
  "FatturaElettronicaBody": {
    "DatiGenerali": {
      "DatiGeneraliDocumento": {
        "TipoDocumento": "TD01",
        "Numero": "126587",
        "Data": "16.05.2023",
        "Divisa": "EUR",
        "ImportoTotaleDocumento": "7011.10"
      },
      "DatiDDT": [
        { "NumeroDDT": "126681-15.05.2023" },
        { "NumeroDDT": "12680-10.05.2023" }
      ]
    }
  }
}
```

I used the spec below:

```json
[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "FatturaElettronicaBody": {
        "DatiGenerali": {
          "DatiDDT": {
            "*": {
              "b_NumeroDDT": "=split('[-.]',@(1,NumeroDDT))",
              "NumeroDDT": "@(1,b_NumeroDDT[0])",
              "DataDDT": "=concat(@(1,b_NumeroDDT[3]),'-',@(1,b_NumeroDDT[2]),'-',@(1,b_NumeroDDT[1]))"
            }
          }
        }
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "FatturaElettronicaBody": {
        "DatiGenerali": {
          "DatiGeneraliDocumento": "FatturaElettronicaBody.DatiGenerali.&",
          "DatiDDT": {
            "*": {
              "NumeroDDT": "FatturaElettronicaBody.DatiGenerali.DatiDDT[&1].NumeroDDT",
              "DataDDT": "FatturaElettronicaBody.DatiGenerali.DatiDDT[&1].Date"
            }
          }
        }
      }
    }
  }
]
```

Output:

```json
{
  "FatturaElettronicaBody": {
    "DatiGenerali": {
      "DatiGeneraliDocumento": {
        "TipoDocumento": "TD01",
        "Numero": "126587",
        "Data": "16.05.2023",
        "Divisa": "EUR",
        "ImportoTotaleDocumento": "7011.10"
      },
      "DatiDDT": [
        { "NumeroDDT": "126681", "Date": "2023-05-15" },
        { "NumeroDDT": "12680", "Date": "2023-05-10" }
      ]
    }
  }
}
```
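To make the split step concrete, here is the intermediate state of the first array element after the modify-overwrite-beta operation (this illustration is mine, not part of the original post). Since `split('[-.]', ...)` uses a regex character class matching either `-` or `.`, the value `"126681-15.05.2023"` becomes a four-element array, which `concat` then reassembles in ISO `yyyy-MM-dd` order:

```json
{
  "NumeroDDT": "126681",
  "b_NumeroDDT": ["126681", "15", "05", "2023"],
  "DataDDT": "2023-05-15"
}
```

The second shift operation then discards the helper `b_NumeroDDT` field and renames `DataDDT` to `Date`.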
05-13-2023
03:41 AM
Thank you so much as well. This also works, but I am still a beginner with Jolt.
05-12-2023
04:35 AM
@sarithe, What is the format of the record you are trying to convert? What is the data type of the field you are trying to convert? And are you trying to modify the value of a record or the value of an attribute?

If you are trying to update values within each record, you should try an UpdateRecord processor, in which you define a Record Reader and a Record Writer. Then add a new property to the processor and define it like:

Property: `/your_column` (pay attention to the slash in front, as it is very important)

Value: `${field.value:multiply(1000):toNumber():toDate("yyyyMMdd", "GMT"):format("yyyyMMdd")}`

The value above is just an example, as I do not know what your data looks like or how you want it displayed. You can use NiFi's Expression Language to define the exact format you require, but make sure you use `field.value` if you want to modify the value within that specific column.
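For illustration only (the column name, values, and EL chain here are hypothetical, not from the post above): an UpdateRecord property of `/event_date` with a value of `${field.value:format("yyyy-MM-dd")}`, applied to a field holding epoch milliseconds, would transform a record like this:

```json
{ "id": 1, "event_date": 1683878400000 }
```

into:

```json
{ "id": 1, "event_date": "2023-05-12" }
```

The exact EL chain always depends on the source data type and the target format you want.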
05-11-2023
03:50 PM
I think you had it right, but you need to convert the division result into a string before applying split on it. Here are the steps:

```json
"decImporto": "=divide(@(1,Importo),@(3,Quantita))",
"strImporto": "=toString(@(1,decImporto))",
"array_importo": "=split('[.]',@(1,strImporto))",
"pad_importo": "=rightPad(@(1,array_importo[1]), 8, '0')",
"Importo": "=concat(@(1,array_importo[0]),'.',@(1,pad_importo))"
```
05-08-2023
09:47 AM
If what you provided is giving you the expected output, then you are good to go :).
05-08-2023
05:29 AM
I have used the spec below to move some of the resource characteristics to the top level, but I am getting null values.

```json
[
  {
    "operation": "shift",
    "spec": {
      "header": {
        "timeStamp": "records.inv_activity_ts",
        "activityId": "records.inv_activity_id",
        "action": "records.action"
      },
      "resource": {
        "drniId": "records.inv_id",
        "subtype": "records.inv_subtype",
        "name": "records.inv_name",
        "resourceCharacteristic": {
          "*": {
            "name": {
              "matchingkey": "records.matchingkey-value",
              "status": "records.status-value"
            },
            "value": {
              "matchingkey": "records.matchingkey-value",
              "status": "records.status-value"
            }
          }
        }
      }
    }
  }
]
```

The output I am getting:

```json
{
  "records": {
    "inv_activity_ts": "1670484663189",
    "inv_activity_id": "256388257993155783",
    "action": "create",
    "inv_id": "256383859946641699",
    "inv_subtype": "Backplane Connection",
    "inv_name": "Backplane Connection",
    "matchingkey-value": null,
    "status-value": null
  }
}
```

What is the mistake I am making?
05-05-2023
01:39 PM
2 Kudos
Hi, One of the main differences I can see is that QueryDatabaseTableRecord has a Record Writer, which allows you to decide the format of the output (JSON, XML, CSV, Parquet, etc.); a controller service needs to be set up for the Record Writer depending on the format. QueryDatabaseTable, by contrast, will only provide Avro-formatted output, without the need to set up any record writer service. This is similar to the case of the ExecuteSQL vs. ExecuteSQLRecord processors. Another important difference I see is that QueryDatabaseTable has a property setting for "Transaction Isolation Level" while the other doesn't. If that helps, please accept the solution. Thanks
05-04-2023
10:55 AM
Hi @SAMSAL, thanks for your response. You are right, changing to "Entire Text" worked. I was in a hurry and didn't try changing the evaluation mode; I assumed that "Always Replace" would do the work. Thank you
05-04-2023
12:33 AM
1 Kudo
@danielhg1285, While the solution provided by @SAMSAL seems to be better for you and more production-ready, you could also try the approach below. It might work if you are using a stable statement all the time and you are not restricted to seeing the exact INSERT statement, but rather the values it tried to insert.

- Shortly after RetryFlowFile, you can add an AttributesToJSON processor and manually define all the columns you want to insert in the Attributes List property. Make sure you use the attribute names from your FlowFile (sql.args.N.value) in the correct order, and set Destination = flowfile-content. This way, you will generate a JSON file with all the columns and all the values you tried to insert but failed (see the sketch after this list).
- After AttributesToJSON, you can keep your PutFile to save the file locally on your machine, so you can open it whenever and wherever you want 🙂

PS: This is maybe not the best solution, for the following reasons, but it will get you started:

- You will need to know how many columns you have to insert, and each time a new column is added you will have to modify your AttributesToJSON processor.
- You will not get the exact SQL INSERT/UPDATE statement, but a JSON file containing the column-value pairs, which can easily be analyzed by anybody.
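As a hypothetical illustration (the attribute values are invented): if a failed FlowFile carries the attributes `sql.args.1.value = 42` and `sql.args.2.value = John`, an AttributesToJSON processor configured with Attributes List = `sql.args.1.value,sql.args.2.value` and Destination = flowfile-content would rewrite the FlowFile content to something like:

```json
{
  "sql.args.1.value": "42",
  "sql.args.2.value": "John"
}
```

That file then lands on disk via PutFile, giving you a per-failure record of the values that could not be inserted.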