Member since
08-10-2022
27
Posts
1
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2198 | 03-16-2023 02:57 AM
 | 11923 | 08-29-2022 04:50 AM
07-12-2024
05:03 AM
1 Kudo
What if I want to pass attributes instead of flowfile content?
06-20-2023
06:29 AM
Some progress: I've modified the transformation like this: [{
"operation": "shift",
"spec": {
"Orders": {
"*": {
"Headers": {
"*": "header.&"
},
"Locations": {
"Shipper": {
"*": "sender.&"
},
"Consignee": {
"*": "consignee.&"
},
"Unload": {
"*": "unload.&"
}
},
"Goods": {
"*": {
"GoodsDetails": {
"*": {
"@(2,GoodsTypeName)": "rows.GoodsTypeName",
"@(4,Locations.Consignee.ConsigneeName)": "rows.ConsigneeName",
"@(4,Locations.Consignee.IdPlanToLocCUS)": "rows.IdPlanToLocCUS",
"@(4,Locations.Consignee.ConsigneeAddress)": "rows.ConsigneeAddress",
"@(4,Locations.Consignee.ConsigneeCountry)": "rows.ConsigneeCountry",
"@(4,Locations.Consignee.ConsigneeEmail)": "rows.ConsigneeEmail",
"@(4,Locations.Consignee.ConsigneeNotes)": "rows.ConsigneeNotes",
"@(4,Locations.Consignee.ConsigneeReference)": "rows.ConsigneeReference",
"@(4,Locations.Consignee.ConsigneeRegion)": "rows.ConsigneeRegion",
"@(4,Locations.Consignee.ConsigneeTel)": "rows.ConsigneeTel",
"@(4,Locations.Consignee.ConsigneeTown)": "rows.ConsigneeTown",
"@(4,Locations.Consignee.ConsigneeZipCode)": "rows.ConsigneeZipCode",
"Packs": "rows.Packs",
"NetWeight": "rows.NetWeight",
"GrossWeight": "rows.GrossWeight",
"Cube": "rows.Cube",
"Meters": "rows.Meters",
"CodiceUnivocoCollo_1": "rows.CodiceUnivocoCollo_1",
"CodiceMaster": "rows.CodiceMaster",
"ItemCode": "rows.ItemCode",
"Seats": "rows.Seats",
"Height": "rows.Height",
"Width": "rows.Width",
"Depth": "rows.Depth",
"Note": "rows.Note",
"@(4,Locations.Unload.UnloadName)": "rows.UnloadName",
"@(4,Locations.Unload.IdShipToLocCUS)": "rows.IdShipToLocCUS",
"@(4,Locations.Unload.UnloadAddress)": "rows.UnloadAddress",
"@(4,Locations.Unload.UnloadCalendarNote)": "rows.UnloadCalendarNote",
"@(4,Locations.Unload.UnloadCountry)": "rows.UnloadCountry",
"@(4,Locations.Unload.UnloadDate)": "rows.UnloadDate",
"@(4,Locations.Unload.UnloadEmail)": "rows.UnloadEmail",
"@(4,Locations.Unload.UnloadNotes)": "rows.UnloadNotes",
"@(4,Locations.Unload.UnloadReference)": "rows.UnloadReference",
"@(4,Locations.Unload.UnloadRegion)": "rows.UnloadRegion",
"@(4,Locations.Unload.UnloadTel)": "rows.UnloadTel",
"@(4,Locations.Unload.UnloadTime)": "rows.UnloadTime",
"@(4,Locations.Unload.UnloadTown)": "rows.UnloadTown",
"@(4,Locations.Unload.UnloadZipCode)": "rows.UnloadZipCode",
"@(4,Locations.Unload.ObbligatoryUnloadDate)": "rows.ObbligatoryUnloadDate",
"@(4,References)": {
"*": {
"TypeReference": {
"OC": {
"@(2,ValueReference)": "rows.info1[]"
},
"CM": {
"@(2,ValueReference)": "rows.info7[]"
}
}
}
}
}
}
}
}
}
}
}
}, {
"operation": "cardinality",
"spec": {
"header": {
"*": "ONE"
},
"sender": {
"*": "ONE"
},
"consignee": {
"*": "ONE"
},
"unload": {
"*": "ONE"
}
}
}, {
"operation": "shift",
"spec": {
"header": {
"*": "header.&"
},
"sender": {
"*": "sender.&"
},
"consignee": {
"*": "consignee.&"
},
"unload": {
"*": "unload.&"
},
"rows": {
"*" : {
"*" : {
"@": "rows[&1].&2"
}
}
}
}
}] Everything works as expected when I have multiple elements in GoodsDetails, like this: "GoodsDetails": [
{
"Packs": 1,
"NetWeight": 3.800000,
"GrossWeight": 4.800000,
"Cube": 0.693000,
"Meters": 0.000000,
"Note": "good note"
},
{
"Packs": 1,
"NetWeight": 3.800000,
"GrossWeight": 4.800000,
"Cube": 0.693000,
"Meters": 0.000000,
"Note": "good note"
}
] However, when there is only one element, like this: {
"Packs": 1,
"NetWeight": 3.800000,
"GrossWeight": 4.800000,
"Cube": 0.693000,
"Meters": 0.000000,
"Note": "good note"
} The result is really bad: "rows": [{
"ConsigneeZipCode": null,
"UnloadZipCode": null,
"info7": "230500003530",
"info1": "V6157-0360",
"Seats": null
}, {
"Packs": null
}] It seems that when there is only one element, the first shift transformation does not treat the resulting rows [ {}, {} ] as an array... but as something like: row { name1: value1, name1: value2, ... }. Any suggestions are really appreciated.
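For reference, JOLT's cardinality operation can coerce a node that is sometimes a single object and sometimes an array into a list. A sketch of what could be prepended to the chain above (untested against the full input; the path names are taken from the spec shown):

```json
[{
  "operation": "cardinality",
  "spec": {
    "Orders": {
      "*": {
        "Goods": {
          "*": {
            "GoodsDetails": "MANY"
          }
        }
      }
    }
  }
}]
```

With "MANY", a lone GoodsDetails object is wrapped in a one-element array, so the subsequent shift sees the same structure in both cases.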
06-20-2023
12:56 AM
Yes, correct... the file is read in the first fragment, updated, and then saved into a local folder... and so on for every fragment. I want to be sure that every update is saved without losing data.
06-16-2023
11:26 AM
You have to create a separate CASE statement for each column you are trying to update, similar to what is done for the shipment number.
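As a sketch of the pattern (the table and column names here are hypothetical, not from the original thread), one CASE expression per updated column might look like:

```sql
-- Hypothetical table and column names, for illustration only:
-- each column gets its own CASE expression, keyed on the same condition
-- used for the shipment number.
UPDATE shipments
SET status  = CASE WHEN shipment_number = 'S-100' THEN 'DELIVERED' ELSE status  END,
    carrier = CASE WHEN shipment_number = 'S-100' THEN 'ACME'      ELSE carrier END
WHERE shipment_number = 'S-100';
```

The ELSE branch preserves the existing value, so columns that don't match the condition are left unchanged.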
05-26-2023
07:17 AM
Hi, I think you can achieve this in two shift transformations as follows: [
{
// 1st transformation is basically to isolate
// "OC" value reference into Orders.ValueReference
"operation": "shift",
"spec": {
"Orders": {
"*": {
"Headers": "Orders[#2].&",
"Goods": "Orders[#2].&",
"References": {
"*": {
"TypeReference": {
"OC": {
"@(2,ValueReference)": "Orders[#2].ValueReference"
}
}
}
}
}
}
}
},
// 2nd transformation is the same as yours, except for
//fetching the isolated ValueReference above
//into its own Array based on the GoodsDetails array
{
"operation": "shift",
"spec": {
"Orders": {
"*": {
"Headers": "header",
"Goods": {
"*": {
"GoodsDetails": {
"*": {
"@(2,GoodsTypeName)": "rows.GoodsTypeName",
"Packs": "rows.Packs",
"@(4,ValueReference)": "rows.ValueReference"
}
}
}
}
}
}
}
}
] If this helps, please accept the solution. Thanks
04-20-2023
12:02 AM
Hi @Ray82, Assuming you have a way to identify where you need to add your ";", you can easily use NiFi's Expression Language to do it. More details are available here: https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html. At first sight, if you can identify a pattern, you can use any of the replace functions to swap your whitespace for a semicolon. To add a semicolon between your characters, you can use a regex based on your pattern. In terms of processors, you have ReplaceText, UpdateAttribute, UpdateRecord, etc., so you have plenty to choose from 🙂
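As a sketch (the attribute name myAttribute is hypothetical), the Expression Language replaceAll function can turn each whitespace character into a semicolon:

```
${myAttribute:replaceAll('\s', ';')}
```

The same expression works as a property value in UpdateAttribute; for flowfile content, ReplaceText's Search Value / Replacement Value fields accept the equivalent regex directly.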
03-16-2023
02:57 AM
This should work; in the past I've tried the same command with ExecuteProcess, but it didn't accept upstream connections. Your solution is working great! Many thanks for your support.
10-20-2022
08:56 AM
Hey, Generally JOLT tries to maintain the input order. The only way I can think of is a hard-coded approach. Add this after the default spec: {
"operation": "shift",
"spec": {
"Order": {
"*": {
"top-level": {
"ProcessType": "[&2].ProcessType",
"FreightTerm": "[&2].FreightTerm",
"OrderNumber2": "[&2].OrderNumber2",
"TransmissionCommand": "[&2].TransmissionCommand",
"SequenceNumber": "[&2].SequenceNumber",
"SenderAbbreviation": "[&2].SenderAbbreviation"
},
"CustomerAddress": "[&1].CustomerAddress",
"ShipmentAddress": "[&1].ShipmentAdress",
"Volume": "[&1].Volume",
"Weight": "[&1].Weight"
}
}
}
} Also, are you trying to "duplicate" the value of "OrderNumber2"? The other solution I can think of to "replace" JOLT is using FasterXML/Jackson.
10-12-2022
07:59 AM
Hi guys, hope someone can explain this strange behavior. I've created 2 different parameter contexts, let's say: Staging with #{db_host} = 192.168.1.2, and Production with #{db_host} = 192.168.1.1. This #{db_host} is placed into a DBCPConnectionPool in order to switch from Staging to Production easily. The problem is that when I change from Staging to Production in the Process Group general tab, it still keeps the value of Staging instead of Production. Is any additional step required? Is there any cache to delete, or anything else relevant? Many thanks
Labels:
- Apache NiFi