Member since: 11-16-2015
Posts: 905
Kudos Received: 665
Solutions: 249
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 424 | 09-30-2025 05:23 AM |
| | 754 | 06-26-2025 01:21 PM |
| | 640 | 06-19-2025 02:48 PM |
| | 841 | 05-30-2025 01:53 PM |
| | 11352 | 02-22-2024 12:38 PM |
09-29-2021
09:06 AM
@mburgess I used your first suggestion and it worked like a charm, with just one exception: the header row was index 1. I'm not sure if it was just me, my data, or some property/attribute I set wrong. Just thought you should know. So, after modifying the user-defined attribute value to ${fragment.index:gt(1)}, it worked. And, in case you ask, the header row is the first row in the CSV file, which doesn't make sense unless the processor logic changed to 1-based indexing instead of 0-based indexing. Also, thanks for all of your blog posts. I use your suggestions a lot.
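For anyone reconstructing this, a minimal sketch of the routing setup implied above (assuming RouteOnAttribute, which matches the user-defined property being described; the property name "data" is an illustrative choice, not from the thread):

RouteOnAttribute
  Routing Strategy: Route to Property name
  data: ${fragment.index:gt(1)}

Splits whose fragment.index is greater than 1 match the expression and go to the "data" relationship, so the header row (index 1) falls through to unmatched.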
08-30-2021
07:52 PM
Hi @Sbofa Yes, you are right. Based on the kind, it decides which kind of Spark shell needs to be started.
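If this refers to Apache Livy's session API (an assumption; the service is not named in this view of the thread), the kind field in the create-session request is what selects the shell, for example:

POST /sessions
{
  "kind": "pyspark"
}

Here "spark", "pyspark", and "sparkr" start the Scala, Python, and R shells respectively.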
07-22-2021
03:08 PM
Cool tool. I updated it to work with 1.12.1: https://gist.github.com/tpanagos/fb8ca4afb16b00862429ffc87dc65348
05-28-2021
03:07 PM
Hi @jerry_pylarinos and @mburgess I am seeing weird behavior in the same scenario, but the only difference is that my spec has Expression Language inside the Jolt spec, and it is never resolved after being pulled from the cache and passed as an attribute to the Jolt transform processor. Is that a bug? For example, if your jspec.json has

[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "id": "${UUID()}"
    }
  }
]

and your generated flow file has

{
  "id" : "anyname"
}

your result looks like this:

{
  "id": "${UUID()}"
}

How do you evaluate Expression Language here?
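For contrast, the output being expected here would have the expression replaced by a generated value, something like this (the UUID below is only a placeholder):

{
  "id": "5f1c2a9e-8d3b-4c7a-9e21-0b6d4f8a1c3e"
}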
05-11-2021
05:09 AM
1 Kudo
Thanks @VidyaSargur - I just started a new thread, per your suggestion.
03-24-2021
03:23 PM
1 Kudo
What are the column names in your table? Assuming "carId" and "carType", you can use JoltTransformJSON or JoltTransformRecord with the following spec:

[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "$": "carId",
        "@": "carType"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "carId": {
        "*": {
          "@": "[&0].carId"
        }
      },
      "carType": {
        "*": {
          "@": "[&0].carType"
        }
      }
    }
  }
]
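As an illustration of what that spec produces (the input shape here is an assumption, since the original question is not quoted in this view of the thread), an input of

{
  "101": "sedan",
  "102": "coupe"
}

comes out as

[
  {
    "carId": "101",
    "carType": "sedan"
  },
  {
    "carId": "102",
    "carType": "coupe"
  }
]

i.e. an array of objects that a record reader can treat as one row per element.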
03-02-2021
08:55 AM
How can I perform the same for the very first occurrence of [ and the last occurrence of ]?
02-14-2021
12:23 AM
Hi @mburgess Can you please elaborate on which property needs to be configured in the GrokReader controller service to use the kv filter? I'm trying to parse incoming key=value pairs.

Input: key1=value1,key2=value2,key3=value3,key4=value4

Output: I need key1, key2, key3, and key4 as attributes, and their respective values as the attribute values.

I can use %{GREEDYDATA:msgbody} in the Grok Expression property, but I do not know where to provide kv { source = "msgbody" }. Your help is appreciated.
01-29-2021
04:54 PM
Is there anything in the logs before/after the "already marked for transfer" entry? Trying to figure out how a flow file can get transferred and then something goes wrong (where we'd try to also send it to failure)
10-22-2020
04:18 PM
@mburgess Helpful information shared. I am using NiFi 1.7.1. In my case, incremental fetching does not seem to work correctly. All records get ingested from the database but do not make it all the way to the destination. The processors used are GenerateTableFetch, then ExecuteSQL, and the other corresponding processors down the data processing flow. The record id is captured correctly in the GenerateTableFetch processor's state and is up to date with the record id from the source (db). However, it still misses some records when processing the files, making the number of records at the destination out of sync with the source db. Am I missing something? Would scheduling the fetch times help, and how can I do that?
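For reference, a minimal sketch of the kind of GenerateTableFetch configuration being described (the table and column names are illustrative assumptions, not taken from the post):

GenerateTableFetch
  Database Connection Pooling Service: <your DBCPConnectionPool>
  Table Name: orders
  Maximum-value Columns: record_id
  Partition Size: 10000

Each flow file that GenerateTableFetch emits carries a SQL query for one partition of new rows, ExecuteSQL downstream runs it, and the processor's state stores the highest record_id seen so far, so the next run only fetches newer rows.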