Member since: 01-02-2020
Posts: 40
Kudos Received: 3
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11189 | 12-23-2020 09:33 AM |
| | 2831 | 05-18-2020 01:27 AM |
| | 2419 | 04-28-2020 11:02 AM |
| | 6751 | 04-23-2020 12:20 PM |
| | 3091 | 01-25-2020 11:50 PM |
02-28-2021
07:16 PM
Hi, I have a JSON flowfile: [{"prediction":"Test2"},{"prediction":"Test2"}]. I want to extract the first element of the array, i.e. the value "Test2" from "prediction":"Test2". My destination is flowfile-attribute, using EvaluateJsonPath. I added a custom attribute mlresult ---> $.prediction[0] to extract "Test2" into mlresult, but I am getting an empty string for mlresult in the flowfile's attributes section. Flowfile: [{"prediction":"Test2"},{"prediction":"Test2"}] EvaluateJsonPath configuration: How do I get the single value Test2 into mlresult? Thanks --Murali
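For anyone hitting the same empty-string result, a minimal standard-library Python sketch of why $.prediction[0] matches nothing here: the root of this document is the array itself, so the index has to come before the key (in JsonPath terms, $[0].prediction rather than $.prediction[0]):

```python
import json

# The flowfile content: the ROOT of the document is a JSON array.
flowfile = '[{"prediction":"Test2"},{"prediction":"Test2"}]'
data = json.loads(flowfile)

# $.prediction[0] looks for a "prediction" key on the root object,
# but the root is a list, so nothing matches (hence the empty string).
# $[0].prediction indexes the array first, then reads the key:
mlresult = data[0]["prediction"]
print(mlresult)  # Test2
```
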
Labels:
- Apache NiFi
01-07-2021
09:29 AM
Hi All,
I have a scenario where I get a number of records from a CSV file. The task is to read the CSV, split each record into its own file, and name each file and its sheet after the value in the first column of that record (excluding the first row, which is the header).
1. What I have done: read the CSV file using GetFile,
2. then used the SplitText processor to split each record into one CSV file by setting the Header Line Count property to 1.
3. Then I need to extract the value in column 1 of each record (row 2, column 1 of the split file) and use it as the file name and sheet name for each individual file.
Original CSV file:
After the split there should be two files, one named ab123.csv and one named c35ks.csv, and the sheet name should also be changed.
ab123.csv:
| ID | Description | status |
|---|---|---|
| ab123 | Eldon Base for stackable storage shelf, platinum | |

c35ks.csv:
| ID | Description | status |
|---|---|---|
| c35ks | 1.7 Cubic Foot Compact "Cube" Office Refrigerators | |
How do I get the above outputs from the workflow?
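The split-and-rename logic described above can be sketched outside NiFi as well. A minimal Python sketch, with sample rows made up to match the example (the status values are assumed empty):

```python
import csv
import io

# Hypothetical sample matching the example above.
original = """ID,Description,status
ab123,"Eldon Base for stackable storage shelf, platinum",
c35ks,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators",
"""

reader = csv.reader(io.StringIO(original))
header = next(reader)  # first row is the header, excluded from the data rows

files = {}
for row in reader:
    if not row:
        continue
    name = row[0] + ".csv"      # file name from the 1st column of the record
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)     # each split file keeps the header row
    writer.writerow(row)
    files[name] = out.getvalue()

print(sorted(files))  # ['ab123.csv', 'c35ks.csv']
```

In NiFi itself, one common approach after SplitText is to pull the first column's value into an attribute (for example with ExtractText) and then use UpdateAttribute to set the flowfile's filename attribute from it; sheet names only come into play once the CSV is converted to a spreadsheet format.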
Labels:
- Apache NiFi
12-23-2020
09:33 AM
Hi Matt, great! With your suggestion I got what I was expecting. Thank you, --Murali
12-23-2020
05:29 AM
I have a scenario where a list of files comes from the previous processor, and for each file I have to create a JSON file from the flowfile's attributes. The AttributesToJSON processor configuration has an option to extract flowfile attributes into a JSON object; if Include Core Attributes is set to true, it also reads some of the file properties and forms the JSON. The output for this case in my scenario is …
{
  "fragment.size": "125",
  "file.group": "root",
  "file.lastModifiedTime": "2020-12-22T15:09:13+0000",
  "fragment.identifier": "ee5770ea-8406-400a-a2fd-2362bd706fe0",
  "fragment.index": "1",
  "file.creationTime": "2020-12-22T15:09:13+0000",
  "file.lastAccessTime": "2020-12-22T17:34:22+0000",
  "segment.original.filename": "Sample-Spreadsheet-10000-rows.csv",
  "file.owner": "root",
  "fragment.count": "2",
  "file.permissions": "rw-r--r--",
  "text.line.count": "1"
}
But the file has other properties, like absolute.path, filename, and uuid, which are missing from the JSON above. My requirement is to get absolute.path, filename, and uuid, concatenate absolute.path + "/" + filename, and assign that to a custom attribute, say filepath. I also need to add uuid to the JSON object, so my JSON file should look like { "uuid": "file uuid value", "filepath": "absolute.path + / + filename" }. Any inputs on how to get the above form of JSON file?
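The requirement above is just a transformation of the attribute map; a minimal Python sketch (the attribute names are from the post, the values here are placeholders):

```python
import json

# Hypothetical flowfile attributes (values are placeholders).
attributes = {
    "absolute.path": "/data/in",
    "filename": "Sample-Spreadsheet-10000-rows.csv",
    "uuid": "ee5770ea-8406-400a-a2fd-2362bd706fe0",
}

# Concatenate absolute.path + "/" + filename into a custom attribute,
# and carry uuid through into the JSON object.
result = {
    "uuid": attributes["uuid"],
    "filepath": attributes["absolute.path"] + "/" + attributes["filename"],
}

print(json.dumps(result))
```

Inside NiFi, one common route is an UpdateAttribute processor that sets a new attribute filepath to ${absolute.path}/${filename} using the Expression Language, followed by AttributesToJSON with its Attributes List property set to uuid,filepath so only those two keys land in the JSON.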
Labels:
- Apache NiFi
05-18-2020
01:27 AM
The issue was the missing square brackets at the beginning and end. The working query is: [{ "$group": { "_id": { "X": "$X", "Y_DT": "$Y_DT", "Z": "$Z" }, "adj": {"$sum": "$adj" }, "bjc": {"$sum": "$bjc" }, "jbc": {"$sum": "$jbc" }, "mnk": {"$sum": "$mnk"} } }]
05-14-2020
09:44 AM
Hi friends, please help me out of this situation.
05-13-2020
12:29 PM
Hi friends, I have a Mongo query which runs perfectly fine from the mongo shell: db.test650.aggregate( [ { $group: { "_id": { X: "$X", Y_DT: "$Y_DT", Z: "$Z" }, adj: {$sum: "$adj" }, bjc: {$sum: "$bjc" }, jbc: {$sum: "$jbc" }, mnk: {$sum: "$mnk"} } } ] ) When I run the same query from NiFi's RunMongoAggregation processor, it throws an error, even though I changed the aggregation query to a JSON-style query: { "$group": { "_id": { "X": "$X", "Y_DT": "$Y_DT", "Z": "$Z" }, "adj": {"$sum": "$adj" }, "bjc": {"$sum": "$bjc" }, "jbc": {"$sum": "$jbc" }, "mnk": {"$sum": "$mnk"} } } I am getting the following error: error running mongodb aggregation query: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of java.util.ArrayList out of START_OBJECT token NiFi workflow: Processor (RunMongoAggregation) configuration: What change do I need to make to the JSON query that is executed by the RunMongoAggregation processor?
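The exception itself points at the fix: the processor deserializes the query into a list (java.util.ArrayList), so the JSON it receives must be an array of pipeline stages, not a bare object. A minimal Python sketch of the difference between the two shapes:

```python
import json

stage = '{ "$group": { "_id": { "X": "$X" }, "adj": { "$sum": "$adj" } } }'

# A bare object deserializes to a dict/object; this is the START_OBJECT
# token that the ArrayList deserializer complains about.
as_object = json.loads(stage)
assert isinstance(as_object, dict)

# Wrapping the stage in [ ... ] makes it an array, i.e. a pipeline of
# stages, which is what an aggregation query is expected to be.
as_pipeline = json.loads("[" + stage + "]")
assert isinstance(as_pipeline, list)
print(type(as_pipeline).__name__)  # list
```
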
Labels:
- Apache NiFi
04-28-2020
11:02 AM
It did work after passing '\t' to read_csv as the second argument (the separator).
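For reference, a minimal sketch of reading tab-separated data (standard-library csv shown here; the pandas equivalent is pd.read_csv(path, sep='\t'), where sep is read_csv's second positional argument):

```python
import csv
import io

# Hypothetical tab-separated flowfile content.
tsv = "ID\tDescription\tstatus\nab123\tEldon Base\tactive\n"

# Without delimiter="\t" each line would come back as one long field;
# with it, the columns are recovered as written.
rows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))
print(rows[0])  # ['ID', 'Description', 'status']
print(rows[1])  # ['ab123', 'Eldon Base', 'active']
```
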
04-28-2020
07:47 AM
I have a scenario where I receive a file (a NiFi flowfile stream) as a CSV, then create a dataframe from it and dump it. But after creating the dataframe, the structure of the file gets disturbed: if I open the same flowfile on disk I can see a clear structure with columns separated by tabs, but with the Python dataframe I am not getting the same structure. If I got the same structure, I could perform row manipulation. Here is what I am doing:
1: Using the ExecuteSQL processor, I get a database record,
2: then I pass this record to the ConvertRecord processor to convert the Avro record to a tab-separated CSV file.
ConvertRecord RecordSetWriter settings...
The output of the flowfile is ...
3: Then I read the flowfile (from step 2) as Python data using ExecuteStreamCommand, because I want to perform some action on the database record; at this point my record structure gets changed in the dataframe.
Labels:
- Apache NiFi
04-23-2020
12:20 PM
Hi Faerballert, it really did work, thank you very much.