Member since: 11-16-2015
Posts: 902
Kudos Received: 664
Solutions: 249
03-07-2017
02:42 PM
1 Kudo
To put each of the ServiceCodes values into its own "row", you can use JoltTransformJSON with the following shift specification:

{
  "operation": "shift",
  "spec": {
    "ServiceCodes": {
      "*": {
        "@(2,Time)": "[&].Time",
        "@(2,Subscription)": "[&].Subscription",
        "@": "[&].ServiceCode"
      }
    }
  }
}

Given your input above, it will produce the following:

[ {
  "ServiceCode" : "SERVICE_CODE1",
  "Subscription" : "1234567",
  "Time" : "03/07/2017 11:45:46.365"
}, {
  "ServiceCode" : "SERVICE_CODE2",
  "Subscription" : "1234567",
  "Time" : "03/07/2017 11:45:46.365"
}, {
  "ServiceCode" : "SERVICE_CODE3",
  "Subscription" : "1234567",
  "Time" : "03/07/2017 11:45:46.365"
}, {
  "ServiceCode" : "SERVICE_CODE4",
  "Subscription" : "1234567",
  "Time" : "03/07/2017 11:45:46.365"
} ]

This might be able to go directly into ConvertJSONToSQL, but if it doesn't, you can use SplitJson with $[*] or $.* as the JSON Path expression, and it will split the array into one flow file per object. Then you should be able to transform each one to SQL.
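For reference, the output above implies an input shaped roughly like the following; the values are copied from the output shown, and the flat layout (Time and Subscription at the top level next to the ServiceCodes array) is an assumption based on the @(2,...) references in the spec:

{
  "Time": "03/07/2017 11:45:46.365",
  "Subscription": "1234567",
  "ServiceCodes": [ "SERVICE_CODE1", "SERVICE_CODE2", "SERVICE_CODE3", "SERVICE_CODE4" ]
}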
03-07-2017
02:06 PM
Your input is an array, but that specification works on a single JSON object, so try it on one of those objects at a time. When you run your flow, make sure SplitJson comes before the JoltTransformJSON processor so the array is divided into individual flow files, each containing a single JSON object. As I said above, if you need to process the entire array at once, you will need a different specification, and I couldn't create one that worked.
03-07-2017
03:54 AM
2 Kudos
If your ExecuteStreamCommand configuration outputs an integer and you would like it in an attribute, try setting the "Output Destination Attribute" property of ExecuteStreamCommand to the attribute name you'd like, and use the "original" relationship to transfer the flow file downstream. That will give you a flow file with the same content it arrived with, plus an attribute whose name is of your choosing and whose value is the output stream returned by the command you are executing (hopefully the same value you mention your command returns). If instead you want the exit code of the command, you will find it in the "execution.status" attribute of the outgoing flow file (see the ExecuteStreamCommand documentation).
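As a rough sketch of the configuration (the script path and attribute name here are hypothetical), the relevant ExecuteStreamCommand properties would be something like:

Command Path: /path/to/count_rows.sh      <- hypothetical script that prints an integer
Command Arguments: (none)
Output Destination Attribute: row.count   <- any attribute name you like

The flow file routed to "original" would then carry a "row.count" attribute holding whatever the script wrote to its output stream.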
03-06-2017
04:22 PM
1 Kudo
All Property Descriptors (required or optional) must have a Validator set explicitly; otherwise you will get the error you are seeing. It appears you are not looking to perform validation, but you still must set a validator, so on your optional properties add the following to the builder:

.addValidator(Validator.VALID)
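For illustration, a minimal optional property might look like this (the property name and description are hypothetical; the important part is the explicit Validator.VALID, which accepts any value):

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.components.Validator;

// Hypothetical optional property; Validator.VALID performs no real validation
// but satisfies the requirement that every descriptor has a validator.
public static final PropertyDescriptor MY_OPTIONAL_PROPERTY = new PropertyDescriptor.Builder()
        .name("My Optional Property")
        .description("Optional property shown for illustration only")
        .required(false)
        .addValidator(Validator.VALID)
        .build();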
03-06-2017
03:42 PM
What does your table look like? Is there a column that is guaranteed to be "strictly increasing" for each added/updated row? Sometimes this is the ID column (if using an autoincrementing integer that doesn't roll over), or perhaps a timestamp column such as "Last Updated". If you have no such column, then you will want to follow Bryan's advice on scheduling and start/stop.
03-06-2017
02:55 PM
2 Kudos
Are you trying to retain the structure of the JSON array and/or objects, and just rename the fields? If so, try the JoltTransformJSON processor. I am guessing you will eventually need to split the JSON array into individual objects in order to insert them into your database? If so, then try SplitJson first (with a JSON Path expression of $[*] or $.*) to get each JSON object into its own flow file. Then you can use JoltTransformJSON with the following Shift specification:

{
  "operation": "shift",
  "spec": {
    "X": "A",
    "Y": "B",
    "Z": "C",
    "W": "D"
  }
}

That should map your fields the way you have described (X=A, Y=B, Z=C, W=D). It may be possible to write a Shift or Chain specification that handles the mapping for your entire original array at once, but I wasn't able to get a working spec (using the Jolt demo app); perhaps someone else (@Yolanda M. Davis ?) can chime in on that.
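For example, given a single input object like this (values are made up):

{
  "X": "value1",
  "Y": "value2",
  "Z": "value3",
  "W": "value4"
}

the spec above would produce:

{
  "A": "value1",
  "B": "value2",
  "C": "value3",
  "D": "value4"
}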
03-06-2017
02:26 PM
This question has a "nifi-processor" tag; which NiFi processor are you using? Also, which processor(s) are you using to get the email messages? I suspect you should be able to use RouteOnAttribute or RouteOnContent to send emails with ZIP attachments to some other relationship, while those without attachments go directly to PutSolrContentStream (or whatever you're using to push data to Solr). Perhaps the branch with ZIP attachments can use processor(s) to remove the ZIP part of the attachment, retain the email message, and route back to the "main" branch to retry the "put".
03-03-2017
02:23 PM
2 Kudos
If you are trying to add an attribute to a flow file, you can use UpdateAttribute for that. If you are trying to add a property to a processor, then it depends on the processor whether it supports dynamic (or "User-Defined") properties. If a processor does not support dynamic properties, then when you try to add one, the processor will be deemed invalid.
03-02-2017
03:24 PM
Certainly! You can get files from HDFS using the GetHDFS processor or the ListHDFS -> FetchHDFS processors.
03-01-2017
06:24 PM
1 Kudo
QueryDatabaseTable is usually used for "incremental" fetching, meaning it will only grab "new" rows. This is based on the "Maximum-value Columns" property, which is usually set to an ID or timestamp column in the database. That is what allows the processor to grab only "new" rows: it keeps track of the maximum value it has seen so far for the column, and the next time it runs, it fetches only those rows whose value is greater than the last maximum it saw. If you are not setting a maximum-value column, then QueryDatabaseTable acts much like ExecuteSQL, in the sense that it will keep repeating the same query and thus return duplicate rows. So for your use case I recommend setting a maximum-value column for that processor. If there is no such column in the table, then you're really looking at more of a "run-once" scenario, which is not currently supported.
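As a rough sketch (the table, column, and service names here are hypothetical), the relevant QueryDatabaseTable properties would be something like:

Database Connection Pooling Service: MyDBCPService
Table Name: orders
Maximum-value Columns: last_updated

With that in place, the first run fetches all rows, and each subsequent run fetches only rows whose last_updated value is greater than the largest value the processor has seen so far (tracked in the processor's state).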