Member since: 11-16-2015
Posts: 905
Kudos Received: 666
Solutions: 249
05-08-2018
01:07 PM
4 Kudos
Do you mean MergeContent rather than UpdateAttribute? The former merges incoming flow files' content into outgoing flow file(s); the latter just adds, deletes, or changes metadata (attributes) about the flow files. If you mean MergeContent, try setting the Demarcator property to the newline character (\n); that should separate the incoming messages with a newline.
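For reference, a minimal MergeContent configuration along those lines might look like this (property names as in the standard processor; the numeric values are just placeholders to tune for your flow):

  Merge Strategy: Bin-Packing Algorithm
  Merge Format: Binary Concatenation
  Delimiter Strategy: Text
  Demarcator: a literal newline (press Shift+Enter in the property's value editor)
  Minimum Number of Entries: 100
  Max Bin Age: 30 sec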
05-05-2018
01:42 AM
Unfortunately, at the time of this answer, that field is not being populated by the framework and thus doesn't show up in the output. I have written NIFI-5155 to cover this improvement. Please feel free to comment on the Jira case as to whether you'd like to see the IP, the hostname, or both. Thanks in advance!
05-04-2018
01:52 PM
I use the Advanced UI in the JoltTransformJSON processor or this webapp to test out specs. There are also a bunch of examples and documentation in the Jolt javadoc, but it can be a bit difficult to follow. You can also search the jolt tag on StackOverflow for a number of questions, answers, and examples.
05-03-2018
09:33 PM
There's a bulletinNodeAddress field; it's probably an IP rather than a hostname (I didn't check). Would that work?
04-30-2018
06:24 PM
2 Kudos
I wrote up a quick Chain spec you can use in a JoltTransformJSON processor; that way you can skip the Split/Merge pattern and work on the entire JSON object at once:

[
  {
    "operation": "shift",
    "spec": {
      "Objects": {
        "*": {
          "Item": {
            "Inventory": {
              "Elements": {
                "Element": {
                  "*": {
                    "Height": "[&1].Height",
                    "Weight": "[&1].Weight",
                    "Features": {
                      "Feature": {
                        "*": "[&3].&"
                      }
                    }
                  }
                }
              }
            },
            "Status": {
              "ElementsStatus": {
                "ElementStatus": {
                  "*": {
                    "@(3,Id)": "[&1].Id",
                    "Status": "[&1].Status"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
]

Note that this assumes the Element and ElementStatus arrays are parallel, meaning the first object in the Element array corresponds to the first object in the ElementStatus array (i.e. their FeatureId fields match). If that is not true, you'd either need a more complicated JOLT spec or perhaps a scripted solution using ExecuteScript.
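If you do end up needing the scripted route, here is a rough sketch of what that might look like with Jython in ExecuteScript. The element names (Objects, Item, Inventory, Elements, Element, Status, ElementsStatus, ElementStatus) and the FeatureId join key are assumptions based on the discussion above, so treat this as a starting point rather than a drop-in solution:

import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class JoinStatuses(StreamCallback):
    # Join each Element to its ElementStatus by a shared FeatureId field (assumed key name)
    def process(self, inputStream, outputStream):
        doc = json.loads(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
        out = []
        for obj in doc['Objects']:
            item = obj['Item']
            elements = item['Inventory']['Elements']['Element']
            statuses = item['Status']['ElementsStatus']['ElementStatus']
            statusById = dict((s['FeatureId'], s) for s in statuses)
            for e in elements:
                merged = dict(e)                               # copy Height, Weight, Features, etc.
                match = statusById.get(e.get('FeatureId'), {})
                merged['Status'] = match.get('Status')         # attach the matching status (if any)
                out.append(merged)
        outputStream.write(bytearray(json.dumps(out, indent=2).encode('utf-8')))

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, JoinStatuses())
    session.transfer(flowFile, REL_SUCCESS)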
04-26-2018
08:02 PM
Your schema says that null values are allowed. If you don't want to allow nulls for particular fields, try a ValidateRecord processor using a schema that does not allow null values for the desired fields. I can't remember whether the "non-null" schema would be set on the Reader or Writer for ValidateRecord, but I believe it is the Reader. In that case, use the current schema (that allows nulls) for the Writer so the valid and invalid records can be output from the processor. Then you can send the "valid" relationship to the Elasticsearch processor, and handle the flowfiles/records on the "invalid" relationship however you choose.
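As an illustration (the field name here is made up, not taken from your schema), the difference in an Avro schema is whether the field's type is a union with "null":

  { "name": "price", "type": ["null", "double"] }   <- null allowed
  { "name": "price", "type": "double" }             <- null not allowed; ValidateRecord would route records with a null price to "invalid"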
04-26-2018
02:58 PM
You only need one session per execution of the script. Using that session, you can get, create, remove, and transfer as many flow files as you want. If you get or create a flow file from the session, then you must transfer or remove it before the end of the script, or else you will get a "Transfer relationship not specified" error. Also, you can only transfer each flow file once; if you attempt to transfer the same flow file more than once, you will get the error you describe above.
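A minimal sketch of that pattern (Jython in ExecuteScript; session and REL_SUCCESS are provided to the script by the processor):

flowFile = session.get()
if flowFile is not None:
    # ...inspect or modify flowFile here...
    session.transfer(flowFile, REL_SUCCESS)    # transfer it exactly once

newFlowFile = session.create()                 # anything you create must also be
session.transfer(newFlowFile, REL_SUCCESS)     # transferred (or removed) before the script ends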
04-26-2018
02:46 PM
I can't reproduce this. I used GenerateFlowFile with your input XML (adding two Transactions) -> SplitXML (level 1) and got the same "sub-XML" you did; then I used the same settings for EvaluateXPath, and my content attribute has the correct value of 1. The only way I got it to show "Empty string set" was when I used /Transaction/@type as the XPath (note the wrong case for Type/type). Is it possible there's a typo or case-sensitivity issue between your input XML and the XPath?
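For illustration (with a made-up attribute value), attribute names in XPath are case-sensitive:

  <Transaction Type="1">...</Transaction>
  /Transaction/@Type  ->  1
  /Transaction/@type  ->  no match, which EvaluateXPath reports as "Empty string set"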
04-25-2018
07:32 PM
I think it was an error in the blog software; it seems to be fixed now?
04-24-2018
01:06 PM
1 Kudo
PutDatabaseRecord allows you to put multiple records from one flow file into a database at once, without requiring you to convert the data to SQL first (you can use PutSQL for that approach, but it is less efficient). In your case you just need GetFile -> PutDatabaseRecord. Your CSVReader will have the schema for the data, which tells PutDatabaseRecord the types of the fields; it uses that to bind the fields appropriately into the prepared statement and execute the whole flow file as a single batch.
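For example (column and table names are made up), if the CSV looks like:

  id,name,price
  1,widget,9.99

then the CSVReader's schema might be:

  {
    "type": "record",
    "name": "products",
    "fields": [
      { "name": "id",    "type": "int" },
      { "name": "name",  "type": "string" },
      { "name": "price", "type": "double" }
    ]
  }

and PutDatabaseRecord (with Statement Type set to INSERT and Table Name set to products) will bind id as an integer, name as a string, and price as a double into the generated INSERT statement for each record.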