- Member since: 11-16-2015
- Posts: 905
- Kudos Received: 665
- Solutions: 249
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 442 | 09-30-2025 05:23 AM |
| | 777 | 06-26-2025 01:21 PM |
| | 675 | 06-19-2025 02:48 PM |
| | 863 | 05-30-2025 01:53 PM |
| | 11438 | 02-22-2024 12:38 PM |
03-03-2017 02:23 PM (2 Kudos)
If you are trying to add an attribute to a flow file, you can use UpdateAttribute for that. If you are trying to add a property to a processor, it depends on whether the processor supports dynamic (or "User-Defined") properties. If a processor does not support dynamic properties and you try to add one, the processor will be deemed invalid.
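For the flow file case, here is a minimal sketch of an UpdateAttribute configuration; each dynamic property you add becomes an attribute on the outgoing flow file (the attribute names and values below are hypothetical):

```
UpdateAttribute dynamic properties (hypothetical examples):
  my.environment = production
  filename       = ${filename}.processed
```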
03-02-2017 03:24 PM
Certainly! You can get files from HDFS using the GetHDFS processor or the ListHDFS -> FetchHDFS processors.
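A minimal sketch of the list/fetch pattern, assuming a hypothetical source directory (the ${path}/${filename} expression in FetchHDFS refers to the attributes ListHDFS writes to each flow file):

```
ListHDFS:
  Directory: /data/incoming        (hypothetical)

ListHDFS (success) -> FetchHDFS

FetchHDFS:
  HDFS Filename: ${path}/${filename}
```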
03-01-2017 06:24 PM (1 Kudo)
QueryDatabaseTable is usually used for "incremental" fetching, meaning it will only grab "new" rows. This is based on the "Maximum-value Columns" property, which is usually set to an ID or timestamp column in the database. That is what allows the processor to grab only new rows: it keeps track of the maximum value it has seen so far for the column, and on the next run it fetches only those rows whose value is greater than the last maximum it saw. If you do not set a maximum-value column, then QueryDatabaseTable acts much like ExecuteSQL, in the sense that it will keep repeating the same query and thus return duplicate rows. So for your use case I recommend setting a maximum-value column on that processor. If there is no such column in the table, then you're really looking at more of a "run-once" scenario, which is not currently supported.
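A minimal sketch of an incremental configuration (the table and column names are hypothetical):

```
QueryDatabaseTable (hypothetical example):
  Database Connection Pooling Service: <your DBCPConnectionPool>
  Table Name:            orders
  Maximum-value Columns: last_updated
```

On each run the processor issues, in effect, SELECT * FROM orders WHERE last_updated > (last maximum seen), then stores the new maximum in its state for the next run.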
03-01-2017 05:51 PM
What does the SQL statement look like? For the error rows, what are the flow file's attributes set to?
02-28-2017 06:04 PM
What do you mean by receiving attributes from a URL? Do you mean query parameters? Or are you asking if there is a way to combine fetching the document via the URL with extracting JSON fields from the response? If the latter, there isn't a way to combine those currently; NiFi is a highly modular system, so fetching is really a different operation than extraction, hence the two separate processors.

What do you see as the issue in receiving the whole JSON response? The third-party libraries I'm familiar with still retrieve the whole HTTP response, whether they expose it via an input stream or a big string or whatever. After the initial fetch/response, as long as you don't touch the content again, there won't be an operational impact with respect to the size of the JSON response. If you no longer need the content, you can replace it with something else (or nothing) using ReplaceText.

One concept that is (hopefully) becoming more popular with web services is support for GraphQL. Using that paradigm and query language, you could ask for just the fields/structures you want, and you would get exactly the response you expect.
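To illustrate that last point, a GraphQL request against a hypothetical schema (the type and field names below are made up) asks for only the fields of interest:

```
{
  user(id: "42") {
    name
    email
  }
}
```

The server would return JSON containing only name and email, rather than the full user document.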
02-28-2017 05:46 PM (3 Kudos)
I don't think there is any other way, besides ExecuteScript or ExecuteStreamCommand (both of which require the HBase client libraries / programs, the former of which is already in the HBase NAR). I have added NIFI-3538 to track the addition of DeleteHBase processor(s).
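If you go the ExecuteScript route in the meantime, the script would use the standard HBase client API, roughly as in the following Java sketch (the table name and row key are hypothetical, and in a real deployment the configuration comes from your cluster's hbase-site.xml on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDeleteSketch {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
            // Delete the entire row for the given (hypothetical) row key
            table.delete(new Delete(Bytes.toBytes("row-to-delete")));
        }
    }
}
```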
02-23-2017 07:41 PM (1 Kudo)
As of NIFI-3418, NiFi will allow the user to set both of the aforementioned properties.
02-21-2017 06:00 PM (1 Kudo)
It seems feasible to remove this restriction from the Date and Time data types, as was done for Timestamps in NIFI-3430. Please feel free to write up a Jira case for this improvement. In the meantime, if you are trying to use a value like '2017-02-21' in a field specified as a Date/Time type, you could first convert the value (possibly in an UpdateAttribute processor) using NiFi Expression Language's toDate() function. So if you have an attribute such as 'my.date.string' set to '2017-02-21', you could set an attribute 'my.date' equal to the following: ${my.date.string:toDate('yyyy-MM-dd')}
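As a minimal sketch in UpdateAttribute (the attribute names are hypothetical): toDate() parses the string into a Date, and toNumber() can be chained on if the consumer expects epoch milliseconds:

```
UpdateAttribute dynamic properties (hypothetical):
  my.date        = ${my.date.string:toDate('yyyy-MM-dd')}
  my.date.millis = ${my.date.string:toDate('yyyy-MM-dd'):toNumber()}
```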
02-21-2017 04:13 PM (2 Kudos)
FetchFile employs the following outgoing relationships:

- success: Any FlowFile that is successfully fetched from the file system will be transferred to this Relationship.
- not.found: Any FlowFile that could not be fetched because the file could not be found will be transferred to this Relationship.
- permission.denied: Any FlowFile that could not be fetched because the user running NiFi does not have sufficient permissions will be transferred to this Relationship.
- failure: Any FlowFile that could not be fetched for any reason other than insufficient permissions or the file not existing will be transferred to this Relationship.

You could handle the non-success cases by routing the other relationship(s) back to FetchFile, as in the sketch below. Each non-success flow file will be penalized (the default is 30 seconds, but this is configurable). Then, once processing the file no longer causes an error, it should be successfully processed and transferred to the success relationship.
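A minimal sketch of that retry wiring (the upstream processor is illustrative):

```
(upstream, e.g. ListFile)     -> FetchFile
FetchFile (success)           -> downstream processing
FetchFile (not.found)         -> FetchFile   (retried after the penalty period)
FetchFile (permission.denied) -> FetchFile   (retried after the penalty period)
FetchFile (failure)           -> FetchFile   (retried after the penalty period)
```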