Member since: 11-16-2015
Posts: 885
Kudos Received: 645
Solutions: 244
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 500 | 02-02-2023 07:07 AM
 | 1722 | 12-07-2021 09:19 AM
 | 3052 | 03-20-2020 12:34 PM
 | 10369 | 01-27-2020 07:57 AM
 | 3464 | 08-09-2019 05:46 PM
04-27-2016
06:26 PM
3 Kudos
Dynamic properties (like your "testProperty") are passed in as variables, and they are PropertyValue objects. PropertyValue has a method called evaluateAttributeExpressions(), and if you want to resolve attributes from a FlowFile, you can pass in a reference to that flow file. Then call getValue() (or just ".value" in Groovy) and the property will have been evaluated correctly. Since you're using the "filename" attribute, I assume you will be getting that from an incoming flow file, so you will need to get the flow file from the session, pass it into evaluateAttributeExpressions(), and don't forget to transfer or remove the flow file afterwards. Here is an example in Groovy:

```groovy
flowFile = session.get()
if (!flowFile) return
log.info("-----------------------" + testProperty.evaluateAttributeExpressions(flowFile).value)
session.transfer(flowFile, REL_SUCCESS)
```
04-27-2016
04:27 AM
Absolutely! With ExecuteScript (with, say, Groovy as the language), the script body could be something like:

```groovy
import org.apache.commons.lang3.StringUtils

flowFile = session.get()
if (!flowFile) return
flowFile = session.putAttribute(flowFile, 'my_nifi_attribute', StringUtils.reverse(flowFile.getAttribute('my_nifi_attribute')))
session.transfer(flowFile, REL_SUCCESS)
```
04-27-2016
03:34 AM
1 Kudo
Regarding @Artem Ervits' comment: if the Java class is available in the class loader (either by being part of the NiFi core or by being specified as a JAR in ExecuteScript's Module Directory property), then you can call it directly; you don't need reflection. If I'm not answering your question, can you provide more details? I'm happy to help 🙂
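To illustrate the difference (a standalone sketch using a JDK class rather than a NiFi class, since the original thread doesn't name one): when a class is already on the classpath, a direct call and a reflective call produce the same result, but the direct call is simpler and checked at compile time.

```java
import java.lang.reflect.Method;

public class DirectVsReflection {
    public static void main(String[] args) throws Exception {
        // Direct call: available whenever the class is on the classpath,
        // e.g. NiFi core classes or JARs added via Module Directory.
        String direct = Integer.toBinaryString(10);

        // Reflection: only needed when the class or method name
        // isn't known until runtime.
        Method m = Integer.class.getMethod("toBinaryString", int.class);
        String reflected = (String) m.invoke(null, 10);

        System.out.println(direct);    // "1010"
        System.out.println(reflected); // "1010"
    }
}
```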
04-05-2016
01:13 PM
8 Kudos
As of NiFi 0.6.0, there is a processor called QueryDatabaseTable that does something like this: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html You specify the columns you'd like the processor to keep track of, and it will store the maximum value it has "seen" so far for each of them. When the processor runs, it generates a SQL query that returns only rows whose values in those columns are greater than the observed maximums. So if you have a unique (and increasing) ID field, for example, you can specify that column in QueryDatabaseTable; the first execution will return all rows, and subsequent executions will not return any rows until one (or more) has an ID value greater than the stored maximum. This can be used to do incremental fetching, as Sqoop does for added/updated rows, using columns that contain timestamps, IDs, or other increasing values.
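The max-value logic described above can be sketched roughly as follows. This is a hypothetical illustration of the idea, not NiFi's actual implementation, and the class, table, and column names are made up:

```java
// Rough sketch of the max-value tracking QueryDatabaseTable performs
// for an increasing column such as a unique ID.
public class MaxValueFetchSketch {
    // In the real processor this state is persisted between runs.
    private long maxSeenId = 0;

    // Build the incremental query; the first run has no WHERE clause,
    // so it returns every row in the table.
    public String buildQuery(String table, String idColumn) {
        String base = "SELECT * FROM " + table;
        return (maxSeenId == 0)
                ? base
                : base + " WHERE " + idColumn + " > " + maxSeenId;
    }

    // After fetching, record the largest ID observed so the next
    // run only picks up newly added rows.
    public void updateMax(long observedMaxId) {
        maxSeenId = Math.max(maxSeenId, observedMaxId);
    }

    public static void main(String[] args) {
        MaxValueFetchSketch sketch = new MaxValueFetchSketch();
        System.out.println(sketch.buildQuery("users", "id")); // SELECT * FROM users
        sketch.updateMax(1000);
        System.out.println(sketch.buildQuery("users", "id")); // SELECT * FROM users WHERE id > 1000
    }
}
```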
04-05-2016
11:44 AM
2 Kudos
Even with the standalone JAR, ExecuteSQL will not work with Hive due to the Hive JDBC driver not implementing some of the JDBC API calls made by ExecuteSQL. There is a Jira case to add Hive support: https://issues.apache.org/jira/browse/NIFI-981. There are some workarounds described in the following nifi-users email (from the archive): https://mail-archives.apache.org/mod_mbox/nifi-users/201601.mbox/%3C43A01CF2-2CC6-4A01-8836-86B9B3BEA31B@gmail.com%3E
03-11-2016
06:33 PM
1 Kudo
Being the client OS superuser doesn't imply that user is a superuser on HDFS. You'd need to add a user to Hadoop with the same UID and GID as your client user, and make the Hadoop user a superuser on that system, or better yet, just give the user the permissions it needs for the desired folder(s): http://stackoverflow.com/questions/24184306/how-to-add-user-in-supergroup-of-hdfs-in-linux
03-11-2016
04:01 PM
3 Kudos
If NiFi is running as a user that can change permissions in HDFS, you should be able to use the following PutHDFS properties to do the chown/chmod, according to the documentation (https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html):

- Permissions umask
- Remote Owner
- Remote Group
03-11-2016
01:51 PM
8 Kudos
If the expression refers to attributes on a flow file, you will need to pass a reference to the flow file into evaluateAttributeExpressions():

```java
FlowFile flowFile = session.get();
jsonObject.put("hostname", context.getProperty(ADD_ATTRIBUTE).evaluateAttributeExpressions(flowFile).getValue());
```

If the property contains an attribute name (rather than an Expression containing an attribute name) and you want the value from the flow file:

```java
jsonObject.put("hostname", flowFile.getAttribute(context.getProperty(ADD_ATTRIBUTE).getValue()));
```

If the attribute value itself contains Expression Language and you want to evaluate it, take a look at the following class: org.apache.nifi.attribute.expression.language.Query
03-02-2016
10:00 PM
3 Kudos
I was able to generate JSON and send it via InvokeHttp POST to a test endpoint (http://httpbin.org/post); it read the incoming flow file contents (the JSON) and responded correctly based on the input file. Does the InvokeHttp response make sense for what you intend to send via POST? If not, you may need to (if you haven't already) set the "mime.type" attribute to "application/json" before sending the flow file to InvokeHttp. This can be done with the UpdateAttribute processor (see my example template below). If GetFile still doesn't seem to work, perhaps try ListFile followed by FetchFile. Having said that, I wouldn't think this would make a difference; I suspect there's something else in the flow file content and/or attributes (or a lack thereof) that causes InvokeHttp to not respond as intended. My template is available as a Gist (here).
02-29-2016
06:23 PM
3 Kudos
I don't think PostHTTP outputs the response as a flow file. If you are looking for the response to be in a flow file, try InvokeHttp (with HTTP Method property set to "POST") instead and send its "response" relationship to your PutHDFS processor.