Member since: 11-16-2015
Posts: 905
Kudos Received: 665
Solutions: 249
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 412 | 09-30-2025 05:23 AM |
| | 726 | 06-26-2025 01:21 PM |
| | 629 | 06-19-2025 02:48 PM |
| | 836 | 05-30-2025 01:53 PM |
| | 11330 | 02-22-2024 12:38 PM |
10-11-2017 05:38 PM
1 Kudo
I have written NIFI-4479 to cover the addition of a DeleteMongo processor.
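In the meantime, here's a rough sketch of the kind of delete such a processor would perform, using the MongoDB Java driver directly (e.g. from a custom processor or ExecuteScript). The host, database, collection, and filter below are all placeholders, not anything from NIFI-4479:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;

public class MongoDeleteSketch {
    public static void main(String[] args) {
        // Hypothetical connection details; substitute your own host,
        // database, collection, and filter.
        try (MongoClient client = new MongoClient("localhost", 27017)) {
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("mycollection");
            // Delete every document matching the placeholder filter.
            DeleteResult result = collection.deleteMany(Filters.eq("status", "obsolete"));
            System.out.println("Deleted " + result.getDeletedCount() + " document(s)");
        }
    }
}
```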
10-11-2017 04:29 PM
It's hard to tell from the formatting of your code above, but if you are executing session.remove(flowFile1) and then later trying to transfer it to REL_SUCCESS, you will get that error. You can either restructure the logic so the remove and transfer calls are in an if-else block, or keep a boolean variable indicating whether the flow file has already been removed and only transfer when it has not. It looks like you already have an if-clause checking firstChild for "false"; perhaps you could put the transfer in an else-clause, as in the sketch below.
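A minimal sketch of the if-else approach, assuming a custom processor; REL_SUCCESS stands for your processor's existing success relationship, and the "first.child" attribute name is hypothetical, standing in for whatever your check actually reads:

```java
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;

@Override
public void onTrigger(ProcessContext context, ProcessSession session) {
    FlowFile flowFile1 = session.get();
    if (flowFile1 == null) {
        return;
    }
    String firstChild = flowFile1.getAttribute("first.child");
    if ("false".equals(firstChild)) {
        // Remove the flow file here; never reference it again afterwards.
        session.remove(flowFile1);
    } else {
        // This branch did not remove the flow file, so the transfer is safe.
        session.transfer(flowFile1, REL_SUCCESS);
    }
}
```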
10-09-2017 05:58 PM
What do you mean by "no convert happens"? PutParquet should write Parquet file(s) to the HDFS directory you configured in the processor. I believe the flow file transferred to the "success" relationship is the incoming one (once the converted file has been successfully written to HDFS), not the converted file. To get the converted content as a flow file, I imagine there would have to be a ParquetRecordSetWriter, and you'd use ConvertRecord instead of PutParquet.
10-09-2017 02:10 PM
How are you uploading the file? Processors like GetFile will set the filename attribute to the same name that was on disk.
10-05-2017 06:07 PM
I'm not sure what your custom GetFile does, but the existing GetFile has a "Keep Source File" property, which defaults to false. When set to false, the file is deleted from the file system once it has been processed by GetFile. When set to true, the source file remains and will therefore be processed again the next time the GetFile processor is triggered to run. You may want to support the same behavior in your custom processor for your use case; a sketch follows.
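For reference, a minimal sketch of how a custom processor could expose the same property. This is illustrative, modeled on GetFile's property, not the actual GetFile source:

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Modeled on GetFile's "Keep Source File" property.
public static final PropertyDescriptor KEEP_SOURCE_FILE = new PropertyDescriptor.Builder()
        .name("Keep Source File")
        .description("If false, the file is deleted from the file system once it has "
                + "been processed; if true, it remains and will be picked up again "
                + "the next time the processor runs.")
        .required(true)
        .allowableValues("true", "false")
        .defaultValue("false")
        .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
        .build();

// Then in onTrigger(), after the file has been successfully processed
// ("sourcePath" is a placeholder for wherever you track the file's location):
//
//   if (!context.getProperty(KEEP_SOURCE_FILE).asBoolean()) {
//       new java.io.File(sourcePath).delete();
//   }
```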
09-16-2017 01:29 AM
1 Kudo
You should be able to use Line-by-Line as the Evaluation Mode in the ReplaceText processor, with \n (or \\n, if the backslash needs to be escaped) as the match value and | (or \|, if the pipe needs to be escaped to be proper regex) as the replace value.
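If you want to sanity-check the regex outside NiFi, the same substitution in plain Java looks like this; it's just a stand-alone illustration of the pattern, not NiFi code:

```java
public class NewlineToPipe {
    public static void main(String[] args) {
        String content = "a\nb\nc";
        // In the regex, \n matches a newline; in the replacement string the
        // pipe is a literal and needs no escaping.
        String replaced = content.replaceAll("\\n", "|");
        System.out.println(replaced); // prints: a|b|c
    }
}
```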
09-16-2017 01:26 AM
1 Kudo
Try || instead of CONCAT or +; the former is the standard and the latter are not, according to this. For example: SELECT first_name || last_name AS full_name FROM mytable.
09-05-2017 10:42 PM
2 Kudos
There could be a couple of things going on here; there is some discussion of each in the thread you mentioned:

1) The X-Pack JAR has multiple dependencies that are not included. When you install the X-Pack plugin into an Elasticsearch node, these dependencies are extracted and added to the ES path so the ES code can find them. On a NiFi node this must be done manually. Check the other thread for the X-Pack ZIP (not JAR); you will need to unzip it somewhere and point to the elasticsearch/ folder underneath it. Your "X-Pack Transport Location" property should be set to a comma-delimited list with two items: the transport JAR, and the elasticsearch/ subfolder that contains the x-pack JAR and all its dependencies (see the example value at the end of this answer).

2) The Elasticsearch native client (used by all the ES processors that don't end in "Http") is VERY particular about versions, meaning there is no guarantee that the one used by NiFi will be compatible with the ES cluster unless they are the same major and minor versions (I think dot releases -- X.Y.1 or X.Y.2 -- are ok). PutES5 comes with the 5.0.1 client, which means it should work with all ES 5.0.x clusters; however, there is no guarantee that it will work with a 5.5.x cluster. In fact, I believe Elastic has replaced the native client in 5.5 with a Java one that wraps the REST API. You can try the 5.0.1 X-Pack and Transport JARs (as one person in the other thread did) to see if that works.

If you don't require the native client, you may be better served by using PutElasticsearchHttp and enabling TLS/SSL for your Elasticsearch cluster. This (plus setting up access controls for authorization) should give you a robust way to deal with secure Elasticsearch clusters of any version. With this approach you can also have X-Pack installed on your ES cluster but interact with it from NiFi using the Http versions of the processors; this is how you'd interact with other X-Pack capabilities such as Marvel and Watcher. In that case you shouldn't need the X-Pack plugin or the transport JAR on the NiFi node, as PutElasticsearchHttp does not use the native client.
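As an example of the "X-Pack Transport Location" format described in item 1, the value might look like /opt/nifi/es-libs/x-pack-transport-5.0.1.jar,/opt/nifi/es-libs/x-pack/elasticsearch/ -- the paths (and the exact JAR filename) here are hypothetical; use wherever you actually placed the transport JAR and unzipped the X-Pack ZIP.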
08-30-2017 06:03 AM
If you want to include NPM modules, check this link for more details on how to use them with Nashorn.
08-29-2017 03:31 PM
1 Kudo
Where did you get ImportSqoopFull? To my knowledge that processor is in neither Apache NiFi nor HDF NiFi. Wherever you got it, hopefully there is some documentation or code there to help you with your issues.