Member since: 08-17-2016
Posts: 45
Kudos Received: 21
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2397 | 09-05-2018 09:20 PM |
| | 1900 | 06-29-2017 06:50 PM |
| | 11023 | 02-28-2017 07:12 PM |
| | 2288 | 11-11-2016 01:57 AM |
06-29-2017 06:41 PM
1 Kudo
@Ilya Li I agree with @Bryan Bende that the best approach is to refactor things so that the shared classes are moved to a module under nifi-extension-utils. I did this mainly for ListAzureBlobStorage, since it used the AbstractListProcessor code. Take a look at https://github.com/apache/nifi/pull/1719, particularly the last four commits before the PR was merged to master, for an example of the refactoring.
05-30-2017 04:07 PM
@Alejandro A. Did this answer end up solving your use case?
05-24-2017 08:51 PM
3 Kudos
@Alejandro A. Are you saying you would like, at the end of this particular portion of your flow, to have the original content in one flowfile and a second flowfile whose content is the output generated by your external jar? If so, there are a few ways you could do this.

One of them would be to use a DuplicateFlowFile processor to create a second copy of your flowfile, then use a ReplaceText processor on that second copy to write the attribute value out as its content. You can use the Wait and Notify processors to wait for the processing of that flowfile; an example usage of the Wait/Notify processors can be found here. For the Release Signal Identifier, you can use ${filename} as the example suggests, but if your filenames aren't unique, you could use an UpdateAttribute processor to capture the original UUID of the flowfile before the DuplicateFlowFile processor. This is probably the easiest way to know when that second flowfile has been processed.

You could then use MergeContent with a Correlation Attribute Name set to the same value as the Release Signal Identifier (and Max Number of Entries set to 2), and make sure the original flowfile gets routed from its Wait processor's success relationship to the MergeContent processor, along with the success relationship of the second flowfile. If you're processing many different files concurrently, make sure that Maximum Number of Bins is equal to or greater than the number of concurrent files. A rough sketch of these settings follows below.

I could probably create a sample flow of this, if you have trouble putting it together.
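For reference, a rough sketch of the property settings described above; the attribute name original.uuid is just an illustration, and the exact values will depend on your flow:

    UpdateAttribute (before DuplicateFlowFile):
        original.uuid = ${uuid}
    Wait / Notify:
        Release Signal Identifier = ${original.uuid}
    MergeContent:
        Correlation Attribute Name = original.uuid
        Minimum Number of Entries  = 2
        Maximum Number of Entries  = 2
        Maximum Number of Bins     = at least the number of files in flight at once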
05-04-2017 07:04 PM
@Jatin Kheradiya In nifi.properties on each node of your cluster, is nifi.state.management.embedded.zookeeper.start set to false?
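For reference, the relevant line in nifi.properties looks like this; false means NiFi will not start its embedded ZooKeeper and expects an external ZooKeeper instead:

    # nifi.properties
    nifi.state.management.embedded.zookeeper.start=false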
03-13-2017 03:06 PM
@Mohammed El Moumni Are other, smaller files merging? I notice in both of your screenshots that the MergeContent processor is stopped, which will prevent files from being merged. Was the processor stopped just to take the screenshots?
03-10-2017 08:36 PM
1 Kudo
@Mohammed El Moumni If you take a look at the details of the flowfiles in the input queue for MergeContent, do you see the correlation attribute present on both flowfiles? Is it possible that, elsewhere in the flow, a flowfile with the same correlation ID as one of the two flowfiles in the incoming queue was sent to a failure relationship and dropped from the flow?

In the past, I have done some processing of files from one of the Split* processors and encountered errors processing one of the fragments. Due to the way I had designed the flow, the fragment with the error was routed via a failure relationship to another processor that terminated the processing of that flowfile, so not all of the fragments from the split were sent to MergeContent. This caused all the other fragments to sit in the incoming queue of MergeContent indefinitely.
03-09-2017 06:51 PM
@Saikrishna Tarapareddy
In addition to the answer submitted by @Matt Clarke, ExecuteProcess and ExecuteStreamCommand should work as well. However, you'll want to move the arguments you're passing to kinit into the "Command Arguments" property of the respective processor. The "Command" property should be set to "kinit" (or "/usr/bin/kinit"; the full path to the executable can be provided), and the "Command Arguments" property should be set to "-k -t /etc/security/keytabs/nifi.keytab nifi/server@domain.COM". The "Argument Delimiter" should be set to the space character, since you do not have any embedded spaces in the arguments you're using, or you can use the ";" character, for instance. In that case, "Command Arguments" should be set to "-k;-t;/etc/security/keytabs/nifi.keytab;nifi/server@domain.COM".
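Put together, the configuration would look roughly like this (property names as shown in ExecuteProcess; ExecuteStreamCommand calls the first one "Command Path"; paths and principal are the ones from your example, so adjust as needed):

    Command            = /usr/bin/kinit
    Command Arguments  = -k;-t;/etc/security/keytabs/nifi.keytab;nifi/server@domain.COM
    Argument Delimiter = ;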
03-01-2017 03:46 AM
In your NiFi install, can you try renaming the work directory to something else, such as work-backup, and restarting NiFi? Also, have you changed or replaced any of the other JARs or NARs in the lib directory?
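Something along these lines, assuming a standalone install rooted at /opt/nifi (the path is an assumption; use your actual NiFi home):

    # /opt/nifi is an assumed install location
    cd /opt/nifi
    ./bin/nifi.sh stop
    mv work work-backup
    ./bin/nifi.sh start    # NiFi unpacks the NARs into a fresh work directory on startup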
02-28-2017 07:12 PM
2 Kudos
@Raj B Most likely, the nifi-hive-nar-1.1.0.nar file you placed in the lib directory was unpacked into the work directory, in which case it will need to be removed there as well; removing the NAR from the lib directory is only one step of the cleanup. You could try renaming the work directory to something else and restarting NiFi to see if that resolves your issue, provided the lib directory is clean, i.e. it contains only the default libraries from the install. You'd have to do this on each node that runs NiFi. When NiFi restarts, it will unpack all the NARs into the work directory again.

If you're running NiFi in HDF, installed by Ambari, that work directory should be located at /var/lib/nifi/work by default.
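For an Ambari-managed HDF node, the cleanup would look roughly like this (the lib path is an assumption for your install; the work path is the HDF default mentioned above):

    # paths are assumptions; verify them against your installation
    rm /usr/hdf/current/nifi/lib/nifi-hive-nar-1.1.0.nar    # the manually added NAR
    mv /var/lib/nifi/work /var/lib/nifi/work-backup         # rebuilt on the next startup
    # then restart NiFi on each node from Ambari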