Member since: 07-30-2019
Posts: 105
Kudos Received: 129
Solutions: 43

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1303 | 02-27-2018 01:55 PM |
| | 1704 | 02-27-2018 05:01 AM |
| | 4692 | 02-27-2018 04:43 AM |
| | 1272 | 02-27-2018 04:18 AM |
| | 4098 | 02-27-2018 03:52 AM |
04-26-2016
08:40 PM
5 Kudos
Hello. Unfortunately, this means the process NiFi was told to execute has not returned. In such a case there is an outstanding thread, and we intentionally prevent additional instances from being started until that one is dealt with. From the stack trace you provided we see:

"Timer-Driven Process Thread-9" Id=69 RUNNABLE (in native code)
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:272)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
- waiting on java.lang.UNIXProcess$ProcessPipeInputStream@e6f0a2f
This tells us we're sitting and waiting for that command to do something (finish, or respond with data). It appears to be in a hung state. You'll need to restart NiFi and try to assess why that command isn't doing anything. What command are you trying to run? It might be that for your case the ExecuteProcess processor is a better fit. Thanks, Joe
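As an illustrative sketch (plain Python, not NiFi code, and the function name is made up): the blocked thread above is doing an unbounded read on the child process's output stream, which hangs forever if the command never produces output or exits. Bounding the wait with a timeout is how a caller can detect the hang instead of blocking indefinitely:

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_sec):
    """Run an external command, but give up after timeout_sec seconds."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout_sec)
        return result.stdout
    except subprocess.TimeoutExpired:
        return None  # command appears hung; caller can log/alert/kill

# A quick command returns its output; a hung one returns None.
print(run_with_timeout([sys.executable, "-c", "print('hi')"], 5))
```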
04-21-2016
09:25 PM
1 Kudo
Hello, Keep in mind that each FlowFile is composed of attributes and content. RouteOnAttribute is built specifically to look at FlowFile attributes; RouteOnContent looks at FlowFile content. In the case of JSON you can, for example, use EvaluateJsonPath to extract fields from the JSON into FlowFile attributes. You can use SplitJson to split a JSON bundle into individual FlowFiles, then extract values, then route on attributes. The various processors offer you a variety of ways to route, transform, and deliver the data. Thanks, Joe
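Conceptually, the extract-then-route pattern described above can be sketched in plain Python (this mimics the idea, not NiFi's actual API):

```python
import json

def evaluate_json_path(content, field):
    """Like EvaluateJsonPath: pull one top-level JSON field out of the
    FlowFile content so it can become a FlowFile attribute."""
    return str(json.loads(content)[field])

def route_on_attribute(attributes, name, expected):
    """Like RouteOnAttribute: route to 'matched' when the attribute
    equals the expected value, otherwise to 'unmatched'."""
    return "matched" if attributes.get(name) == expected else "unmatched"

content = '{"type": "alert", "level": 5}'
attrs = {"type": evaluate_json_path(content, "type")}
print(route_on_attribute(attrs, "type", "alert"))  # -> matched
```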
04-12-2016
02:02 PM
2 Kudos
Using the UnpackContent processor you can take the items out of tar or zip archives as individual FlowFiles. Metadata about those objects is retained on each FlowFile. You can then operate on the individual unpacked items as needed and, if necessary, recombine them back into a zip or tar using MergeContent with the 'Defragment' merge strategy.
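A rough sketch of the idea (plain Python, not NiFi internals; the attribute names mirror what NiFi uses for fragments, but treat them as illustrative): unpacking keeps per-item content plus enough metadata (original name, index, count) for a later defragment step to rebuild the archive.

```python
import io
import tarfile

def make_tar(files):
    """Build an in-memory tar from a {name: bytes} mapping."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unpack(tar_bytes):
    """Yield (content, fragment-metadata) for each archive member."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        members = tar.getmembers()
        for i, m in enumerate(members):
            yield tar.extractfile(m).read(), {
                "segment.original.filename": m.name,
                "fragment.index": i,
                "fragment.count": len(members),
            }

items = list(unpack(make_tar({"a.txt": b"alpha", "b.txt": b"beta"})))
print([meta["segment.original.filename"] for _, meta in items])
```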
04-08-2016
03:33 PM
This is due to https://issues.apache.org/jira/browse/NIFI-990, which fixed the failure relationship having been mistakenly omitted in earlier versions. Templates made against the previous version will not have that relationship checked, since it did not exist then, so the processor will show as invalid until you mark the relationship as auto-terminated or connect it. You may wish to recreate the template.
03-16-2016
02:08 PM
5 Kudos
The HDF release does support interacting with Kerberized Kafka instances as found within the HDP stack. This is because HDP added support for Kerberized Kafka (Kafka 0.8.x) in advance of the community supporting it. In the Apache Kafka world now (0.9.x) there is Kerberos support. So:
- Apache NiFi supports non-Kerberized Kafka clusters today.
- HDF releases of NiFi have patched support for Kerberized Kafka clusters in HDP.
- Upcoming Apache NiFi releases will add support for the 0.9.x Apache Kafka Kerberos model.
Thanks, Joe
02-26-2016
03:11 AM
2 Kudos
I haven't tested this myself, but NAS/SAN arrangements have worked quite well in the past. It needs testing to understand the latencies and tradeoffs, but frankly I suspect it will work just fine.
02-12-2016
02:03 AM
1 Kudo
In the Get/PutKafka processors you should be able to add a dynamic property called 'fetch.message.max.bytes' and set the value you need. These processors allow you to add dynamic properties that map to Kafka consumer/producer properties, and they will pass them through to the consumer/producer config as needed.
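A minimal sketch of the pass-through idea (plain Python, not NiFi source; the base property values are made-up placeholders): processor-defined properties form the base config, and user-added dynamic properties are merged straight into the Kafka client configuration.

```python
def build_consumer_config(base, dynamic_properties):
    """Merge user-supplied dynamic properties over the processor's
    base Kafka config; dynamic properties take precedence."""
    config = dict(base)
    config.update(dynamic_properties)
    return config

cfg = build_consumer_config(
    {"zookeeper.connect": "zk:2181", "group.id": "nifi"},  # placeholders
    {"fetch.message.max.bytes": "10485760"},  # ~10 MB max fetch size
)
print(cfg["fetch.message.max.bytes"])
```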
01-28-2016
02:04 AM
2 Kudos
NiFi does not at present offer any generic SOAP interface support; you would need to build a custom processor to do that. You can think of NiFi as a great host for the Java process suggested in this thread. Once the data is pulled via the SOAP API, you can use NiFi to do any number of things, such as delivering it to Kafka, all within a managed process. You then get the benefits NiFi offers while addressing your core use case.
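As a hedged sketch of what that custom SOAP-pulling step might involve (names and the service URI are hypothetical; shown in Python for brevity rather than Java): construct a SOAP 1.1 envelope, POST it with your HTTP client of choice, and hand the response payload to NiFi for onward delivery.

```python
def soap_envelope(body_xml):
    """Wrap a request body in a minimal SOAP 1.1 envelope, ready to be
    POSTed to the service endpoint by any HTTP client."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>" + body_xml + "</soap:Body>"
        "</soap:Envelope>"
    )

# Hypothetical operation and namespace for illustration only.
envelope = soap_envelope("<GetData xmlns='http://example.com/svc'/>")
print("soap:Body" in envelope)
```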
01-23-2016
01:33 AM
Vance, currently in NiFi any user with the DFM permission can create as many flows as necessary. It is not uncommon for a single instance of NiFi to handle hundreds or more processors representing dozens or hundreds of distinct dataflows. People are often surprised by that; however, a lot of effort has gone into the design of the repositories, threading model, and user interface to support a wide variety of functions and flows. It is certainly a solid complement to the powerful analysis and processing platforms that systems like Storm and Spark provide, or the storage/access systems that Kafka and HDFS provide.
01-22-2016
12:48 AM
4 Kudos
Vance, We completely agree with you. NiFi already supports some powerful security and multi-role authorization capabilities, but as you mention, we should support multiple groups with different levels of access to various parts of the flow. That is an important roadmap item and work is underway. You can see a bit of the NiFi community's thinking on this wiki page: https://cwiki.apache.org/confluence/display/NIFI/Multi-Tentant+Dataflow and in related threads such as https://cwiki.apache.org/confluence/display/NIFI/Redesign+User+Interface and https://cwiki.apache.org/confluence/display/NIFI/Support+Authorizer+API If you need help setting up a secure NiFi, you can read more here: https://community.hortonworks.com/articles/886/securing-nifi-step-by-step.html and in the administration guide: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#security-configuration Thanks, Joe