Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4261 | 12-03-2018 02:26 PM |
| | 3203 | 10-16-2018 01:37 PM |
| | 4309 | 10-03-2018 06:34 PM |
| | 3165 | 09-05-2018 07:44 PM |
| | 2425 | 09-05-2018 07:31 PM |
11-20-2016
07:07 PM
1 Kudo
Could you use the MonitorActivity processor? If you have a constant flow of data to your database then you could connect the success relationship from PutSQL to a MonitorActivity processor, and send an alert if nothing has been successful in X minutes. It is not really tied to the queue threshold at all, but would likely indicate an error if you hadn't seen any successful flow files in a while. You could also write a custom ReportingTask that went through all the bulletins (warnings/errors) to find bulletins for any PutSQL processor and then send an email. This requires a bit of development work though.
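To make the bulletin-scanning idea concrete, here is a minimal Python sketch of the filtering logic such a custom ReportingTask would perform. The bulletin dictionaries, field names, and the alert printout are hypothetical stand-ins; a real ReportingTask would be written in Java against NiFi's ReportingContext/BulletinRepository API.

```python
# Hypothetical stand-ins for bulletins; a real ReportingTask would query
# NiFi's BulletinRepository via the ReportingContext instead.
bulletins = [
    {"source_name": "PutSQL", "level": "ERROR", "message": "Failed to update database"},
    {"source_name": "GetFile", "level": "WARNING", "message": "Directory missing"},
    {"source_name": "PutSQL", "level": "WARNING", "message": "Connection slow"},
]

def putsql_error_bulletins(bulletins):
    """Keep only ERROR-level bulletins emitted by PutSQL processors."""
    return [b for b in bulletins
            if b["source_name"] == "PutSQL" and b["level"] == "ERROR"]

for b in putsql_error_bulletins(bulletins):
    # In a real task, this is where you would send the alert email.
    print(f"ALERT {b['source_name']}: {b['message']}")
```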
11-20-2016
06:34 PM
2 Kudos
There isn't a way that I know of to send alerts based on the threshold. The threshold is used by NiFi to trigger back-pressure: when the threshold is reached, the components immediately before the queue will no longer be triggered to run until the queue drops back below the threshold. In the next Apache NiFi release (1.1) there will be a color indicator showing whether back-pressure is occurring.
11-18-2016
01:52 PM
I think you need to put the custom properties in the "Custom nifi-properties" section, since they ultimately need to get written out to nifi.properties. You can check whether it worked by looking at /etc/nifi/conf/nifi.properties: by default you will see nifi.content.repository.directory.default=/var/lib/nifi/content_repository, and if your additions worked you should see two more entries.
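As a sketch, if the goal were two additional content repository directories, the rendered nifi.properties might end up looking like this (the repository names and paths below are illustrative, not taken from the question):

```properties
# Default entry
nifi.content.repository.directory.default=/var/lib/nifi/content_repository
# Two additional repositories (example names and paths)
nifi.content.repository.directory.content1=/disk2/nifi/content_repository
nifi.content.repository.directory.content2=/disk3/nifi/content_repository
```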
11-17-2016
08:54 PM
2 Kudos
Check your nifi.properties; there are two relevant properties:

nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=

It is probably waiting 5 minutes based on the first property. Usually you want to set the second property to a number less than or equal to the number of nodes in your cluster; the nodes will then vote on the flow, which will likely complete much faster than the 5-minute wait.
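As an illustration, for a 3-node cluster the entries might look like the following (the value 3 is an example; set it to at most the number of nodes in your cluster):

```properties
# Maximum time to wait for the flow election before proceeding anyway
nifi.cluster.flow.election.max.wait.time=5 mins
# The election completes as soon as this many nodes have voted
nifi.cluster.flow.election.max.candidates=3
```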
11-17-2016
05:51 PM
1 Kudo
This is a known problem when Phoenix is enabled; see a similar post here: https://community.hortonworks.com/questions/57874/error-unable-to-find-orgapachehadoophbaseipccontro.html

That class actually comes from Phoenix: https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/ServerRpcControllerFactory.java

It will be fixed in Apache NiFi 1.1 by allowing users to specify the path to the Phoenix client JAR. For now you can copy phoenix-client.jar to nifi_home/work/nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.1.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/ (adjusting the directories for your version).
11-17-2016
05:24 PM
2 Kudos
It looks like something is wrong with the users.xml file. Can you paste the contents of that file?
11-17-2016
04:19 PM
2 Kudos
Can you provide more of the stacktrace? I think there should be more after the UnmarshallException.
11-15-2016
05:38 PM
NiFi's Kafka processors use the KafkaConsumer provided by the Apache Kafka client library, and in versions 0.9 and 0.10 of the client library that consumer uses the "bootstrap.servers" property, so there is no way to use ZooKeeper. (https://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html)
11-15-2016
02:38 PM
3 Kudos
What Andrew is saying is that Kafka has moved away from using ZooKeeper for much of the consumer side. Starting in 0.9, consumers store offsets directly in Kafka and use a new mechanism for group management. You can find a lot of articles about it; one of them is http://www.jesse-anderson.com/2016/04/kafka-0-9-0-changes-for-developers/ You'll see that all examples for the 0.9 and 0.10 consumers take "bootstrap servers" (i.e. brokers) rather than a ZooKeeper connection string.
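As a sketch, a 0.9/0.10 consumer configuration points at brokers rather than ZooKeeper (the host names and group id below are made up for illustration):

```properties
# New consumer (0.9+): list of Kafka brokers, no ZooKeeper needed
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
group.id=example-consumer-group
# The old 0.8 consumer used a ZooKeeper connect string instead:
# zookeeper.connect=zk1:2181,zk2:2181
```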
11-15-2016
02:07 AM
PutEmail is definitely good for specific parts of the flow; as you mentioned, it can get complex quickly trying to route all the failures to a single PutEmail.

The ReportingTask is definitely a good idea. When a ReportingTask executes, it gets access to a ReportingContext, which exposes the BulletinRepository, which in turn gives you access to any of the bulletins you see in the UI. You could have one that collected all the error bulletins and sent them somewhere, or emailed them.

Along the lines of monitoring the logs, you could probably configure NiFi's logback.xml to forward all log events at the ERROR level over UDP or TCP, and then have a ListenUDP/ListenTCP processor in NiFi receive them and send an email. In a cluster, I guess you would have all nodes forward to just one of the nodes. This introduces the possibility of circular logic: if ListenUDP/ListenTCP had problems, that would generate more ERROR logs, which would get sent back to ListenUDP/ListenTCP, producing still more errors until the underlying problem was resolved. That is probably rare, though.
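As a sketch of the log-forwarding idea, a logback.xml appender that ships ERROR-level events over UDP syslog to a listening NiFi processor might look like the following. The host, port, and pattern are illustrative assumptions, not a tested configuration, and a ListenSyslog/ListenUDP processor would need to be listening on the matching port:

```xml
<!-- Forward ERROR-level events to one NiFi node over UDP syslog -->
<appender name="NIFI_FORWARD" class="ch.qos.logback.classic.net.SyslogAppender">
  <syslogHost>nifi-node1.example.com</syslogHost>
  <port>10514</port>
  <facility>LOCAL0</facility>
  <suffixPattern>[%thread] %logger %msg</suffixPattern>
  <!-- Only forward ERROR and above -->
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>ERROR</level>
  </filter>
</appender>

<root level="INFO">
  <appender-ref ref="NIFI_FORWARD"/>
</root>
```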