Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4269 | 12-03-2018 02:26 PM |
| | 3210 | 10-16-2018 01:37 PM |
| | 4317 | 10-03-2018 06:34 PM |
| | 3175 | 09-05-2018 07:44 PM |
| | 2431 | 09-05-2018 07:31 PM |
03-06-2017
06:25 PM
This post describes the behavior well: https://stackoverflow.com/questions/32390265/what-determines-kafka-consumer-offset
03-06-2017
06:22 PM
What version of NiFi and what version of Kafka? NiFi 1.x has GetKafka for Kafka 0.8, ConsumeKafka for Kafka 0.9, and ConsumeKafka_0_10 for Kafka 0.10. Whenever possible, the processor matching the broker version should be used. I believe all of them have a property that controls the initial offset the first time the processor is ever started, basically saying whether to start at the beginning of the topic or at the latest offset. After that it will always use the last offset the Kafka client has consumed, so it never misses data. If you ever want to start over from the current time, I believe you can just switch to a new consumer group id with Offset Reset set to latest.
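The same behavior can be seen in a plain Kafka Java consumer: `auto.offset.reset` only applies when the consumer group has no committed offset yet, which is why switching to a new group id restarts "from now". A minimal sketch of the relevant configuration (the broker address and group id here are placeholders):

```java
import java.util.Properties;

public class OffsetResetDemo {
    // Build consumer properties mirroring ConsumeKafka's "Offset Reset" setting.
    static Properties consumerProps(String groupId, String offsetReset) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", groupId);
        // Only consulted when this group id has NO committed offset yet:
        // "earliest" = start of topic, "latest" = only messages from now on.
        props.put("auto.offset.reset", offsetReset);
        return props;
    }

    public static void main(String[] args) {
        // A brand-new group id with "latest" effectively starts over at the current time.
        Properties props = consumerProps("fresh-group", "latest");
        System.out.println(props.getProperty("group.id"));
        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```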
03-06-2017
06:14 PM
1 Kudo
You need to configure NiFi to use a login identity provider such as LDAP or Kerberos, and then navigate to the secure URL of the NiFi web UI, for example https://localhost:8443/nifi. If your browser has a client certificate and prompts you to use it, make sure to decline; otherwise NiFi will use that certificate to authenticate you. There are many resources covering how to configure LDAP or Kerberos: https://pierrevillard.com/2017/01/24/integration-of-nifi-with-ldap/ http://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html
03-06-2017
06:10 PM
2 Kudos
Are you asking how to configure IBM MQ to use TLS/SSL? Or is your IBM MQ already configured for TLS/SSL and you want to know how to get NiFi to talk to it? For the latter, you need to configure the StandardSSLContextService with a truststore that trusts the certificate that IBM MQ is using. Basically this means that there is a certificate authority (CA) that signed the certificate IBM MQ is using, and you need a truststore that contains the public key of that CA so NiFi will trust IBM MQ.
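Under the hood, this is the standard JSSE pattern of building an SSLContext from a truststore, which is roughly what StandardSSLContextService does with its Truststore properties. A hedged sketch (the `null` truststore stream here just creates an empty in-memory truststore so the sketch runs; in practice you would pass a stream over your `truststore.jks` file containing the CA certificate):

```java
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TruststoreSketch {
    // Build an SSLContext from a JKS truststore, analogous to what
    // StandardSSLContextService is configured to do in NiFi.
    static SSLContext fromTruststore(InputStream truststore, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        // The truststore should hold the CA cert that signed IBM MQ's certificate.
        ks.load(truststore, password);
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // null stream = empty in-memory truststore, for illustration only.
        SSLContext ctx = fromTruststore(null, null);
        System.out.println(ctx.getProtocol());
    }
}
```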
03-06-2017
04:42 PM
1 Kudo
If there are specific error scenarios that we want to handle differently, we may want to add additional failure relationships, such as "failure_duplicate". That way the processor itself would detect the scenario and route the flow file to the appropriate relationship.
03-02-2017
08:06 PM
1 Kudo
You should use a MergeContent processor before PutHDFS to merge flow files together based on a minimum size.
03-01-2017
02:03 PM
1 Kudo
It is hard to tell without being able to see your code, but it seems like this is saying that in LookupProcessor you have a PropertyDescriptor that represents a Controller Service, and the class you specified is not the interface from nifi-customservice-api-nar, but instead the implementation from nifi-customservice-nar. As an example, InvokeHTTP uses SSLContextService: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java#L205 SSLContextService is the interface that comes from nifi-standard-services-api: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-ssl-context-service-api/src/main/java/org/apache/nifi/ssl/SSLContextService.java The implementation is StandardSSLContextService, which is in nifi-ssl-context-bundle: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-ssl-context-bundle/nifi-ssl-context-service/src/main/java/org/apache/nifi/ssl/StandardSSLContextService.java
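As a simplified, self-contained model of the api/implementation split (the class names below are hypothetical stand-ins, not the real NiFi API; in real NiFi the descriptor is built with `identifiesControllerService(...)`):

```java
// Simplified model of NiFi's controller-service pattern: processors must
// reference the service INTERFACE from the -api module, never the implementation.
public class ServiceApiDemo {
    // Lives in the "api" artifact (e.g. nifi-customservice-api-nar).
    interface MySSLContextService {
        String createContext();
    }

    // Lives in the implementation artifact (e.g. nifi-customservice-nar);
    // a processor should never reference this class directly.
    static class StandardMySSLContextService implements MySSLContextService {
        public String createContext() { return "context"; }
    }

    // The type a processor's property descriptor should identify:
    static Class<?> identifiedServiceType() {
        return MySSLContextService.class;          // correct: the interface
        // return StandardMySSLContextService.class; // wrong: the implementation
    }

    public static void main(String[] args) {
        System.out.println(identifiedServiceType().isInterface());
    }
}
```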
02-28-2017
08:12 PM
Makes sense. I think haproxy (http://www.haproxy.org/) is a free load balancer that supports TCP; your data producer can then just send to the haproxy address.
02-28-2017
03:35 PM
Ok, I'm going to assume ListenTCP is the entry point then; let me know if that is not the case. My thought is to reverse this a little bit, because right now if your first NiFi instance goes down, your data producer has nowhere to send the data: Data Producer -> Load Balancer (nginx supports TCP) -> NiFi cluster with each node running ListenTCP. Then have this cluster push the provenance data to a standalone NiFi instance that just puts it into HDFS. This way the second NiFi instance is not in the critical path of the real data and is only responsible for the provenance data. Depending on how important the provenance data is to you, you could make this a two-node cluster to ensure at least a minimum amount of failover.
02-28-2017
03:03 PM
1 Kudo
What protocol is the data producer using to push data to the first NiFi instance?