Member since: 09-29-2015
Posts: 871
Kudos Received: 723
Solutions: 255

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4258 | 12-03-2018 02:26 PM |
| | 3199 | 10-16-2018 01:37 PM |
| | 4305 | 10-03-2018 06:34 PM |
| | 3164 | 09-05-2018 07:44 PM |
| | 2423 | 09-05-2018 07:31 PM |
10-24-2016
08:02 PM
Are you saying that Kafka is kerberized? Currently you have the Security Protocol set to PLAINTEXT, which means an unsecured Kafka.
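If the broker really is kerberized, the processor would need SASL instead. A minimal sketch of what that might look like in NiFi 1.0; the keytab path, principal, and java.arg index are assumptions, not values from this thread:

```
# ConsumeKafka / PutKafka processor properties (sketch):
#   Security Protocol     = SASL_PLAINTEXT
#   Kerberos Service Name = kafka

# NiFi needs a JAAS config supplied via conf/bootstrap.conf, e.g.:
#   java.arg.15=-Djava.security.auth.login.config=/opt/nifi/conf/kafka-jaas.conf

# /opt/nifi/conf/kafka-jaas.conf (hypothetical keytab and principal):
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/nifi.keytab"
    principal="nifi@EXAMPLE.COM";
};
```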
10-24-2016
07:29 PM
The toolkit produces a nifi.properties with the keystore and truststore, and that nifi.properties has the keystore and truststore passwords filled in. You should use the toolkit to generate a client cert (p12), load it into your browser, and use that to access NiFi.
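A sketch of generating a client cert with the TLS toolkit in standalone mode; the hostname, DN, and output directory below are examples, not values from this thread:

```
# Generate server keystores/truststores plus a client certificate.
# The -C value is an example DN; use your own identity.
./bin/tls-toolkit.sh standalone -n 'nifi-host1' -C 'CN=admin,OU=NIFI' -o ./target

# The toolkit writes a client .p12 plus a .password file containing its
# password; import the .p12 into your browser to reach the NiFi UI.
```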
10-24-2016
06:45 PM
2 Kudos
This is a known issue when Phoenix is installed: https://issues.apache.org/jira/browse/NIFI-1712 https://github.com/apache/nifi/pull/1156 The only workaround currently is to copy phoenix-client.jar to the appropriate nifi/work/nar directory to get it on the HBase processor's classpath.
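A hedged sketch of that workaround; the Phoenix client path, NiFi install path, and NAR directory name below are examples and will differ by install and version:

```
# Stop NiFi, then drop the Phoenix client jar into the unpacked HBase client
# service NAR's bundled dependencies (paths are examples only):
cp /usr/hdp/current/phoenix-client/phoenix-client.jar \
   /opt/nifi/work/nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.0.0.nar-unpacked/META-INF/bundled-dependencies/

# Restart NiFi so the HBase processors pick up the jar on their classpath.
```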
10-24-2016
01:55 PM
Some of these properties are a little misleading and were renamed in 1.0 to make them clearer: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#kerberos_properties I think if you are calling an external script you would need to handle renewing the Kerberos tickets yourself. You could set up a GenerateFlowFile processor on a timer schedule that feeds an ExecuteStreamCommand which just does a kinit every couple of hours.
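A sketch of that renewal flow; the keytab path and principal are placeholders, not values from this thread:

```
# GenerateFlowFile: Timer driven, Run Schedule = 4 hours (example interval)

# ExecuteStreamCommand:
#   Command Path      = /usr/bin/kinit
#   Command Arguments = -kt;/etc/security/keytabs/nifi.keytab;nifi@EXAMPLE.COM
# (ExecuteStreamCommand splits arguments on ';' by default.)

# Equivalent shell command run by the processor:
kinit -kt /etc/security/keytabs/nifi.keytab nifi@EXAMPLE.COM
```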
10-24-2016
01:48 PM
You would have as many listeners as cluster nodes, so you would need to route the traffic to each node appropriately; one option is a load balancer in front that supports TCP or UDP. The concurrent tasks setting only affects processing of the messages that have already been read by the listener.
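As one hedged illustration (hostnames and port are made up), an HAProxy TCP front end could fan traffic out to a listener running on each node:

```
# haproxy.cfg sketch: distribute TCP traffic across the listener on every node.
frontend tcp_in
    bind *:5140
    mode tcp
    default_backend nifi_listeners

backend nifi_listeners
    mode tcp
    balance roundrobin
    server nifi1 nifi-host1:5140 check
    server nifi2 nifi-host2:5140 check
    server nifi3 nifi-host3:5140 check
```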
10-24-2016
01:45 PM
4 Kudos
LogAttribute relies on the logback configuration that everything in NiFi uses for logging (logback.xml in conf). Right now it goes into nifi-app.log, which is the default appender, but you could probably create a new file appender for something like "nifi-logattribute.log" and then modify this line:

<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>

To be something like:

<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO">
    <appender-ref ref="LOGATTRIBUTE_FILE" />
</logger>

This assumes you defined a file appender named LOGATTRIBUTE_FILE.
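A sketch of what that appender definition might look like in conf/logback.xml; the file name, rollover settings, and pattern are just examples:

```
<appender name="LOGATTRIBUTE_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/nifi-logattribute.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/nifi-logattribute_%d.log</fileNamePattern>
        <maxHistory>5</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
<!-- Optionally add additivity="false" on the LogAttribute logger if you do
     not also want these messages to keep going to nifi-app.log. -->
```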
10-21-2016
10:43 AM
1 Kudo
I think the issue might be a typo in:

nifi.zookeeper.connect.string=apsrt3391:2181,apsrt3390:2181,apsrt3401:2181

In other places you had apsrt3402, not apsrt3401. I'm also not totally sure about the setup of having one certificate that contains all the server names. The DN is the distinguished name and is usually different per server, and each server would have a keystore containing the certificate for that server's DN. In your example the DN should not be ccc.com; it should be something like:

CN=apsrt3391, OU=...
CN=apsrt3390, OU=...
CN=apsrt3402, OU=...

And each of them needs to be listed as a node identity in authorizers.xml.
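A sketch of the corresponding entries for the file-based authorizer in authorizers.xml; the OU and admin identity are examples, and each value must match the certificate DN exactly, including spacing:

```
<authorizer>
    <identifier>file-provider</identifier>
    <class>org.apache.nifi.authorization.FileAuthorizer</class>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
    <property name="Node Identity 1">CN=apsrt3391, OU=NIFI</property>
    <property name="Node Identity 2">CN=apsrt3390, OU=NIFI</property>
    <property name="Node Identity 3">CN=apsrt3402, OU=NIFI</property>
</authorizer>
```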
10-20-2016
06:32 PM
3 Kudos
I believe you can do what you described. You would select "User-Defined" as the partition strategy, which means it will send to the partition provided in the "Partition" property. Since "Partition" supports expression language, this could be a reference to an attribute of an incoming flow file, such as ${partition}. If I am understanding your scenario correctly, let's say your flow files have a "type" attribute that can be either "A" or "B", and you want all "A" messages to go to partition 1 and all "B" messages to go to partition 2. You could use a RouteOnAttribute processor to route "A" messages to one relationship and "B" messages to the other, and then for each of them have an UpdateAttribute processor that adds a new attribute like partition = 1 or partition = 2, so that when the flow file reaches PutKafka you can reference ${partition}.
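A sketch of those processor properties using NiFi expression language; the attribute name and partition values just mirror the hypothetical A/B example above:

```
# RouteOnAttribute (user-added dynamic properties become relationships):
#   is_A = ${type:equals('A')}
#   is_B = ${type:equals('B')}

# UpdateAttribute on the is_A branch:   partition = 1
# UpdateAttribute on the is_B branch:   partition = 2

# PutKafka:
#   Partition Strategy = User-Defined
#   Partition          = ${partition}
```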
10-20-2016
05:34 PM
1 Kudo
Hello, it looks like one of the nodes just can't connect to ZooKeeper. In my example everything was local and there was only one embedded ZK, which isn't really a production scenario, so I assume you have something slightly different. Can you describe the ZooKeeper setup a little bit? Are you running embedded ZK? If so, how many ZK instances, and how many nodes are in the NiFi cluster?
10-20-2016
12:52 PM
Correct. If you truly only want to run it once, then make the timer schedule larger and just manually start and stop the processor.