Member since 02-07-2020 · 4 Posts · 0 Kudos Received · 0 Solutions
11-06-2020 07:24 AM
Hi, I'm having issues with ConsumeKafka2.0: I set Max Poll Records to 10k, but it never consumes more than ~1k records per batch, even with a long Max Uncommitted Time (e.g. 60 s). Is there any other configuration I should change to get larger batches?
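For reference, this is the standalone sanity check I've been thinking of running with a plain kafka-python consumer, to see whether the ~1k ceiling comes from the broker/fetch settings rather than from NiFi itself. The broker address, topic and group id below are placeholders, and the fetch sizes are just values I'd try, not something I've confirmed:

from kafka import KafkaConsumer

# Placeholders: broker, topic and group id are not my real values.
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="broker:9092",
    group_id="poll-size-test",
    auto_offset_reset="earliest",
    enable_auto_commit=False,
    max_poll_records=10000,              # same limit I set on ConsumeKafka2.0
    max_partition_fetch_bytes=10485760,  # 10 MB per partition per fetch
    fetch_max_bytes=52428800,            # 50 MB total per fetch
)

# One poll; count how many records actually come back in a single batch.
batch = consumer.poll(timeout_ms=5000)
total = sum(len(records) for records in batch.values())
print("records returned in one poll:", total)
consumer.close()

If this also tops out around ~1k records, I'd guess the limit is on the Kafka fetch side (fetch.max.bytes / max.partition.fetch.bytes) rather than on Max Poll Records; if it returns the full 10k, the limit is somewhere in the NiFi processor.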
11-06-2020 07:22 AM
Hi all, I have a 3-node NiFi cluster that consumes data from multiple Kafka topics. The issue I'm facing is that the cluster won't balance properly: one node ends up consuming from most of the partitions while the other two consume from few or none at all. This hurts the consumption rate and eventually clogs the pipeline. Here's the current situation:
- Kafka topic with 12 partitions
- 3 nodes running ConsumeKafka2.0 with 4 concurrent tasks each (so that each task handles one partition)
- 7 of the 12 partitions handled by node3
- 4 of the 12 partitions handled by node2
- 1 of the 12 partitions handled by node1
I have no idea why the assignment ends up like this. After a restart each task starts off handling a different partition, but eventually it drifts, and it's unmanageable if I always have to check and restart manually.
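To make sure I understand what the assignment should look like, I put together this rough kafka-python sketch that opens 12 consumers in one throwaway test group (the same count as 3 nodes x 4 concurrent tasks) and prints which partitions each one gets. The broker, topic and group id are placeholders, and I use a separate test group on purpose so it doesn't trigger a rebalance of the real NiFi group:

from kafka import KafkaConsumer

BROKER = "broker:9092"      # placeholder
TOPIC = "my-topic"          # placeholder: the 12-partition topic
GROUP = "assignment-test"   # throwaway group, NOT the NiFi group

# 3 nodes x 4 concurrent tasks = 12 consumers in the group.
consumers = [
    KafkaConsumer(TOPIC,
                  bootstrap_servers=BROKER,
                  group_id=GROUP,
                  enable_auto_commit=False)
    for _ in range(12)
]

# A few poll rounds so every consumer joins the group and the
# rebalance settles before we look at the assignments.
for _ in range(3):
    for c in consumers:
        c.poll(timeout_ms=1000)

for i, c in enumerate(consumers):
    parts = sorted(tp.partition for tp in c.assignment())
    print("consumer %2d -> partitions %s" % (i, parts))

for c in consumers:
    c.close()

With 12 consumers and 12 partitions I'd expect one partition per consumer, which is what I want NiFi's 12 tasks to end up with as well.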
02-17-2020 03:18 AM
Hi, can you explain how you solved this problem?
02-07-2020 06:19 AM
Hi!
The issue I'm trying to solve is: tail the nifi-app.log files and, given the ID of the processor that had a failure, get the name of its process group.
Alternatively, if there's a way to write the name of the group to the logs (e.g. using the ScriptedReportingTask), I would appreciate it even more!
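For the ScriptedReportingTask route, this is the kind of script I had in mind (Script Engine set to python, i.e. Jython). It walks the process group tree from the ReportingContext and logs every processor id together with its group path, so the id that shows up in nifi-app.log can be matched to a group name. I haven't confirmed this works, so treat it as a sketch; 'context' and 'log' are the variables the task binds, the rest is my own guess:

# Body of a ScriptedReportingTask (Script Engine = python / Jython).
# 'context' (ReportingContext) and 'log' (ComponentLog) are bound by NiFi.

def walk(group_status, path):
    group_path = path + "/" + group_status.getName()
    # One log line per processor: id -> group path, so the processor id
    # found in nifi-app.log can be looked up against these lines.
    for proc in group_status.getProcessorStatus():
        log.info("processor {} ({}) is in group {}".format(
            proc.getId(), proc.getName(), group_path))
    # Recurse into child process groups.
    for child in group_status.getProcessGroupStatus():
        walk(child, group_path)

root = context.getEventAccess().getControllerStatus()
walk(root, "")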