Member since: 05-17-2017
Posts: 23
Kudos Received: 1
Solutions: 0
10-04-2017
06:45 AM
I have a NiFi cluster with 3 nodes. Currently I'm using a ListenWebSocket processor with a JettyWebSocketServer listening on port 9001 to receive JSON data from multiple clients. I researched load balancing for ListenXXX processors in NiFi, and most posts suggest putting an HA proxy between the clients and the NiFi cluster. I have also worked with ListXXX/FetchXXX processors, which can be load-balanced by running the List processor on the primary node only and distributing its output through a Remote Process Group. So I am wondering: can a Remote Process Group be used the same way with ListenXXX processors such as ListenWebSocket? Clients would send data to one node of the NiFi cluster, and that node would forward the data through a Remote Process Group to balance the load across all nodes. Would this cause any duplicated or lost data?
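Since most suggestions point to an HA proxy in front of the cluster, here is a minimal sketch of what that could look like with HAProxy in TCP mode, so client WebSocket connections are spread across the three ListenWebSocket listeners (the node hostnames below are assumptions, not from my setup):

```
# Hypothetical HAProxy fragment: TCP load balancing of WebSocket clients
# across three NiFi nodes running ListenWebSocket on port 9001.
frontend websocket_in
    bind *:9001
    mode tcp
    default_backend nifi_ws

backend nifi_ws
    mode tcp
    balance roundrobin
    server nifi-1 nifi-node-1:9001 check
    server nifi-2 nifi-node-2:9001 check
    server nifi-3 nifi-node-3:9001 check
```

With this approach each client connection lands on exactly one node, so there is no forwarding hop inside NiFi that could duplicate or drop data.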
Labels:
- Apache NiFi
09-01-2017
07:21 AM
@Matt Clarke Thanks for your reply. I followed the second option, but I had to remove the queued data content on the disconnected node before restarting it. I found that the node disconnected because its queue overflowed while executing the job. Can the queue size be configured to hold more data, and if so, how? Please help me if you have a solution for this queue-overflow problem. Thanks, Kiem
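For reference, queue capacity in NiFi is controlled per connection through back pressure settings on the connection's Settings tab, not through a single global property. A sketch of the two relevant settings (the values shown are the NiFi defaults; raising them lets a connection buffer more FlowFiles before upstream processors are paused):

```
# Per-connection back pressure settings (connection -> Configure -> Settings)
Back Pressure Object Threshold    : 10000    # max number of queued FlowFiles
Back Pressure Data Size Threshold : 1 GB     # max total size of queued content
```

Note that raising these thresholds only delays back pressure; if the downstream processor is permanently slower than the source, the queue will still eventually fill.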
08-31-2017
08:38 AM
I have a NiFi cluster with 3 nodes: node-1, node-2, node-3. When I ran a job on the cluster, some errors occurred and node-2 disconnected from the cluster. I then went to the admin UI of node-1 or node-3 to stop the job, but I could not stop it. It reports: Cluster is unable to service request to change flow: Node node-2:8092 is currently disconnected
Labels:
- Apache NiFi
06-14-2017
02:49 AM
@Wynner
Yes, I think the problem is the number of concurrent tasks configured on the ConsumeKafka_0_10 processors. I will accept your answer since you identified the right problem. But can you help me configure the number of concurrent tasks for my case? I have 3 PublishKafka_0_10 processors A, B, and C: A pushes data to topic aa, B to topic bb, and C to topic cc. Each PublishKafka_0_10 processor keeps the default of 1 concurrent task. As you can see, my Kafka cluster has 4 partitions for each of the 3 topics. Then I have 3 ConsumeKafka_0_10 processors D, E, and F: D receives data from topic aa, E from topic bb, and F from topic cc. How many concurrent tasks should I configure for each of D, E, and F? Please help me understand the relationship between partitions and concurrent tasks. Thank you so much!
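The key rule is that each Kafka partition is consumed by at most one consumer in a consumer group at a time, and in a NiFi cluster the total number of consumers for one ConsumeKafka processor is (number of nodes) × (concurrent tasks per node). A small sketch of how 4 partitions spread across those consumers (round-robin assignment here is a simplification of Kafka's actual assignor, used only to illustrate the counts):

```javascript
// Round-robin sketch of Kafka partition assignment within one consumer group.
// Total consumers = NiFi nodes * concurrent tasks per node.
function partitionsPerConsumer(partitionCount, nodes, concurrentTasks) {
  var consumers = nodes * concurrentTasks;
  var counts = [];
  for (var i = 0; i < consumers; i++) counts.push(0);
  for (var p = 0; p < partitionCount; p++) counts[p % consumers]++;
  return counts;
}

// 4 partitions, 3 nodes, 1 concurrent task -> [2, 1, 1]: all consumers busy
console.log(partitionsPerConsumer(4, 3, 1));
// 4 partitions, 3 nodes, 2 concurrent tasks -> [1, 1, 1, 1, 0, 0]: two idle
console.log(partitionsPerConsumer(4, 3, 2));
```

So with 4 partitions and 3 nodes, the default of 1 concurrent task already keeps every consumer busy; pushing total consumers above 4 per topic only creates idle consumers, never extra throughput.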
06-13-2017
07:09 AM
@Wynner I have 3 topics in the Kafka cluster: aa, bb, cc. I used this command to check the number of partitions; each topic appears to have 4 partitions:
./bin/kafka-topics.sh --describe --zookeeper 10.42.53.16:2181,10.42.53.17:2181,10.42.53.18:2181 --topic aa
Result:
Topic:aa PartitionCount:4 ReplicationFactor:1 Configs:
Topic: aa Partition: 0 Leader: 17 Replicas: 17 Isr: 17
Topic: aa Partition: 1 Leader: 18 Replicas: 18 Isr: 18
Topic: aa Partition: 2 Leader: 17 Replicas: 17 Isr: 17
Topic: aa Partition: 3 Leader: 18 Replicas: 18 Isr: 18
The result is the same for topics bb and cc. However, the message loss only occurs in the ConsumeKafka_0_10 processors that receive data from topic aa or bb. The ConsumeKafka_0_10 processor that receives data from topic cc always receives everything.
06-09-2017
06:25 AM
1 Kudo
I have 1 producer (a PublishKafka_0_10 processor) and 1 consumer (a ConsumeKafka_0_10 processor) receiving FlowFiles from the Kafka cluster. In the NiFi admin UI, the producer's total out is 7 messages, but the consumer receives only 4. I also used kafka-console-consumer.sh to view the messages from the producer, and it displays all 7. I don't know why or where I lost 3 messages in the ConsumeKafka_0_10 processor. I use a 3-node Kafka cluster and a 3-node NiFi cluster.
Labels:
- Apache Kafka
- Apache NiFi
06-06-2017
08:06 AM
@Kiran Hebbar Hi Kiran Hebbar, I think you can use a StreamCallback to read the content of a FlowFile; you can then modify the content or add more attributes to the FlowFile. Here is an example from Matt Burgess using JavaScript:
http://funnifi.blogspot.com/2016/03/executescript-json-to-json-revisited.html Hope that helps, Thanks,
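A minimal sketch along those lines (this is my own illustration, not Matt's exact code; the field name added to the JSON is a made-up example): a pure function does the JSON change, and the commented wiring shows how it plugs into the `session` and `StreamCallback` objects that NiFi injects into ExecuteScript:

```javascript
// Hypothetical JSON transform: the field added here is only an example.
function transform(obj) {
  obj.processedBy = "nifi"; // modify the parsed JSON content
  return obj;
}

// Inside NiFi's ExecuteScript processor (JavaScript engine), `session` and
// REL_SUCCESS are injected, so the function above would be wired like this:
//
//   var StreamCallback = Java.type("org.apache.nifi.processor.io.StreamCallback");
//   var IOUtils = Java.type("org.apache.commons.io.IOUtils");
//   var Charsets = Java.type("java.nio.charset.StandardCharsets");
//
//   var flowFile = session.get();
//   if (flowFile != null) {
//     flowFile = session.write(flowFile, new StreamCallback(function (inStream, outStream) {
//       var json = JSON.parse(IOUtils.toString(inStream, Charsets.UTF_8));
//       var out = JSON.stringify(transform(json));
//       outStream.write(new java.lang.String(out).getBytes(Charsets.UTF_8));
//     }));
//     flowFile = session.putAttribute(flowFile, "transformed", "true");
//     session.transfer(flowFile, REL_SUCCESS);
//   }
```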
06-05-2017
04:08 AM
It works well. Thanks Matt!!! Hadoop Configuration Resources property: point it to core-site.xml and hdfs-site.xml after copying them onto each NiFi node.
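As a concrete sketch (the file paths and HDFS directory below are assumptions for a typical install, not from my cluster), the processor properties end up looking like:

```
# Hypothetical GetHDFS/ListHDFS property values on each NiFi node
Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Directory                      : /data/input    # HDFS directory to pull files from
```

Both XML files must exist at the same local path on every node, since any node in the cluster may run the processor.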
06-02-2017
03:07 AM
I have a NiFi cluster on machines A, B, and C, and a standalone HDFS on machine D. Now I want to get all files from the HDFS server and pull them into the NiFi cluster to run jobs on them. Please help me configure this dataflow and set up the properties on the NiFi processors. Thanks,
Labels:
- Apache Hadoop
- Apache NiFi
05-17-2017
10:06 AM
Thanks @Pierre Villard, that is exactly what I wanted: use an RPG pointing to the cluster itself.