Created 06-28-2016 05:28 PM
Hi All,
I have a 3-node NiFi cluster (1 master and 2 slaves) and a 3-node Hadoop cluster. If I create a dataflow to send or receive data to HDFS:
1. How does NiFi work internally?
2. Does NiFi use MapReduce?
3. Does it use any load-balancing algorithms?
4. How do the master and slaves coordinate with each other?
5. Does NiFi store any data internally?
Please explain with a simple architecture.
Created 06-28-2016 05:56 PM
NiFi is not built on top of Hadoop and therefore does not use MapReduce or any other processing platform. NiFi is a dataflow tool for moving data between systems and performing simple event processing, routing, and transformations. Each node in a NiFi cluster runs the same flow, and it is up to the designer of the flow to partition the data across the NiFi cluster.
This presentation shows strategies for how to divide the data across your cluster:
http://www.slideshare.net/BryanBende/data-distribution-patterns-with-apache-nifi
This presentation has an architecture diagram of what a cluster looks like with the internal repositories (slide 17):
http://www.slideshare.net/BryanBende/nj-hadoop-meetup-apache-nifi-deep-dive
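To make the point about partitioning concrete: since every node runs the same flow, the flow designer decides how work is split, for example by listing data on one node and distributing the listings to the others. The sketch below is only an illustration of the general idea of deterministically assigning items to nodes; it is not NiFi's API, and the node names are hypothetical.

```python
import hashlib

def assign_node(item_id: str, nodes: list) -> str:
    """Deterministically map an item to one cluster node by hashing
    its identifier. Illustrative only -- NiFi distributes data through
    flow design patterns (e.g. list/fetch), not through this function."""
    digest = int(hashlib.sha256(item_id.encode("utf-8")).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Hypothetical 3-node cluster, matching the setup in the question.
nodes = ["nifi-node-1", "nifi-node-2", "nifi-node-3"]
for name in ["file-a.csv", "file-b.csv", "file-c.csv"]:
    print(name, "->", assign_node(name, nodes))
```

Because the assignment is a pure function of the item's identifier, every node evaluating the same flow logic would route a given item to the same place, which is the property a designer relies on when spreading load across a cluster.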