Introduction

NiFi Site-to-Site (S2S) is a communication protocol used to exchange data between NiFi instances or clusters. This protocol is useful for use cases where geographically distributed clusters need to communicate. Examples include:

  • IoT: collect data from edge nodes (MiNiFi) and send it to NiFi for aggregation/storage/analysis
  • Connected cars: collect data locally by city or country with a local HDF cluster, and send it back to a global HDF cluster in the core data center
  • Replication: synchronization between two HDP clusters (on-prem/cloud or primary/DR)

S2S provides several benefits such as scalability, security, load balancing and high availability. More information can be found in the Apache NiFi documentation.
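Under the hood, each node advertises its S2S endpoint through a handful of entries in nifi.properties. As a point of reference, here is a minimal sketch of that block (hostname and port are illustrative values, not ones from this tutorial):

    # nifi.properties - Site-to-Site settings (example values)
    nifi.remote.input.host=hdfcluster0
    nifi.remote.input.secure=true         # true on an SSL-enabled cluster
    nifi.remote.input.socket.port=10443   # RAW transport port (example)
    nifi.remote.input.http.enabled=true   # allow S2S over the HTTP(S) API port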

Context

NiFi can be secured by enabling SSL and requiring users/nodes to authenticate with certificates. However, in some scenarios, customers have secured and unsecured NiFi clusters that need to communicate. The objective of this tutorial is to show two approaches to achieving this. Discussions on running secured and unsecured NiFi clusters in the same application are outside the scope of this tutorial.

Prerequisites

Let's assume that we have already installed an unsecured HDF cluster (Cluster2) that needs to send data to a secured cluster (Cluster1).

Cluster1 is a 3-node NiFi cluster with SSL: hdfcluster0, hdfcluster1 and hdfcluster2. Note the HTTPS in the URLs, as well as the connected user 'ahadjidj'.

13557-secure-cluster.png

13558-cluster-1-nodes.png

Cluster2 is also a 3-node NiFi cluster, but without SSL enabled: hdfcluster20, hdfcluster21 and hdfcluster22.

13559-unsecure-cluster.png
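For reference, the difference between the two clusters boils down to a few entries in each node's nifi.properties. A sketch with example hosts and ports (your values may differ):

    # Cluster1 node (secured): web UI and S2S over HTTPS
    nifi.web.https.host=hdfcluster0
    nifi.web.https.port=9091
    nifi.remote.input.secure=true

    # Cluster2 node (unsecured): plain HTTP
    nifi.web.http.host=hdfcluster20
    nifi.web.http.port=9090
    nifi.remote.input.secure=false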

Option 1: the lazy option

The easiest way to get data from Cluster2 to Cluster1 is to use a pull method. In this approach, Cluster1 uses a Remote Process Group (RPG) to pull data from Cluster2. We will configure the RPG to use HTTP, so no special configuration is required. However, data will go unencrypted over the network. Let's see how to implement this.

Step 1: configure Cluster2 to generate data

  • The easiest way to generate data in Cluster2 is to use a GenerateFlowFile processor. Set the File Size to something different from 0 and the Run Schedule to 60 sec
  • Add an output port to the canvas and call it 'fromCluster2'
  • Connect and start the two processors
  • At this point, we can see data being generated and queued before the output port

13560-cluster-2-flow-1.png
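To double-check that Cluster2 exposes the port over S2S, you can query the S2S details endpoint of any Cluster2 node directly; port 9090 here is an assumed HTTP port, adjust it to your setup:

    # Returns the cluster's Site-to-Site details, including available ports
    curl http://hdfcluster20:9090/nifi-api/site-to-site

The JSON response should list the 'fromCluster2' output port.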

Step 2: configure Cluster1 to pull data

  • Add an RPG and configure it with the HTTP addresses of the three Cluster2 nodes (see the URL sketch after this list). Use HTTP as the Transport Protocol and enable transmission.
  • Add a PutFile processor to grab the data. Connect the RPG to the PutFile and choose the 'fromCluster2' output port when prompted.
  • Right-click on the RPG and activate the toggle next to 'fromCluster2'
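The RPG URL field accepts a comma-separated list, so listing all three nodes gives the RPG alternative peers to contact if one node goes down. Assuming Cluster2 serves HTTP on port 9090, the field would look like this:

    http://hdfcluster20:9090/nifi,http://hdfcluster21:9090/nifi,http://hdfcluster22:9090/nifi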

13571-rpg.png

13572-toggle.png

We should see flow files coming from the RPG and buffering before the PutFile processor.

13573-cluster1-flow-1.png

Option 2: the secure option

The first approach was easy to configure, but data is sent unencrypted over the wire. If we want to leverage SSL and encrypt data in transit between the two clusters, we need to generate and use certificates for each node in Cluster2. The only difference from a fully secured cluster is that we don't activate SSL on Cluster2 itself.

Step 1: generate and add Cluster2 certs

I assume that you already know how to generate certificates for the CA/nodes and add them to the TrustStore/KeyStore. If not, there are several HCC articles that explain how to do it.
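If you want a quick way to produce them, the NiFi TLS toolkit can generate a CA plus one KeyStore/TrustStore pair per node in a single command. A sketch (the output directory is illustrative):

    # Generate a CA and per-node keystores/truststores for the Cluster2 nodes
    tls-toolkit.sh standalone \
        -n 'hdfcluster20,hdfcluster21,hdfcluster22' \
        -o /tmp/cluster2-certs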

We need to configure Cluster2 with its certificates:

  • Upload each node's certificate to that node and add it to the KeyStore (e.g. keystore.pfx). Also set the KeyStore type and password.
  • Upload the CA (Certificate Authority) certificate to each node and add it to the TrustStore (e.g. truststore.jks). Also set the TrustStore type and password.

13574-certconf-cluster-2.png
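In nifi.properties terms (the Ambari fields above map to these entries), the result should look roughly like the following; paths and passwords are placeholders:

    nifi.security.keystore=/etc/nifi/conf/keystore.pfx
    nifi.security.keystoreType=PKCS12
    nifi.security.keystorePasswd=<keystore password>
    nifi.security.truststore=/etc/nifi/conf/truststore.jks
    nifi.security.truststoreType=JKS
    nifi.security.truststorePasswd=<truststore password>

A quick sanity check of each store with keytool before restarting NiFi can save a round of debugging:

    keytool -list -keystore keystore.pfx -storetype PKCS12
    keytool -list -keystore truststore.jks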

Step 2: configure Cluster2 to push data to Cluster1

In Cluster1, add an input port (toCluster1) and connect it to a PutFile processor.

13575-cluster-1-flow-2.png

Use a GenerateFlowFile processor to generate data in Cluster2 and an RPG to push data to Cluster1. This time, we use HTTPS addresses when configuring the RPG.

13576-rpg-2.png

Cluster2 should now be able to send data to Cluster1 via the toCluster1 input port. However, the RPG shows a Forbidden error.

13577-forbidden.png
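You can reproduce this check outside NiFi: presenting one of Cluster2's certificates to Cluster1's S2S endpoint should return HTTP 403 until the policies below are in place. A sketch, assuming you have exported a node's key and certificate to PEM and that Cluster1 serves HTTPS on port 9091:

    # Expect HTTP 403 Forbidden before the policies are added
    curl -v \
        --cert hdfcluster20.pem --key hdfcluster20.key \
        --cacert ca.pem \
        https://hdfcluster0:9091/nifi-api/site-to-site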

Step 3: add policies to authorize Cluster2 to use the S2S protocol

The previous error is triggered because nodes belonging to Cluster2 are not authorized to access Cluster1 resources. To solve the problem, let's make the following configuration changes:

1) Go to the Users menu in Cluster1 and add a user for each node from Cluster2

13578-menu.png

13579-users.png
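The user identities must match the certificate DNs exactly, character for character. For certificates generated as sketched earlier, the three users would look like this (the OU is an assumption based on the TLS toolkit defaults):

    CN=hdfcluster20, OU=NIFI
    CN=hdfcluster21, OU=NIFI
    CN=hdfcluster22, OU=NIFI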

2) Go to the Policies menu in Cluster1, and add each node from Cluster2 to the 'retrieve site-to-site details' policy

13580-s2s-details.png

At this point, the RPG in Cluster2 is working; however, the input port is not visible yet.

13581-holla.png

3) The last step is editing the input port policy in Cluster1 to authorize nodes from Cluster2 to send data through S2S. Select the toCluster1 input port and click on the key icon to edit its policies. Add the Cluster2 nodes to the list.

13582-policies.png

13583-done.png

4) Now, go back to Cluster2 and connect the GenerateFlowFile with the RPG. The input port should be visible and data starts flowing "securely" 🙂

13584-yay1.png

13585-yay2.png
