
Cloudera recently announced the release of Kubernetes Operators for Data in Motion. These Kubernetes Operators enable customers to deploy Apache NiFi, Apache Kafka, and Apache Flink clusters on Kubernetes application platforms such as Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes. With these Kubernetes Operators, customers can easily deploy end-to-end data streaming capabilities on their existing Kubernetes clusters and benefit from auto-scaling, efficient resource management, and streamlined setup and operations.

In this blog post, we will revisit the advantages of running Apache NiFi on Kubernetes and take a closer look at the architecture and deployment model when using the Cloudera Flow Management Kubernetes Operator.

Advantages of running Apache NiFi on Kubernetes

Apache NiFi is a powerful tool for building data movement pipelines using a visual flow designer. Hundreds of built-in processors make it easy to connect to any application and transform data structures or data formats. Since it supports both structured and unstructured data for streaming and batch integrations, Apache NiFi is a core component of modern data pipelines and multimodal Generative AI use cases. While it is incredibly versatile and easy to use, Apache NiFi often requires significant administrative overhead. The recent release of the Cloudera Flow Management Kubernetes Operator simplifies administration, improving development speed and resource utilization to deliver greater innovation at a lower Total Cost of Ownership (TCO).

Apache NiFi deployments typically start small, with a limited number of users and data flows, and they grow quickly once organizations realize how easy it is to implement new use cases. During this growth phase, organizations run into multi-tenancy issues, like resource contention and all of their data flows sharing the same failure domain. To overcome these challenges, organizations typically start creating isolated clusters to separate data flows based on business units, use cases, or SLAs. This adds significant overhead in terms of operations, maintenance, and more.

Depending on how the clusters are sized initially, organizations often need to add compute resources to keep up with the growing number of use cases and ever-increasing data volumes. While NiFi nodes can be added to an existing cluster, it is a multi-step process that requires organizations to set up constant monitoring of resource usage, detect when there is enough demand to scale, automate the provisioning of a new node with the required software, and configure security. Downscaling is even more complex because users must make sure that the NiFi node they want to decommission has processed all of its data and does not receive any new data, to avoid potential data loss. Implementing an automated scale-up and scale-down procedure for NiFi clusters is complex and time-consuming.

Finally, when running Apache NiFi on bare metal or virtual machines (VMs), losing a node makes its in-flight data unavailable for as long as the node is down. NiFi ensures no data is lost, but temporarily unavailable data can be an issue for many critical use cases. Users can solve this problem with the NiFi Stateless engine and proper flow design, but implementing that solution can be another challenge for the development team.

Ultimately, these challenges force NiFi teams to spend a lot of time on managing the cluster infrastructure instead of building new data flows, which slows down adoption.

Running NiFi on Kubernetes solves all of these challenges:

  • It is easy to scale a NiFi cluster up and down, either manually by updating the deployment configuration file, or automatically based on resource consumption using a Horizontal Pod Autoscaler (HPA).
  • It is easy to start new NiFi clusters in minutes and manage multiple NiFi deployments in one place. It is also possible to easily have many different NiFi versions to test new features, and users can separate use cases into dedicated NiFi clusters to ensure resource isolation.
  • It ensures data durability, even if a NiFi node goes down. Kubernetes takes care of provisioning a new node and moving the volumes from the dead node to the new node, ensuring uninterrupted processing of the in-flight data.

In addition to those advantages, the Cloudera Flow Management Kubernetes Operator brings several more features to the table:

  • It removes the Zookeeper dependency by doing leader election and state management in Kubernetes.
  • It supports rolling upgrades with NiFi 2.
  • It enables customers to bring their own Kubernetes cluster. Users don’t have to dedicate the Kubernetes cluster to Cloudera workloads, and there is no dependency on any other Cloudera product.

Diving into running Cloudera Flow Management on Red Hat OpenShift

The Cloudera Flow Management Operator is responsible for managing NiFi and NiFi Registry deployments. It is deployed within a designated operator namespace, while the actual NiFi and NiFi Registry instances are managed within one or more separate namespaces. The following diagram shows a typical NiFi deployment with Cloudera Flow Management Operator:

Cloudera Flow Management Operator architecture overview

Installing the Cloudera Flow Management Kubernetes Operator

To make things easy, Cloudera provides a CLI that helps with the installation of the operator in the Kubernetes cluster.

After connecting to the cluster:

 

$ oc login https://api….openshiftapps.com:6443/ -u myUser

 

Users must install cert-manager, which will be used to provision and sign the certificates for the NiFi and NiFi Registry resources.
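If the Jetstack Helm repository has not been added yet, register it first so the chart below can be resolved (the repository URL is the standard one published by the Jetstack project):

```shell
# Add the Jetstack repository that hosts the cert-manager chart,
# then refresh the local chart index.
helm repo add jetstack https://charts.jetstack.io
helm repo update
```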

 

$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true

 

You will want to configure cert-manager with a CA certificate of your choice to sign the provisioned certificates. You can also use a self-signed CA certificate for Proof-of-Concept and testing.
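For a proof-of-concept, a self-signed CA can be set up entirely with cert-manager resources. Below is a minimal sketch following the standard cert-manager CA-bootstrapping pattern; it uses the issuer name self-signed-ca-issuer, which matches the issuerRef used in the NiFi and NiFi Registry examples in this post:

```yaml
# Bootstrap issuer used only to self-sign the root CA certificate.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap
spec:
  selfSigned: {}
---
# Root CA certificate, stored as a secret in the cert-manager namespace.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: root-ca
  secretName: root-ca-secret
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
    group: cert-manager.io
---
# CA issuer that signs the NiFi and NiFi Registry node certificates.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: self-signed-ca-issuer
spec:
  ca:
    secretName: root-ca-secret
```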

Once that is done, and once you have downloaded the cfmctl CLI provided by Cloudera, you can easily install the operator:

 

$ kubectl create namespace cfm-operator-system
namespace/cfm-operator-system created

$ kubectl create secret docker-registry docker-pull-secret \
  --namespace cfm-operator-system \
  --docker-server container.repository.cloudera.com \
  --docker-username ****** \
  --docker-password ******

secret/docker-pull-secret created

$ ./cfmctl install --license <license file>

 

The section above shows how to create a new namespace ‘cfm-operator-system’ and a secret, which is configured with your Cloudera credentials and will be used to access the required container images. Once the namespace and secret are in place, all you have to do is run cfmctl and specify the license file, and it will install the operator on your cluster.
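To verify the installation, a generic Kubernetes check is enough: list the pods in the operator namespace and confirm the operator controller is up.

```shell
# List the pods in the operator namespace; the operator controller
# pod should reach the Running state within a minute or two.
kubectl get pods -n cfm-operator-system
```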

Deploying a NiFi cluster on Kubernetes

As described in the documentation, you can now write a YAML file that will describe your desired NiFi and NiFi Registry deployments. While examples of such files can be found in the documentation, let’s look at the specific parts that constitute a deployment file.

 

apiVersion: cfm.cloudera.com/v1alpha1
kind: Nifi
metadata:
  name: mynifi
  namespace: my-nifi-cluster

 

The section above defines the name that will be used for the NiFi cluster and the NiFi nodes, and it also specifies the namespace in which resources will be deployed. A single NiFi cluster should be deployed in a given namespace.

 

replicas: 3

 

The number of replicas defines the number of NiFi nodes in the NiFi cluster. Editing this value and submitting the updated file to the operator scales the NiFi cluster up or down. This is the manual approach to scaling the cluster.
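Assuming the resource and namespace names from this example, the same manual scaling can also be done without re-submitting the whole file, using a generic kubectl merge patch against the custom resource (a sketch, not a Cloudera-specific command):

```shell
# Scale the NiFi cluster from 3 to 5 nodes by patching the
# replicas field of the Nifi custom resource in place.
kubectl -n my-nifi-cluster patch nifi mynifi \
  --type merge -p '{"spec":{"replicas":5}}'
```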

 

  image:
    repository: container.repository.cloudera.com/cloudera/cfm-nifi-k8s
    tag: 2.8.0-b10
    pullPolicy: IfNotPresent
    pullSecret: docker-pull-secret
  tiniImage:
    repository: container.repository.cloudera.com/cloudera/cfm-tini
    tag: 2.8.0-b10
    pullPolicy: IfNotPresent
    pullSecret: docker-pull-secret

 

This section describes the images used for running NiFi. It provides a way to manually upgrade the NiFi version in an existing cluster or to quickly roll out NiFi clusters with new versions.

 

  persistence:
    size: 10Gi
    storageClass: nifi-storage-class
    contentRepo:
      size: 10Gi
      storageClass: nifi-storage-class
    flowfileRepo:
      size: 10Gi
      storageClass: nifi-storage-class
    provenanceRepo:
      size: 10Gi
      storageClass: nifi-storage-class

 

This section specifies the storage that will be used for the NiFi repositories. It can be defined globally or overridden for specific repositories. The storage classes must be defined at the OpenShift level to match the IOPS expectations of your NiFi workloads. NiFi requires persistent volumes, and the storage class must support read and write operations.
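The storage class referenced above must exist before the NiFi resource is applied. A minimal sketch of such a StorageClass, assuming the AWS EBS CSI driver (the provisioner and parameters depend entirely on your platform):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nifi-storage-class
# Hypothetical provisioner; replace it with the CSI driver
# available on your OpenShift cluster.
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
# Delay binding until a pod is scheduled, so the volume is
# created in the same availability zone as the NiFi node.
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```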

 

  security:
    ldap:
      authenticationStrategy: SIMPLE
      managerDN: "cn=admin,dc=example,dc=org"
      secretName: openldap-creds
      referralStrategy: FOLLOW
      connectTimeout: 3 secs
      readTimeout: 10 secs
      url: ldap://my.ldap.server.com:389
      userSearchBase: "dc=example,dc=org"
      userSearchFilter: "(uid={0})"
      identityStrategy: USE_USERNAME
      authenticationExpiration: 12 hours
      sync:
        interval: 1 min
        userObjectClass: inetOrgPerson
        userSearchScope: SUBTREE
        userIdentityAttribute: cn
        userGroupNameAttribute: ou
        userGroupNameReferencedGroupAttribute: ou
        groupSearchBase: "dc=example,dc=org"
        groupObjectClass: organizationalUnit
        groupSearchScope: OBJECT
        groupNameAttribute: ou
    initialAdminIdentity: nifiadmin
    nodeCertGen:
      issuerRef:
        name: self-signed-ca-issuer
        kind: ClusterIssuer

 

The security section contains the information that will be injected into NiFi's identity provider configuration file, as well as its authorizers configuration file. In this example, we retrieve the users and groups from an external LDAP server. The user nifiadmin is the initial admin of NiFi. This user can log into the NiFi UI and set the appropriate policies so that other users and groups can access the NiFi UI and use the NiFi deployment.

Don't forget to create the secret for storing the password of the user accessing LDAP:

 

kubectl -n my-nifi-cluster create secret generic openldap-creds --from-literal=managerPassword=Not@SecurePassw0rd

 

Finally, the security section also references the certificate issuer that should be used from cert-manager to issue certificates for all the provisioned resources.

 

  hostName: mynifi.openshiftapps.com
  uiConnection:
    type: Route
    serviceConfig:
      sessionAffinity: ClientIP
    routeConfig:
      tls:
        termination: passthrough

 

The section above defines how users access the NiFi UI. With the CFM Kubernetes Operator on OpenShift, access is provided through a Route.

 

  configOverride:
    nifiProperties:
      upsert:
        nifi.cluster.load.balance.connections.per.node: "1"
        nifi.cluster.load.balance.max.thread.count: "4"
        nifi.cluster.node.connection.timeout: "60 secs"
        nifi.cluster.node.read.timeout: "60 secs"
        nifi.cluster.leader.election.implementation: "KubernetesLeaderElectionManager"
    bootstrapConf:
      upsert:
        java.arg.2: -Xms2g
        java.arg.3: -Xmx2g
        java.arg.13: -XX:+UseConcMarkSweepGC
  stateManagement:
   clusterProvider:
     id: kubernetes-provider
     class: org.apache.nifi.kubernetes.state.provider.KubernetesConfigMapStateProvider

 

The section above provides a way to customize the NiFi deployment by overriding properties defined in the NiFi properties configuration file, as well as in the bootstrap configuration file to configure the amount of memory allocated to the NiFi heap. This example also includes the configuration specifying that Kubernetes should handle leader election and state management.

Finally, we can specify the resources (CPU and memory) that should be allocated to the NiFi containers:

 

  resources:
    nifi:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "2"
        memory: 4Gi
    log:
      requests:
        cpu: 50m
        memory: 128Mi

 

Cloudera recommends a 1:1:2 ratio between the number of cores, the amount of memory allocated to the NiFi heap, and the amount of memory for the NiFi container. In the example above, there are 2 cores, 2 GB of heap memory, and 4 GB of memory for the NiFi container.
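Applying the same 1:1:2 ratio to a larger node size, a 4-core configuration would look like this (a sketch combining the bootstrapConf and resources sections shown earlier):

```yaml
# 4 cores : 4 GB heap : 8 GB container memory (1:1:2 ratio)
configOverride:
  bootstrapConf:
    upsert:
      java.arg.2: -Xms4g
      java.arg.3: -Xmx4g
resources:
  nifi:
    requests:
      cpu: "4"
      memory: 8Gi
    limits:
      cpu: "4"
      memory: 8Gi
```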

For automatic scaling based on resource consumption, configure a Horizontal Pod Autoscaler (HPA):

 

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
 name: nifi-hpa
spec:
 maxReplicas: 3
 minReplicas: 1
 metrics:
   - type: Resource
     resource:
       name: cpu
       target:
         type: Utilization
         averageUtilization: 75
 scaleTargetRef:
   apiVersion: cfm.cloudera.com/v1alpha1
   kind: Nifi
   name: mynifi

 

Submit the deployment file to Kubernetes to deploy and/or update the NiFi cluster: 

 

$ kubectl create namespace my-nifi-cluster
namespace/my-nifi-cluster created

$ kubectl apply -f nifi.yaml
nifi.cfm.cloudera.com/mynifi created

$ kubectl apply -f nifi-hpa.yaml

 

The NiFi cluster is deployed and the initial administrator can access the NiFi UI.

Deploying a NiFi Registry instance on Kubernetes

Deploying a NiFi Registry instance is very similar to the process for deploying a NiFi cluster. The file below describes a NiFi Registry deployment:

 

apiVersion: cfm.cloudera.com/v1alpha1
kind: NifiRegistry
metadata:
    name: mynifiregistry
    namespace: nifi-registry
spec:
    image:
        repository: container.repository.cloudera.com/cloudera/cfm-nifiregistry-k8s
        tag: 2.8.0-b10
    tiniImage:
        repository: container.repository.cloudera.com/cloudera/cfm-tini
        tag: 2.8.0-b10
    hostName: mynifiregistry.openshiftapps.com
    uiConnection:
        type: Route
        routeConfig:
          tls:
            termination: passthrough
    security:
      initialAdminIdentity: nifiadmin
      nodeCertGen:
        issuerRef:
          name: self-signed-ca-issuer
          kind: ClusterIssuer
      ldap:
        authenticationStrategy: SIMPLE
        managerDN: "cn=admin,dc=example,dc=org"
        secretName: secret-openldap
        referralStrategy: FOLLOW
        connectTimeout: 3 secs
        readTimeout: 10 secs
        url: ldap://my.ldap.server.com:389
        userSearchBase: "dc=example,dc=org"
        userSearchFilter: "(uid={0})"
        identityStrategy: USE_USERNAME
        authenticationExpiration: 12 hours
        sync:
          interval: 1 min
          userObjectClass: inetOrgPerson
          userSearchScope: SUBTREE
          userIdentityAttribute: cn
          userGroupNameAttribute: ou
          userGroupNameReferencedGroupAttribute: ou
          groupSearchBase: "dc=example,dc=org"
          groupObjectClass: organizationalUnit
          groupSearchScope: OBJECT
          groupNameAttribute: ou

 

Similarly, the nifiadmin user is the initial admin user who can access the NiFi Registry UI and set the proper policies for users and groups.

Additionally, the admin should create proxy policies for the nodes of the NiFi clusters that interact with the NiFi Registry instance: in the NiFi Registry UI, add a user named CN=<your CR name>, O=Cluster Node for every cluster, and grant proxy permissions to each of these users.

 

$ kubectl create namespace nifi-registry
namespace/nifi-registry created

$ kubectl -n nifi-registry create secret generic secret-openldap --from-literal=managerPassword=Not@SecurePassw0rd

$ kubectl apply -f nifiregistry.yaml
nifiregistry.cfm.cloudera.com/mynifiregistry created

 

The screenshot below shows the user and permissions required for NiFi to successfully interact with the NiFi Registry instance:

Create this user and the associated policies in the NiFi Registry to enable NiFi requests

Putting everything together

The following screenshots show access to the NiFi cluster, where a process group can be version-controlled in the NiFi Registry instance:

The NiFi canvas with a version-controlled process group running on Kubernetes

NiFi Registry running on Kubernetes with one bucket where flows are stored and versioned

When scaling down the NiFi cluster, the operator executes all of the required steps in the right order to ensure no data is lost, and the data is properly offloaded from the removed nodes onto the remaining nodes.

Conclusion

The Cloudera Flow Management Kubernetes Operator makes it easy to deploy NiFi clusters and NiFi Registry instances on an OpenShift cluster. Running your data flows at scale on Kubernetes is now only a few commands away.

For a demonstration, watch our release webinar and visit the product documentation to learn more.
