
Part 2 of Autoscaling MiNiFi on K8S is focused on deploying the artifacts on EKS, the Amazon Elastic Kubernetes Service.  My knee-jerk reaction was that every Kubernetes-as-a-Service offering would play well with the others, but that is definitely not the case, which is why GCP's Anthos product direction in this space is key.  The net-net of my observation is that a k8s app deployment built for any single cloud vendor introduces complexities for any other k8s deployment, cloud or on-prem.  The vendor lock-in theory is ALIVE and WELL.  In Azure I leveraged ACS for EFM and NiFi Registry; however, the natural evolution was to deploy EFM and NiFi Registry (NR) on K8S.  EFM, NR, and MiNiFi are integrated components (refer to part 1 for the architecture).  I will leverage several key out-of-the-box k8s components to make this all work together.  The good news is, the deployment is super simple!

 

Prerequisites

  • Some knowledge of Kubernetes and EKS
  • An EKS cluster
  • kubectl CLI
  • eksctl CLI (a quick check for the tooling is shown after this list)
  • A VPC
  • 2 public subnets within the VPC
  • NR, EFM, and MiNiFi images uploaded to and available in ECR
    • Refer to part 1 for image locations
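
A quick sanity check that the CLI tooling and AWS credentials are in place before building anything (the exact version output will vary with your installs):

eksctl version                   # confirm eksctl is installed
kubectl version --client         # confirm kubectl is installed
aws sts get-caller-identity      # confirm AWS credentials are configured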

 

Create an EKS Cluster

eksctl makes this simple.  I tried using the aws eks CLI and it was painful.

 

 

eksctl create cluster \
--name sunman-k8s \
--version 1.13 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--vpc-public-subnets=subnet-067d0ffbc09152382,subnet-037d8c6750c5de236 \
--node-ami auto
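
Once eksctl finishes (it writes the kubeconfig for you by default), a quick check that the cluster and its workers are healthy:

kubectl get nodes -o wide     # the 3 t3.medium workers should report STATUS Ready
kubectl get svc kubernetes    # sanity check that the API server answers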

 

 

 

Deployment

All the contents of the ymls below can be placed into a single file for deployment.  For this demonstration, chunking it into smaller components makes it easier to explain.
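
If you do want a single apply, kubectl accepts multiple files (or a directory) in one call.  The three filenames below are the ones used throughout this article; the directory variant is just an example layout:

kubectl apply -f nifiregistry.yml -f efm.yml -f minifi.yml

# or keep the manifests in one folder and apply it all at once
kubectl apply -f ./manifests/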

NiFi Registry (NR)

Edge Flow Manager has a dependency on NR.  Flow versions are stored in NR.  Here is nifiregistry.yml.  

 

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nifiregistry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifiregistry
  template:
    metadata:
      labels:
        app: nifiregistry
    spec:
      containers:
      - name: nifiregistry-container
        image: your-image-location/nifiregistry
        ports:
        - containerPort: 18080
          name: http
        - containerPort: 22
          name: ssh
        resources:
          requests:
            cpu: ".5"
            memory: "2Gi"
          limits:
            cpu: "1"
        env:
        - name: VERSION
          value: "11"
---
kind: Service
apiVersion: v1
metadata:
  name: nifiregistry-service
spec:
  selector:
    app: nifiregistry
  ports:
  - protocol: TCP
    targetPort: 18080
    port: 80
    name: http
  - protocol: TCP
    targetPort: 22
    port: 22
    name: ssh
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0

 

 

 

Update the following line in nifiregistry.yml with the location of your NR image.

 

 

image: your-image-location/nifiregistry

 

 

Also note that the load balancer for NR is open to the world (0.0.0.0/0).  You may want to lock this down.
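
For example, to restrict it to your own network, swap the open range for your CIDR.  The 203.0.113.0/24 block below is only a documentation placeholder:

  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24    # example only: replace with your corporate/VPN CIDR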

 

Deploy NR on k8s

 

 

kubectl apply -f nifiregistry.yml
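
To verify NR came up, check the pod and wait for the LoadBalancer to get an external hostname.  The curl below assumes the service defaults from the yml above (port 80 in front of 18080) and the standard NiFi Registry REST path; replace the placeholder with your ELB hostname:

kubectl get pods -l app=nifiregistry     # pod should be Running
kubectl get svc nifiregistry-service     # note the EXTERNAL-IP (an ELB hostname on EKS)
curl http://<external-elb-hostname>/nifi-registry-api/buckets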

 

 

 

Edge Flow Manager (EFM)

Next, deploy EFM on k8s.  Here is efm.yml.

 

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: efm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efm
  template:
    metadata:
      labels:
        app: efm
    spec:
      containers:
      - name: efm-container
        image: your-image-location/efm
        ports:
        - containerPort: 10080
          name: http
        - containerPort: 22
          name: ssh
        resources:
          requests:
            cpu: ".5"
            memory: "2Gi"
          limits:
            cpu: "1"
        env:
        - name: VERSION
          value: "11"
        - name: NIFI_REGISTRY_ENABLED
          value: "true"
        - name: NIFI_REGISTRY_BUCKETNAME
          value: "testbucket"
        - name: NIFI_REGISTRY
          value: "<a href="<a href="<a href="<a href="<a href="http://nifiregistry-service.default.svc.cluster.local" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a>>" target="_blank"><a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>>>" target="_blank"><a href="<a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>>" target="_blank"><a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a</a>>>>" target="_blank"><a href="<a href="<a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>>" target="_blank"><a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a</a>>>" target="_blank"><a href="<a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a</a>>" target="_blank"><a href="<a href="http://nifiregistry-service.default.svc.cluster.local</a</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a</a>" target="_blank"><a href="http://nifiregistry-service.default.svc.cluster.local</a</a</a</a" target="_blank">http://nifiregistry-service.default.svc.cluster.local</a</a</a</a</a>>>>>"
---
kind: Service
apiVersion: v1
metadata:
  name: efm-service
spec:
  selector:
    app: efm
  ports:
  - protocol: TCP
    targetPort: 10080
    port: 80
    name: http
  - protocol: TCP
    targetPort: 22
    port: 22
    name: ssh
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
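
Note that efm.yml points NIFI_REGISTRY at the cluster-internal DNS name of the NR service rather than its public load balancer.  If you want to confirm that name resolves inside the cluster, a throwaway busybox pod works (an optional sanity check, not part of the deployment):

kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup nifiregistry-service.default.svc.cluster.local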

 

 

 

Update the following line in efm.yml with the location of your EFM image.

 

 

image: your-image-location/efm

 

 

Also note that the load balancer for EFM is open to the world.  You may want to lock this down, as shown above for NR.

 

Deploy EFM on k8s

 

 

kubectl apply -f efm.yml
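
As with NR, verify the EFM pod and service, and tail the logs to confirm it connected to the registry.  The EFM console is normally served under /efm/ui; replace the placeholder with your ELB hostname:

kubectl get pods -l app=efm
kubectl get svc efm-service               # grab the EXTERNAL-IP
kubectl logs deployment/efm --tail=50     # look for the NiFi Registry connection on startup
curl -I http://<external-elb-hostname>/efm/ui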

 

 

 

 

MiNiFi

Lastly, deploy MiNiFi on k8s.  Here is minifi.yml.

 

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: minifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minifi
  template:
    metadata:
      labels:
        app: minifi
    spec:
      containers:
      - name: minifi-container
        image: your-image-location/minifi-azure-aws
        ports:
        - containerPort: 10080
          name: http
        - containerPort: 6065
          name: listenhttp  
        - containerPort: 22
          name: ssh
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
        env:
        - name: NIFI_C2_ENABLE
          value: "true"
        - name: MINIFI_AGENT_CLASS
          value: "listenSysLog"
        - name: NIFI_C2_REST_URL
          value: "<a href="<a href="<a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a>>" target="_blank"><a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a</a>>>" target="_blank"><a href="<a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a</a>>" target="_blank"><a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/heartbeat</a</a</a</a>>>>"
        - name: NIFI_C2_REST_URL_ACK
          value: "<a href="<a href="<a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a>>" target="_blank"><a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a</a>>>" target="_blank"><a href="<a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a</a>>" target="_blank"><a href="<a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a</a>" target="_blank"><a href="http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a</a" target="_blank">http://efm-service.default.svc.cluster.local/efm/api/c2-protocol/acknowledge</a</a</a</a>>>>"
---
kind: Service
apiVersion: v1
metadata:
  name: minifi-service
spec:
  selector:
    app: minifi
  ports:
  - protocol: TCP
    targetPort: 10080
    port: 10080
    name: http
  - protocol: TCP
    targetPort: 9877
    port: 9877
    name: tcpsyslog
  - protocol: TCP
    targetPort: 9878
    port: 9878
    name: udpsyslog
  - protocol: TCP
    targetPort: 22
    port: 22
    name: ssh
  - protocol: TCP
    targetPort: 6065
    port: 6065
    name: listenhttp
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0

 

 

 

 

Update the following line in minifi.yml with the location of your MiNiFi image.

 

 

image: your-image-location/minifi-azure-aws

 

 

Also note that the load balancer for MiNiFi is open to the world.  You may want to lock this down.

 

Deploy MiNiFi on k8s

 

 

kubectl apply -f minifi.yml
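
To confirm the agent registered, check the MiNiFi logs for C2 heartbeats against efm-service (a rough check; the exact log lines depend on the MiNiFi build):

kubectl get pods -l app=minifi
kubectl logs deployment/minifi --tail=100 | grep -i heartbeat

The agent should then show up in the EFM UI under the listenSysLog agent class.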

 

 

 

 

That's it!  Part 1 of this series demonstrated how to autoscale MiNiFi, and those same k8s commands can be used here to scale this out properly.  The next part of this series will add the concept of k8s stateful sets and their impact on EFM/NR/MiNiFi for a resilient backend persistence layer.
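
As a reminder, scaling the MiNiFi deployment is a single kubectl call, and a CPU-based HorizontalPodAutoscaler covers the autoscaling piece (the thresholds below are illustrative; see part 1 for the full walkthrough):

kubectl scale deployment minifi --replicas=3

# requires metrics-server in the cluster for CPU metrics
kubectl autoscale deployment minifi --cpu-percent=50 --min=1 --max=4
kubectl get hpa minifi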

 
