Created on 03-30-2018 09:58 AM - edited 09-16-2022 06:02 AM
Hello
I would like to ask how to properly scale a cluster up by adding nodes automatically, with NiFi 1.5.0 and OpenID Connect.
Here is my config:
3 nodes with embedded ZooKeeper; all 3 nodes have the same (new-style) authorizers.xml, as follows:
<authorizers> [[ content hidden ]] </authorizers>
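(The forum hides the XML above; for context, the file follows the standard new-style layout from the NiFi 1.5 admin guide, roughly like the sketch below. The identities here are placeholders, not my real DNs, which depend on how tls-toolkit generated the certificates.)

<authorizers>
    <!-- sketch only: placeholder identities, actual DNs/emails differ -->
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial User Identity 1">CN=nifi-test-zk-1.example.com, OU=NIFI</property>
        <property name="Initial User Identity 2">CN=nifi-test-zk-2.example.com, OU=NIFI</property>
        <property name="Initial User Identity 3">CN=nifi-test-zk-3.example.com, OU=NIFI</property>
        <property name="Initial User Identity 4">my-google-account@example.com</property>
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">my-google-account@example.com</property>
        <property name="Node Identity 1">CN=nifi-test-zk-1.example.com, OU=NIFI</property>
        <property name="Node Identity 2">CN=nifi-test-zk-2.example.com, OU=NIFI</property>
        <property name="Node Identity 3">CN=nifi-test-zk-3.example.com, OU=NIFI</property>
    </accessPolicyProvider>
    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>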
Then the nifi.properties for the ZK nodes (only the interesting parts):
# Site to Site properties
nifi.remote.input.host=
nifi.remote.input.secure=true
nifi.remote.input.socket.port=9997
nifi.remote.input.http.enabled=false

# web properties
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=nifi-test-zk-${id_of_the_node}.example.com
nifi.web.https.port=8443

# security
# keystorePasswd, keyPasswd, truststorePasswd, and keystore/truststore are
# generated with tls-toolkit client connecting to tls-toolkit server
nifi.security.keystore=/${somepath}/nifi/ssl/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=${keystorePasswd}
nifi.security.keyPasswd=${keyPasswd}
nifi.security.truststore=/${somepath}/nifi/ssl/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=${truststorePasswd}
nifi.security.needClientAuth=true
nifi.security.user.authorizer=managed-authorizer

# Cluster settings:
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-test-zk-${id_of_the_node}.example.com
nifi.cluster.node.protocol.port=9998

# Open ID
nifi.security.user.oidc.discovery.url=https://accounts.google.com/.well-known/openid-configuration
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=${my_google_oauth_id}
nifi.security.user.oidc.client.secret=${my_google_oauth_key}
nifi.security.user.oidc.preferred.jwsalgorithm=
To access the UI I have nginx on node 1, proxying to the UI port as follows:
server {
    listen 443 default;
    server_name nifi-test-zk-1.example.com;
    access_log /var/log/nginx/nifi-test-zk-1.example.com-access.log;

    include /etc/nginx/block.conf;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    ssl on;
    ssl_certificate /etc/nginx/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 900;
        proxy_pass https://${ip_of_the_machine_or_localhost}:8443;
    }
}
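One thing I am not sure is needed: I have read that NiFi behind a reverse proxy can also use the X-ProxyScheme / X-ProxyHost / X-ProxyPort headers, together with the nifi.web.proxy.host whitelist that I believe was added in 1.5.0. If that turns out to be relevant, the extra lines would look roughly like this (untested on my side, a sketch only):

    # inside the location / block above (sketch, not in my current config):
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost nifi-test-zk-1.example.com;
    proxy_set_header X-ProxyPort 443;

# and in nifi.properties on the node behind the proxy:
nifi.web.proxy.host=nifi-test-zk-1.example.com:443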
The 3 nodes start correctly, the cluster is running, and I can log in with my Google account. All is fine; the issue arises when I add a non-ZK node to the cluster. I am not sure what its authorizers.xml file should look like. I have read that it should be empty so it can inherit from the cluster while joining, so here it is:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers> [[ content hidden ]] </authorizers>
I am also not quite sure how the following nifi.properties keys should be set:
nifi.cluster.node.address=${ ? ip or hostname ? }
nifi.web.https.host=${ ? ip or hostname ? }
I tried setting both the IP and the hostname, and I get the following error when accessing the UI from zk-node-1:
javax.ws.rs.ProcessingException: java.net.UnknownHostException: nifi-test-i-0069cf32f0939dfb0.example.com
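Concretely, the two variants I tried on the new (non-ZK) node looked roughly like this (IP masked):

# attempt 1: the generated hostname (no DNS record for it)
nifi.cluster.node.address=nifi-test-i-0069cf32f0939dfb0.example.com
nifi.web.https.host=nifi-test-i-0069cf32f0939dfb0.example.com

# attempt 2: the private IP of the instance
nifi.cluster.node.address=${private_ip_of_the_node}
nifi.web.https.host=${private_ip_of_the_node}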
I provision all the nodes with Puppet. The 3 ZK nodes have static hostnames with DNS records and share the same authorizers.xml with all 3 nodes listed in it. The 4th node and onward are also provisioned with Puppet, so their hostname is set automatically with an ID in it, meaning I don't have any DNS records for them.
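For what it's worth, the way these values get filled in is just a template interpolating the machine's FQDN fact, something like this simplified sketch (the real module is more involved, names here are hypothetical):

# nifi.properties.erb (simplified, hypothetical sketch)
nifi.web.https.host=<%= @fqdn %>
nifi.cluster.node.address=<%= @fqdn %>
nifi.cluster.is.node=true
nifi.cluster.node.protocol.port=9998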
For now I access the UI only through ZK node 1 (it would also be good to know whether we can put a load balancer in front of the UI; I tried, roughly the sketch below, but I ran into a certificate hostname mismatch problem, new in 1.5.0 I guess?).
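What I tried for the load balancer was essentially an nginx upstream over the three ZK nodes, roughly (sketch from memory, the LB hostname is a placeholder):

upstream nifi_ui {
    server nifi-test-zk-1.example.com:8443;
    server nifi-test-zk-2.example.com:8443;
    server nifi-test-zk-3.example.com:8443;
}

server {
    listen 443;
    server_name nifi.example.com;   # placeholder LB name under the wildcard cert
    # same ssl_* settings as in the server block above
    location / {
        proxy_pass https://nifi_ui;
    }
}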
The question is: what am I doing wrong that my 4th node gets blocked at some point, preventing me from accessing the UI at all? I need the whole thing automated, so manually adding the 4th node's identity to authorizers.xml is really not an option.
Thank you in advance.
Created 04-04-2018 08:44 AM
No ideas? 😞
Created 05-09-2018 12:34 PM
Still in need of help if anyone has an idea 🙂
Created 05-09-2018 02:40 PM
Follow the Ambari setup and use that cluster approach.
Created 05-09-2018 08:37 PM
I use Puppet to provision my servers, so I cannot use Ambari (AFAIK).
Or do you know a way to automate the cluster setup with AWS auto-scaling groups for the nodes?
Created 05-22-2018 02:53 PM
Found how to do it.