Created 04-27-2022 07:51 AM
Hi community,
I am new to NiFi and unfortunately I have to build a 3-node secured cluster and a NiFi Registry on my own. The company had a standalone NiFi node that is connected to NiFi Registry. I asked the network team to clone this standalone node so that I could build a 3-node cluster from the clones by adjusting only the configuration. Let's say the standalone node was server1; I also have server2 and server3 (clones of server1) at the moment. My plan was to build a 2-node cluster with the clone servers first, because I didn't want to break the original server while I was trying to configure the cluster. If the 2-node cluster works without a problem, I will then add the original server to the cluster.
The certificates for server2 and server3 were not created by the NiFi Toolkit; they were provided by another team in the company, and I used them in the keystores and truststores for these servers. I think the configuration is mostly correct, because the automated process from the original server also works on the clone nodes and sends mail every morning. Still, I am getting some errors and sadly I am stuck here; I don't have experience in this matter. Could you help me if possible?
Here are my files. All are identical except for the server names (server2, server3).
nifi.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# Repository Encryption properties override individual repository implementation properties
nifi.repository.encryption.protocol.version=
nifi.repository.encryption.key.id=
nifi.repository.encryption.key.provider=
nifi.repository.encryption.key.provider.keystore.location=
nifi.repository.encryption.key.provider.keystore.password=
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component and Node Status History Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
# Volatile Status History Repository Properties
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# QuestDB Status History Repository Properties
nifi.status.repository.questdb.persist.node.days=14
nifi.status.repository.questdb.persist.component.days=3
nifi.status.repository.questdb.persist.location=./status_repository
# Site to Site properties
nifi.remote.input.host=server2.abc.tr
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
#############################################
# For security, NiFi will present the UI on 127.0.0.1 and only be accessible through this loopback interface.
# Be aware that changing these properties may affect how your instance can be accessed without any restriction.
# We recommend configuring HTTPS instead. The administrators guide provides instructions on how to do this.
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
nifi.web.https.host=server2.abc.tr
nifi.web.https.port=8443
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.max.access.token.requests.per.second=25
nifi.web.request.timeout=60 secs
nifi.web.request.ip.whitelist=
nifi.web.should.send.server.version=true
# Include or Exclude TLS Cipher Suites for HTTPS
nifi.web.https.ciphersuites.include=
nifi.web.https.ciphersuites.exclude=
# security properties #
nifi.sensitive.props.key=eeIEzE6a8ZmPlol883gHYTWYIPHS1DbQ
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.additional.keys=
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/nifitest2keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=nifitest2
nifi.security.keyPasswd=nifitest3
nifi.security.truststore=./conf/nifitest2truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=nifitest2
nifi.security.user.authorizer=managed-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=ldap-provider
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# OpenId Connect SSO Properties #
nifi.security.user.oidc.discovery.url=
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=
nifi.security.user.oidc.client.secret=
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=
nifi.security.user.oidc.claim.identifying.user=
nifi.security.user.oidc.fallback.claims.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.metadata.signing.enabled=false
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.signature.digest.algorithm=http://www.w3.org/2001/04/xmlenc#sha256
nifi.security.user.saml.message.logging.enabled=false
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs
# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.transform.dn=NONE
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2
# nifi.security.identity.mapping.transform.kerb=UPPER
# Group Mapping Properties #
# These properties allow normalizing group names coming from external sources like LDAP. The following example
# lowercases any group name.
#
# nifi.security.group.mapping.pattern.anygroup=^(.*)$
# nifi.security.group.mapping.value.anygroup=$1
# nifi.security.group.mapping.transform.anygroup=LOWER
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.heartbeat.missable.max=8
nifi.cluster.protocol.is.secure=true
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=server2.abc.tr
nifi.cluster.node.protocol.port=11443
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=2
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=1
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=server2.abc.tr:2181,server3.abc.tr:2181
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.client.secure=false
nifi.zookeeper.security.keystore=
nifi.zookeeper.security.keystoreType=
nifi.zookeeper.security.keystorePasswd=
nifi.zookeeper.security.truststore=
nifi.zookeeper.security.truststoreType=
nifi.zookeeper.security.truststorePasswd=
nifi.zookeeper.jute.maxbuffer=
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
# runtime monitoring properties
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
# Create automatic diagnostics when stopping/restarting NiFi.
# Enable automatic diagnostic at shutdown.
nifi.diagnostics.on.shutdown.enabled=false
# Include verbose diagnostic information.
nifi.diagnostics.on.shutdown.verbose=false
# The location of the diagnostics folder.
nifi.diagnostics.on.shutdown.directory=./diagnostics
# The maximum number of files permitted in the directory. If the limit is exceeded, the oldest files are deleted.
nifi.diagnostics.on.shutdown.max.filecount=10
# The diagnostics folder's maximum permitted size in bytes. If the limit is exceeded, the oldest files are deleted.
nifi.diagnostics.on.shutdown.max.directory.size=10 MB
authorizers.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">CN=server2.abc.tr,O=ABC,L=Ankara,ST=Ankara,C=TR</property>
<property name="Initial User Identity 2">CN=server3.abc.tr,O=ABC,L=Ankara,ST=Ankara,C=TR</property>
</userGroupProvider>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">LDAPS</property>
<property name="Manager DN">CN=mnguser,CN=Users,DC=ABC,DC=gov,DC=tr</property>
<property name="Manager Password">mngpwd</property>
<property name="TLS - Keystore">./conf/nifitest2keystore.jks</property>
<property name="TLS - Keystore Password">nifitest2</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">./conf/nifitest2truststore.jks</property>
<property name="TLS - Truststore Password">nifitest2</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth">WANT</property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully">true</property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldaps://ldaps.abc.tr:636</property>
<property name="Page Size">500</property>
<property name="Sync Interval">30 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">CN=Users,DC=ABC,DC=gov,DC=tr</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter">(cn=*)</property>
<property name="User Identity Attribute">cn</property>
<property name="User Group Name Attribute">memberOf</property>
<property name="User Group Name Attribute - Referenced Group Attribute"></property>
<property name="Group Search Base">CN=Users,DC=ABC,DC=gov,DC=tr</property>
<property name="Group Object Class">group</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter">(cn=*)</property>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">member</property>
<property name="Group Member Attribute - Referenced User Attribute"></property>
</userGroupProvider>
<userGroupProvider>
<identifier>composite-configurable-user-group-provider</identifier>
<class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
<property name="Configurable User Group Provider">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">ldap-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">mnguser</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">CN=server2.abc.tr,O=ABC,L=Ankara,ST=Ankara,C=TR</property>
<property name="Node Identity 2">CN=server3.abc.tr,O=ABC,L=Ankara,ST=Ankara,C=TR</property>
<property name="Node Group"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
login-identity-providers.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<loginIdentityProviders>
<provider>
<identifier>single-user-provider</identifier>
<class>org.apache.nifi.authentication.single.user.SingleUserLoginIdentityProvider</class>
<property name="Username">1d386860-68a3-450b-b5b5-085e717fe1de</property>
<property name="Password">$2b$12$0/T4WPZ1eRN.r6okIvB5xObxLBbPCLEPbpwlOdP1PXWDIETLd4y0a</property>
</provider>
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">LDAPS</property>
<property name="Manager DN">CN=mnguser,CN=Users,DC=ABC,DC=gov,DC=tr</property>
<property name="Manager Password">mngpwd</property>
<property name="TLS - Keystore">./conf/nifitest2keystore.jks</property>
<property name="TLS - Keystore Password">nifitest2</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">./conf/nifitest2truststore.jks</property>
<property name="TLS - Truststore Password">nifitest2</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth">WANT</property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully">true</property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldaps://ldaps.abc.tr:636</property>
<property name="User Search Base">CN=Users,DC=ABC,DC=gov,DC=tr</property>
<property name="User Search Filter">(cn={0})</property>
<property name="Identity Strategy">USE_USERNAME</property>
<property name="Authentication Expiration">12 hours</property>
</provider>
</loginIdentityProviders>
NIFI LOGS
nifi-bootstrap.log
2022-04-27 17:36:00,582 INFO [main] o.a.n.b.NotificationServiceManager Successfully loaded the following 0 services: []
2022-04-27 17:36:00,585 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STARTED
2022-04-27 17:36:00,585 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STOPPED
2022-04-27 17:36:00,585 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_DIED
2022-04-27 17:36:00,587 INFO [main] org.apache.nifi.bootstrap.RunNiFi Runtime Java version: 11.0.12
2022-04-27 17:36:00,616 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2022-04-27 17:36:00,616 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi-1.15.1
2022-04-27 17:36:00,617 INFO [main] org.apache.nifi.bootstrap.Command Command: java -classpath /opt/nifi-1.15.1/./conf:/opt/nifi-1.15.1/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi-1.15.1/./lib/jcl-over-slf4j-1.7.32.jar:/opt/nifi-1.15.1/./lib/jetty-schemas-3.1.jar:/opt/nifi-1.15.1/./lib/jul-to-slf4j-1.7.32.jar:/opt/nifi-1.15.1/./lib/log4j-over-slf4j-1.7.32.jar:/opt/nifi-1.15.1/./lib/logback-classic-1.2.8.jar:/opt/nifi-1.15.1/./lib/logback-core-1.2.8.jar:/opt/nifi-1.15.1/./lib/nifi-api-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-framework-api-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-nar-utils-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-properties-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-property-utils-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-runtime-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-server-api-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-stateless-api-1.15.1.jar:/opt/nifi-1.15.1/./lib/nifi-stateless-bootstrap-1.15.1.jar:/opt/nifi-1.15.1/./lib/slf4j-api-1.7.32.jar:/opt/nifi-1.15.1/./lib/java11/istack-commons-runtime-3.0.12.jar:/opt/nifi-1.15.1/./lib/java11/jakarta.activation-1.2.2.jar:/opt/nifi-1.15.1/./lib/java11/jakarta.activation-api-1.2.2.jar:/opt/nifi-1.15.1/./lib/java11/jakarta.xml.bind-api-2.3.3.jar:/opt/nifi-1.15.1/./lib/java11/javax.annotation-api-1.3.2.jar:/opt/nifi-1.15.1/./lib/java11/jaxb-runtime-2.3.5.jar:/opt/nifi-1.15.1/./lib/java11/txw2-2.3.5.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx9g -Xms9g -Dcurator-log-only-first-connection-issue-as-error-level=true -Djavax.security.auth.useSubjectCredsOnly=true -Djava.security.egd=file:/dev/urandom -Dzookeeper.admin.enableServer=false -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi-1.15.1/./conf/nifi.properties -Dnifi.bootstrap.listen.port=38104 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi-1.15.1/logs org.apache.nifi.NiFi
2022-04-27 17:36:00,637 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 25903
2022-04-27 17:36:03,392 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 46211
nifi-user.log
2022-04-27 16:30:59,539 INFO [main] o.a.n.a.FileAccessPolicyProvider Populating authorizations for Initial Admin: mnguser
2022-04-27 16:30:59,543 INFO [main] o.a.n.a.FileAccessPolicyProvider Authorizations file loaded at Wed Apr 27 16:30:59 EET 2022
2022-04-27 16:35:13,252 INFO [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Authentication Started 10.53.254.91 [<anonymous>] GET https://server2.abc.tr:8443/nifi-api/flow/current-user
2022-04-27 16:35:13,254 WARN [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Authentication Failed 10.53.254.91 GET https://server2.abc.tr:8443/nifi-api/flow/current-user [Anonymous authentication has not been configured.]
nifi-app.log
2022-04-27 17:39:33,459 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-04-27 17:39:33,365 and sent to server2.abc.tr:11443 at 2022-04-27 17:39:33,459; send took 94 millis
2022-04-27 17:39:35,962 INFO [Cleanup Archive for default] o.a.n.c.repository.FileSystemRepository Successfully deleted 0 files (0 bytes) from archive
2022-04-27 17:39:38,543 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-04-27 17:39:38,460 and sent to server2.abc.tr:11443 at 2022-04-27 17:39:38,543; send took 82 millis
2022-04-27 17:39:43,630 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-04-27 17:39:43,544 and sent to server2.abc.tr:11443 at 2022-04-27 17:39:43,630; send took 85 millis
2022-04-27 17:39:48,709 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-04-27 17:39:48,631 and sent to server2.abc.tr:11443 at 2022-04-27 17:39:48,709; send took 78 millis
2022-04-27 17:39:50,273 INFO [pool-13-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2022-04-27 17:39:50,273 INFO [pool-13-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 85 records in 0 milliseconds
2022-04-27 17:39:53,791 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-04-27 17:39:53,710 and sent to server2.abc.tr:11443 at 2022-04-27 17:39:53,791; send took 80 millis
2022-04-27 17:39:55,525 ERROR [Timer-Driven Process Thread-4] o.a.nifi.groups.StandardProcessGroup Failed to synchronize StandardProcessGroup[identifier=3e1f1521-1312-1c92-a434-787753753948,name=SetNifiTokenToCache] with Flow Registry because could not retrieve version 6 of flow with identifier cbde8c2c-9573-4262-9c0f-4566214365ca in bucket 03c8c1a8-372a-4885-96f8-c012e5f93cce
org.apache.nifi.registry.client.NiFiRegistryException: Error retrieving flow snapshot: Unknown user with identity 'CN=server2.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR'. Contact the system administrator.
at org.apache.nifi.registry.client.impl.AbstractJerseyClient.executeAction(AbstractJerseyClient.java:117)
at org.apache.nifi.registry.client.impl.JerseyFlowSnapshotClient.get(JerseyFlowSnapshotClient.java:97)
at org.apache.nifi.registry.flow.RestBasedFlowRegistry.getFlowContents(RestBasedFlowRegistry.java:213)
at org.apache.nifi.registry.flow.RestBasedFlowRegistry.getFlowContents(RestBasedFlowRegistry.java:227)
at org.apache.nifi.groups.StandardProcessGroup.synchronizeWithFlowRegistry(StandardProcessGroup.java:3761)
at org.apache.nifi.controller.FlowController$6.run(FlowController.java:972)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.ws.rs.ForbiddenException: HTTP 403 Forbidden
at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:913)
at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:723)
at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$1(JerseyInvocation.java:643)
at org.glassfish.jersey.client.JerseyInvocation.call(JerseyInvocation.java:665)
at org.glassfish.jersey.client.JerseyInvocation.lambda$runInScope$3(JerseyInvocation.java:659)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:205)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:390)
at org.glassfish.jersey.client.JerseyInvocation.runInScope(JerseyInvocation.java:659)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:642)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:417)
at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:313)
at org.apache.nifi.registry.client.impl.JerseyFlowSnapshotClient.lambda$get$1(JerseyFlowSnapshotClient.java:104)
at org.apache.nifi.registry.client.impl.AbstractJerseyClient.executeAction(AbstractJerseyClient.java:103)
... 12 common frames omitted
2022-04-27 17:39:55,528 ERROR [Timer-Driven Process Thread-4] o.a.nifi.groups.StandardProcessGroup Failed to synchronize StandardProcessGroup[identifier=c9d7aa55-987d-39f0-952c-30d1c56fde43,name=borsaIstanbulMetal] with Flow Registry because could not retrieve version 1 of flow with identifier ca33c68b-4633-44a7-acbd-82c9f4f01f5a in bucket 6fc4f5a2-34b9-4831-996c-5a7da741b624
Lastly, the original server was a dev server, and it is connected to a test server via NiFi Registry.
As far as I know, NiFi cluster nodes exchange flow changes (flow.xml.gz) with each other, so only one node connected to NiFi Registry should be enough. Do I have to connect all nodes to the Registry? How can I find out where the configuration is that connects a node to the Registry?
Sorry for the grammatical mistakes. I hope someone can help me with this. Thank you for your attention. 🙏
Created 05-16-2022 08:12 AM
@alperenboyaci
Things to keep in mind with NiFi authorizations.
When a user authenticates with NiFi, they are authenticating only with the node they connected to. That request then gets replicated to all of the nodes in the cluster. This is where the "proxy user requests" authorization policy comes into play: the node the user authenticated to replicates the request on behalf of that authenticated user to the other nodes (this way the user does not need to authenticate to every node).
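As a sketch, the "proxy user requests" grant corresponds to a write policy on the /proxy resource in the cluster's authorizations.xml; the UUIDs below are placeholders (in practice these policies are granted from the UI under Access Policies rather than by hand-editing the file):

```xml
<!-- Hypothetical authorizations.xml entry (UUIDs are placeholders).
     Lets the node users proxy requests on behalf of authenticated end users. -->
<policy identifier="proxy-policy-uuid" resource="/proxy" action="W">
    <user identifier="server2-node-user-uuid"/>
    <user identifier="server3-node-user-uuid"/>
</policy>
```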
Some requests that get replicated will result in data being returned by the other nodes in the cluster, for the purpose of displaying that information on the originating node where the user initiated the action.
So in order for that originating node to display the data, it must be authorized to do so.
So while you may have authorized your authenticated user to "view the data" for a specific component, you also need to authorize your hosts for the same. This is why you see the "Insufficient Permissions" dialog telling you that "server2" is not authorized to "view the data" for the requested component.
In addition to "proxy user requests", nodes would need to be authorized for:
1. "view the data" in order to list a connection queue of a component.
2. "modify the data" in order to empty a connection queue of a component.
(your user would also need these same authorizations)
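For illustration, the resulting entries in authorizations.xml would look something like the sketch below. Every identifier here is a placeholder (NiFi generates the real UUIDs, and the process group UUID comes from your own flow); the point is only that the node users appear alongside the human user on the "view the data" (action="R") and "modify the data" (action="W") policies:

```xml
<!-- Sketch only: all UUIDs below are placeholders, not values from this cluster. -->
<policy identifier="placeholder-policy-uuid-r" resource="/data/process-groups/placeholder-pg-uuid" action="R">
  <user identifier="placeholder-human-user-uuid"/>
  <user identifier="placeholder-server2-node-uuid"/>
  <user identifier="placeholder-server3-node-uuid"/>
</policy>
<policy identifier="placeholder-policy-uuid-w" resource="/data/process-groups/placeholder-pg-uuid" action="W">
  <user identifier="placeholder-human-user-uuid"/>
  <user identifier="placeholder-server2-node-uuid"/>
  <user identifier="placeholder-server3-node-uuid"/>
</policy>
```

It is usually easier to grant these through the UI (key icon on the component or process group) than to hand-edit the file.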
Other policies that apply to the NiFi hosts include:
1. "receive data via site-to-site", set on "remote input ports" (set to your local hosts if you are sending FlowFiles to yourself within NiFi; set to the hosts of other external NiFi instances if receiving FlowFiles from a Remote Process Group not on this cluster's hosts).
2. "send data via site-to-site" set on a "remote output port" (same logic as above)
3. "retrieve site-to-site details" set from global menu --> policies. (same logic as above)
NiFi's component-level authorization policies are set via the "key icon" found in the "operate" panel for components added to the canvas:
Clicking on the key icon will display:
Depending on the component selected, some policies may be greyed out if they do not apply to that component. While NiFi allows you to set policies on every component, it is more typical to set policies on the process group, because components (processors, controller services, child process groups) inherit permissions from the parent PG unless authorizations have been set on the component itself.
When you first installed NiFi and started it for the first time, NiFi would have generated the root-level Process Group (it is the canvas you see when you access the UI). With nothing selected on the canvas, the operate panel should display the name and UUID of the root Process Group (this assumes you have not clicked into a child process group). There are breadcrumbs in the lower left corner to help users navigate the hierarchy of parent --> child Process Groups (the label furthest to the left is the root process group).
I know this was a bit of extra detail, but I hope it helps you be more successful.
If you found any of the supplied responses assisted with your queries, please take a moment to login and click on "Accept as Solution" below each of those posts. As a community we want to make sure we share the path to solutions with other community members through "Accept as Solution" marked responses.
Thank you,
Matt
Created 04-27-2022 10:50 PM
Forgot to add: this is what I see when trying to log in via the browser UI:
Untrusted proxy CN=server2.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR
Created on 04-29-2022 12:11 AM - edited 04-29-2022 12:12 AM
I discovered that if I start only one node at first, I am able to log in and see what is in the flow.xml.
In the top left corner the cluster status was "disconnected".
After that I started the second node as well and saw 2/2 cluster at the top of the screen, but when I tried to do anything, the Untrusted Proxy error occurred in both UIs.
At the beginning I thought my problem was browser authentication, but that is not the case; right now it is about the nodes' communication.
@MattWho Could you help me sir ?
Created 05-05-2022 01:09 PM
@alperenboyaci
In a NiFi cluster you have multiple nodes, but only one of those nodes is elected as the cluster coordinator by ZooKeeper. When you start only one node in your cluster, you effectively have a cluster with only one node in it, and that same node must then be elected as the cluster coordinator. When you access the UI of any node in a NiFi cluster, that request must be replicated to all nodes by the elected cluster coordinator (keep in mind that as a user accessing a node in the NiFi cluster, you have authenticated only into the node whose URL you specifically entered). This means that your request to access the canvas gets proxied by the cluster coordinator to the other nodes. So when you have only one node, nothing needs to be proxied, since you are authenticated into the only node in the cluster.
So for you both your nodes (since either can be elected as the cluster coordinator at any time) must exist as users and be authorized to proxy user requests:
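As a sketch (the UUIDs are placeholders), the corresponding /proxy policy in authorizations.xml ends up looking roughly like this, with one user entry per node certificate DN defined in users.xml:

```xml
<!-- Sketch: each node's certificate DN must exist as a user in users.xml, -->
<!-- and those node users must be granted the /proxy resource (W in particular). -->
<policy identifier="placeholder-proxy-policy-uuid" resource="/proxy" action="W">
  <user identifier="placeholder-server1-node-uuid"/>
  <user identifier="placeholder-server2-node-uuid"/>
  <user identifier="placeholder-server3-node-uuid"/>
</policy>
```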
I see you provided your authorizers.xml file, which shows how your NiFi handles authorizations. I do see a few configuration issues. It is easiest to read this file from the bottom up...
We start with your authorizer, "managed-authorizer", which is configured to use the "file-access-policy-provider".
Reading up, we look for the "file-access-policy-provider", where we see it has been configured to use the "ldap-user-group-provider".
We then look for the "ldap-user-group-provider", which is used to establish user-to-group associations from your LDAP. This provider does not reference any other providers in this file.
So while you also set up a "composite-configurable-user-group-provider" and a "file-user-group-provider", these are never actually going to be used by the configured "managed-authorizer".
To resolve this issue you need to modify the configuration of your "file-access-policy-provider" so that:
<property name="User Group Provider">ldap-user-group-provider</property>
actually has:
<property name="User Group Provider">composite-configurable-user-group-provider</property>
That composite provider is configured to make use of both the "file-user-group-provider" and "ldap-user-group-provider".
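For reference, a composite provider stanza typically looks like the sketch below (property names follow the standard CompositeConfigurableUserGroupProvider; check it against the one already in your authorizers.xml):

```xml
<userGroupProvider>
  <identifier>composite-configurable-user-group-provider</identifier>
  <class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
  <!-- The configurable provider holds the file-based users (node DNs, mnguser) -->
  <property name="Configurable User Group Provider">file-user-group-provider</property>
  <!-- Additional read-only providers, such as LDAP -->
  <property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
```

With that in place, the "file-access-policy-provider" pointing at "composite-configurable-user-group-provider" can see both your LDAP users and the file-based node identities.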
I see you have your NiFi node DNs properly configured in the "file-user-group-provider" and "file-access-policy-provider", so you are good there... but....
The file-access-policy-provider only generates the ./conf/authorizations.xml file if it does NOT already exist, and since the file-user-group-provider was not being used by your authorizer, it likely did not initially create this file correctly. So I recommend removing authorizations.xml (not the authorizers.xml file) so that NiFi recreates it after you have fixed the configuration issues in authorizers.xml as outlined above.
If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.
Thank you,
Matt
Created on 05-10-2022 01:22 AM - edited 05-10-2022 03:59 AM
@MattWho Thank you for your reply. I changed the user group provider to the composite provider as you suggested and edited the authorizations.xml files, and I can log in and see what's in the cluster nodes right now. Additionally, I added the original server to the cluster too, so I have a 3-node cluster at the moment. It seems my cluster nodes communicate with each other using users.xml, and the UI uses LDAP for authentication. Unfortunately, when I try to see what's in a queue I still get an insufficient permissions error. Am I missing another policy like "proxy user requests" above? I logged in with the LDAP user which I set as manager in authorizers.xml. Should I add my nodes' information to other policies as well, like "access the controller"? Lastly, I can't add users from the UI since becoming part of a cluster; the original server had auto-complete while I was typing usernames, but now it is broken, so I added all the extra information to the files manually. How can I make it functional again?
Here's some screenshots from system.
Should these entries appear under other policies as well, like:
When I click list queue:
My latest authorizations.xml file :
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizations>
<policies>
<policy identifier="f99bccd1-a30e-3e4a-98a2-dbc708edc67f" resource="/flow" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="0005d80d-a6e9-33a9-b9c2-b76af02a0b77" resource="/data/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="c9813a3f-eb39-30f7-a509-5fc2022ce53e" resource="/data/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="5cd4342d-4988-37d0-b37e-d223e3fd46aa" resource="/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="1c18b17c-1596-3bd1-92c7-7b5025cbdfb3" resource="/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="b8775bd4-704a-34c6-987b-84f2daf7a515" resource="/restricted-components" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="627410be-1717-35b4-a06f-e9362b89e0b7" resource="/tenants" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="15e4e0bd-cb28-34fd-8587-f8d15162cba5" resource="/tenants" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="ff96062a-fa99-36dc-9942-0f6442ae7212" resource="/policies" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="ad99ea98-3af6-3561-ae27-5bf09e1d969d" resource="/policies" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="2e1015cb-0fed-3005-8e0d-722311f21a03" resource="/controller" action="R">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="c6322e6c-4cc1-3bcc-91b3-2ed2111674cf" resource="/controller" action="W">
<user identifier="5f65be12-00dc-3abe-8a86-97f45960b9ca"/>
</policy>
<policy identifier="d59a54f7-6dd6-34ad-a279-a26ffdb9eef8" resource="/proxy" action="R">
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98"/>
<user identifier="a92e1d8c-b0fc-361a-8cee-0ab3e392f40b"/>
</policy>
<policy identifier="287edf48-da72-359b-8f61-da5d4c45a270" resource="/proxy" action="W">
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98"/>
<user identifier="a92e1d8c-b0fc-361a-8cee-0ab3e392f40b"/>
</policy>
</policies>
</authorizations>
My users.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843" identity="CN=server1.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98" identity="CN=server2.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR"/>
<user identifier="a92e1d8c-b0fc-361a-8cee-0ab3e392f40b" identity="CN=server3.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR"/>
</users>
</tenants>
Sorry I did not get back sooner, but thanks for the help sir.
Created 05-02-2022 02:46 PM
Could you share your authorizations.xml and users.xml files?
Created on 05-04-2022 11:25 PM - edited 05-05-2022 03:25 AM
Here is my authorizations.xml.
I should note that the /proxy lines were added by me; they weren't created automatically. Before starting NiFi I also deleted authorizations.xml and users.xml, but I still had to add these entries manually.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizations>
<policies>
<policy identifier="f99bccd1-a30e-3e4a-98a2-dbc708edc67f" resource="/flow" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="0005d80d-a6e9-33a9-b9c2-b76af02a0b77" resource="/data/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="c9813a3f-eb39-30f7-a509-5fc2022ce53e" resource="/data/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="5cd4342d-4988-37d0-b37e-d223e3fd46aa" resource="/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="1c18b17c-1596-3bd1-92c7-7b5025cbdfb3" resource="/process-groups/9860c729-017c-1000-a6ce-771f96f0e174" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="b8775bd4-704a-34c6-987b-84f2daf7a515" resource="/restricted-components" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="627410be-1717-35b4-a06f-e9362b89e0b7" resource="/tenants" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="15e4e0bd-cb28-34fd-8587-f8d15162cba5" resource="/tenants" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="ff96062a-fa99-36dc-9942-0f6442ae7212" resource="/policies" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="ad99ea98-3af6-3561-ae27-5bf09e1d969d" resource="/policies" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="2e1015cb-0fed-3005-8e0d-722311f21a03" resource="/controller" action="R">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="c6322e6c-4cc1-3bcc-91b3-2ed2111674cf" resource="/controller" action="W">
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="efeb048a-a6ce-3e7d-89c2-9fd2417b8059" resource="/proxy" action="R">
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98"/>
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
<policy identifier="20a75180-0463-393f-9bc6-b6dee87c174f" resource="/proxy" action="W">
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98"/>
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555"/>
</policy>
</policies>
</authorizations>
My users.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="dbd82ce3-7330-3f75-b201-bcc00c0bc555" identity="mnguser"/>
<user identifier="adec56c1-29ed-30f0-af36-10c513b1d843" identity="CN=server2.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR"/>
<user identifier="b2f95239-353c-3a12-80ab-2b2112da1b98" identity="CN=server3.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR"/>
</users>
</tenants>
My authorizers.xml (the company used LDAP on the original server, so these servers also have it):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">CN=server2.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR</property>
<property name="Initial User Identity 2">CN=server3.abc.tr, O=ABC, L=Ankara, ST=Ankara, C=TR</property>
<property name="Initial User Identity 3">mnguser</property>
</userGroupProvider>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">LDAPS</property>
<property name="Manager DN">CN=mnguser,CN=Users,DC=abc,DC=gov,DC=tr</property>
<property name="Manager Password">mngpwd</property>
<property name="TLS - Keystore">./conf/server2.abc.tr/nifitest2keystore.jks</property>
<property name="TLS - Keystore Password">nifitest2</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">./conf/server2.abc.tr/nifitest2truststore.jks</property>
<property name="TLS - Truststore Password">nifitest2</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth">WANT</property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully">true</property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldaps://ldaps.abc.tr:636</property>
<property name="Page Size">500</property>
<property name="Sync Interval">30 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">CN=Users,DC=abc,DC=gov,DC=tr</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter">(cn=*)</property>
<property name="User Identity Attribute">cn</property>
<property name="User Group Name Attribute">memberOf</property>
<property name="User Group Name Attribute - Referenced Group Attribute"></property>
<property name="Group Search Base">CN=Users,DC=abc,DC=gov,DC=tr</property>
<property name="Group Object Class">group</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter">(cn=*)</property>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">member</property>
<property name="Group Member Attribute - Referenced User Attribute"></property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">ldap-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">mnguser</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">server2.abc.tr</property>
<property name="Node Identity 2">server3.abc.tr</property>
<property name="Node Group"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
Thank you @gtorres
Created 05-05-2022 02:32 PM
The node identity is taken from the owner of the keystore certificate. Make sure that each NiFi node has its own keystore with the right private key certificate owner, and only one private key entry per keystore.
Created 05-17-2022 06:09 AM
Can you share a screenshot of the current users listed in the NiFi UI? Are the nodes listed? Also, please provide the related failed authorization messages from nifi-user.log.