NiFi: Connection refused to ZooKeeper
Labels: Apache NiFi
Created 02-19-2025 02:04 AM
Hi @MattWho,
Good day,
We are experiencing a new error, "Connection refused: zookeepernode2", after setting up DNS with the keystore and password for NiFi, which we generated internally using keytool; the same keystore and password are used in ZooKeeper as well.
Attaching the required details. Request you to please check and provide your feedback.
Below is the web UI error.
We set up the NiFi cluster in the following layout:
NIFINODE1, NIFINODE2: two Azure VMs
Zookeepernode1, Zookeepernode2, Zookeepernode3: three Azure VMs
Please find the zoo.cfg details below.
# zookeeper node3 has been chosen as the leader
sudo bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port not found in static config file. Looking in dynamic config file.
grep: : No such file or directory
Client port not found in the server configs
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
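
Note: the status output reports "Client port found: 2181. Client SSL: false", while the nifi.properties below sets nifi.zookeeper.client.secure=true against that same port 2181. A TLS client can only connect to a port ZooKeeper actually serves TLS on. For reference, a minimal zoo.cfg sketch for a TLS client port (paths and passwords are placeholders, not values from this setup):

# keep or drop the plaintext port once TLS works
clientPort=2181
# dedicated TLS port for secure clients such as NiFi
secureClientPort=2281
# the Netty connection factory is required for server-side TLS
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
# server keystore/truststore (placeholder paths and passwords)
ssl.keyStore.location=/opt/zookeeper/conf/keystore.jks
ssl.keyStore.password=changeit
ssl.trustStore.location=/opt/zookeeper/conf/truststore.jks
ssl.trustStore.password=changeit

With that in place, nifi.zookeeper.connect.string would point at the secure port (zookeepernode1:2281,...) instead of 2181.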
___________________________________________________________________________
Attaching the nifi.properties details.
# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.json.file=./conf/flow.json.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
nifi.nar.unpack.uber.jar=false
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# Database Settings
nifi.database.directory=./database_repository
# Repository Encryption properties override individual repository implementation properties
nifi.repository.encryption.protocol.version=
nifi.repository.encryption.key.id=
nifi.repository.encryption.key.provider=
nifi.repository.encryption.key.provider.keystore.location=
nifi.repository.encryption.key.provider.keystore.password=
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=50 KB
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component and Node Status History Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
# Volatile Status History Repository Properties
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# QuestDB Status History Repository Properties
nifi.status.repository.questdb.persist.node.days=14
nifi.status.repository.questdb.persist.component.days=3
nifi.status.repository.questdb.persist.location=./status_repository
# Site to Site properties
nifi.remote.input.host=nifinode1
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10000
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
# web properties #
#############################################
# For security, NiFi will present the UI on 127.0.0.1 and only be accessible through this loopback interface.
# Be aware that changing these properties may affect how your instance can be accessed without any restriction.
# We recommend configuring HTTPS instead. The administrators guide provides instructions on how to do this.
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=nifinode1.web.net
nifi.web.https.port=8443
nifi.web.https.network.interface.default=
nifi.web.https.application.protocols=http/1.1
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=nifi-web-net,nifi-web-net:8443
nifi.web.max.content.size=
nifi.web.max.requests.per.second=30000
nifi.web.max.access.token.requests.per.second=25
nifi.web.request.timeout=60 secs
nifi.web.request.ip.whitelist=
nifi.web.should.send.server.version=true
nifi.web.request.log.format=%{client}a - %u %t "%r" %s %O "%{Referer}i" "%{User-Agent}i"
# Filter JMX MBeans available through the System Diagnostics REST API
nifi.web.jmx.metrics.allowed.filter.pattern=
# Include or Exclude TLS Cipher Suites for HTTPS
nifi.web.https.ciphersuites.include=
nifi.web.https.ciphersuites.exclude=
nifi.sensitive.props.key=@Rfs]HjNl=r(z0&ocSsTrrR8rm?/7qMP
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.additional.keys=
# *************************************************************
nifi.security.user.authentication.kerberos=false
#nifi.security.user.login.identity.provider=empty
nifi.security.needClientAuth=false
#nifi.cluster.protocol.is.secure=false
#nifi.security.needClientAuth=true
#nifi.cluster.protocol.is.secure=true
nifi.security.allow.anonymous.authentication=false
# ***********************************************************
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.jks
#nifi.security.keystoreType=PKCS12
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=XXXXXXXXcWRvgEXXXXXXmxGxNyw
nifi.security.keyPasswd=XXXXXXXXWeQAngXXXXbYCzrEf
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=XXXXXXXXcWRvgEXXXXXXmxGxNyw
#nifi.security.user.authorizer=single-user-authorizer
#nifi.security.allow.anonymous.authentication=false
#nifi.security.user.login.identity.provider=single-user-provider
nifi.security.user.authorizer=managed-authorizer
#nifi.security.user.authorizer=org.apache.nifi.authorization.CertificateBasedAuthorizer
nifi.security.user.login.identity.provider=
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
nifi.security.user.oidc.discovery.url=https://SSo-login-page
nifi.security.user.oidc.connect.timeout=5 secs
nifi.security.user.oidc.read.timeout=5 secs
nifi.security.user.oidc.client.id=XXXXXXX-be28-XXXXXXX-cd0e5f2da902
nifi.security.user.oidc.client.secret=cmNTTsotwXXXXXXXXXXXXXBtsmfWFwoXXXXXyggP
nifi.security.user.oidc.preferred.jwsalgorithm=
nifi.security.user.oidc.additional.scopes=offline_access,personal_data
nifi.security.user.oidc.claim.identifying.user=corporate_user_id
nifi.security.user.oidc.fallback.claims.identifying.user=
# Apache Knox SSO Properties #
nifi.security.user.knox.url=
nifi.security.user.knox.publicKey=
nifi.security.user.knox.cookieName=hadoop-jwt
nifi.security.user.knox.audiences=
# SAML Properties #
nifi.security.user.saml.idp.metadata.url=
nifi.security.user.saml.sp.entity.id=
nifi.security.user.saml.identity.attribute.name=
nifi.security.user.saml.group.attribute.name=
nifi.security.user.saml.request.signing.enabled=false
nifi.security.user.saml.want.assertions.signed=true
nifi.security.user.saml.signature.algorithm=http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.single.logout.enabled=false
nifi.security.user.saml.http.client.truststore.strategy=JDK
nifi.security.user.saml.http.client.connect.timeout=30 secs
nifi.security.user.saml.http.client.read.timeout=30 secs
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=nifinode1.web
nifi.cluster.node.protocol.port=9991
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=nifinode1.web
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=1
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
nifi.cluster.protocol.ssl.context.protocol=TLS
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=zookeepernode1:2181,zookeepernode2:2181,zookeepernode3:2181
#nifi.zookeeper.connect.string=zookeepernode1:2181,zookeepernode2:2182,zookeepernode3:2183
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi
#nifi.zookeeper.client.secure=false
nifi.zookeeper.ssl.clientAuth=none
nifi.zookeeper.client.secure=true
nifi.zookeeper.security.keystore=./conf/keystore.jks
nifi.zookeeper.security.keystoreType=jks
nifi.zookeeper.security.keystorePasswd=OXXXXXXXXXXXXXXXNyw
nifi.zookeeper.security.truststore=./conf/truststore.jks
nifi.zookeeper.security.truststoreType=jks
nifi.zookeeper.security.truststorePasswd=OXXXXXXXXXXXXXGxNyw
nifi.zookeeper.jute.maxbuffer=
nifi.zookeeper.ssl.client.auth=none
# Zookeeper properties for the authentication scheme used when creating acls on znodes used for cluster management
# Values supported for nifi.zookeeper.auth.type are "default", which will apply world/anyone rights on znodes
# and "sasl" which will give rights to the sasl/kerberos identity used to authenticate the nifi node
# The identity is determined using the value in nifi.kerberos.service.principal and the removeHostFromPrincipal
# and removeRealmFromPrincipal values (which should align with the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal
# values configured on the zookeeper server).
nifi.zookeeper.auth.type=
nifi.zookeeper.kerberos.removeHostFromPrincipal=
nifi.zookeeper.kerberos.removeRealmFromPrincipal=
# kerberos #
nifi.kerberos.krb5.file=
# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=
# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours
# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=
# analytics properties #
nifi.analytics.predict.enabled=false
nifi.analytics.predict.interval=3 mins
nifi.analytics.query.interval=5 mins
nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
nifi.analytics.connection.model.score.name=rSquared
nifi.analytics.connection.model.score.threshold=.90
# runtime monitoring properties
nifi.monitor.long.running.task.schedule=
nifi.monitor.long.running.task.threshold=
# Enable automatic diagnostic at shutdown.
nifi.diagnostics.on.shutdown.enabled=false
# Include verbose diagnostic information.
nifi.diagnostics.on.shutdown.verbose=false
# The location of the diagnostics folder.
nifi.diagnostics.on.shutdown.directory=./diagnostics
# The maximum number of files permitted in the directory. If the limit is exceeded, the oldest files are deleted.
nifi.diagnostics.on.shutdown.max.filecount=10
# The diagnostics folder's maximum permitted size in bytes. If the limit is exceeded, the oldest files are deleted.
nifi.diagnostics.on.shutdown.max.directory.size=10 MB
# Performance tracking properties
## Specifies what percentage of the time we should track the amount of time processors are using CPU, reading from/writing to content repo, etc.
## This can be useful to understand which components are the most expensive and to understand where system bottlenecks may be occurring.
## The value must be in the range of 0 (inclusive) to 100 (inclusive). A larger value will produce more accurate results, while a smaller value may be
## less expensive to compute.
## Results can be obtained by running "nifi.sh diagnostics <filename>" and then inspecting the produced file.
nifi.performance.tracking.percentage=0
# NAR Provider Properties #
# These properties allow configuring one or more NAR providers. A NAR provider retrieves NARs from an external source
# and copies them to the directory specified by nifi.nar.library.autoload.directory.
____________________________________________________________________________________
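
A note on the ZooKeeper client section above: nifi.zookeeper.client.secure=true makes NiFi open TLS connections to every host in nifi.zookeeper.connect.string, which here points at port 2181, the same port zkServer.sh reported with "Client SSL: false". A quick, hedged way to check what that port actually speaks, run from a NiFi node (the -brief flag needs OpenSSL 1.1.0 or newer):

# probe whether the ZooKeeper client port answers TLS
openssl s_client -connect zookeepernode2:2181 -brief </dev/null
# a TLS-enabled port prints the negotiated protocol and peer certificate;
# against a plaintext port the handshake simply fails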
logs/nifi-app.log error details
2025-02-19 10:02:01,616 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is disconnected: [id: 0xf6690cd4, L:/53.13.138.70:53038 ! R:zookeepernode2/53.13.138.72:2181]
2025-02-19 10:02:01,616 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is told closing
2025-02-19 10:02:02,157 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty SSL handler added for channel: [id: 0x48429c0b]
2025-02-19 10:02:02,158 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is connected: [id: 0x48429c0b, L:/53.13.138.70:34056 - R:zookeepernode1/53.13.138.71:2181]
2025-02-19 10:02:02,159 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is disconnected: [id: 0x48429c0b, L:/53.13.138.70:34056 ! R:zookeepernode1/53.13.138.71:2181]
2025-02-19 10:02:02,159 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is told closing
2025-02-19 10:02:02,159 WARN [main] o.a.n.c.l.e.CuratorLeaderElectionManager Unable to determine leader for role 'Cluster Coordinator'; returning null
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/leaders/Cluster Coordinator
at org.apache.zookeeper.KeeperException.create(KeeperException.java:101)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:53)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2480)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:235)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:228)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:88)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:228)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:221)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:42)
at org.apache.curator.framework.recipes.locks.LockInternals.getSortedChildren(LockInternals.java:133)
at org.apache.curator.framework.recipes.locks.LockInternals.getParticipantNodes(LockInternals.java:119)
at org.apache.curator.framework.recipes.locks.InterProcessMutex.getParticipantNodes(InterProcessMutex.java:153)
at org.apache.curator.framework.recipes.leader.LeaderSelector.getLeader(LeaderSelector.java:321)
at org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager.getLeader(CuratorLeaderElectionManager.java:288)
at org.apache.nifi.cluster.coordination.node.LeaderElectionNodeProtocolSender.getServiceAddress(LeaderElectionNodeProtocolSender.java:46)
at org.apache.nifi.cluster.protocol.AbstractNodeProtocolSender.requestConnection(AbstractNodeProtocolSender.java:64)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.requestConnection(NodeProtocolSenderListener.java:89)
at org.apache.nifi.controller.StandardFlowService.connect(StandardFlowService.java:928)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:476)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:896)
at org.apache.nifi.NiFi.<init>(NiFi.java:172)
at org.apache.nifi.NiFi.<init>(NiFi.java:83)
at org.apache.nifi.NiFi.main(NiFi.java:332)
2025-02-19 10:02:02,159 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2025-02-19 10:02:02,159 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered
2025-02-19 10:02:02,343 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty SSL handler added for channel: [id: 0x0703dfdc]
2025-02-19 10:02:02,344 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is connected: [id: 0x0703dfdc, L:/53.13.138.70:39036 - R:zookeepernode3/53.13.247.198:2181]
2025-02-19 10:02:02,345 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is disconnected: [id: 0x0703dfdc, L:/53.13.138.70:39036 ! R:zookeepernode3/53.13.247.198:2181]
2025-02-19 10:02:02,345 INFO [epollEventLoopGroup-4-1] o.apache.zookeeper.ClientCnxnSocketNetty channel is told closing
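
The pattern in this log, "SSL handler added", "channel is connected", then "channel is disconnected" within a millisecond against all three servers, is typical of a failure in the TLS layer (handshake or certificate validation) rather than a plain network refusal. A hedged first check is to inspect what the keystore from nifi.properties actually contains (the password placeholder stands in for the real store password):

# list the certificate NiFi presents to ZooKeeper and check its CN/SAN entries
keytool -list -v -keystore ./conf/keystore.jks -storepass <keystore-password> \
  | grep -E "Alias name|Owner:|DNSName"

On each ZooKeeper server, the same check against its keystore should show DNSName entries covering the name clients use to reach it (e.g. zookeepernode2); a mismatch there makes clients drop the connection right after the handshake starts.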
Created 02-19-2025 04:49 AM
Also, we used the truststore and keystore from the certificate generated at the DNS level (nifi-web-net).
We have DNS CNAMEs for the NiFi VMs and the ZooKeeper VMs, e.g.:
nifinode1.web.net, nifinode2.web.net
zookeepernode1.web.net, zookeepernode2.web.net
Can you guide us on generating the certificates at the DNS level or per VM?
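
One hedged way to do it per VM with keytool (hostnames follow this thread; aliases, passwords, validity, and paths are placeholders): each node gets its own key pair whose CN and SAN match the DNS name the other machines use to reach it, and every node's certificate is imported into a shared truststore.

# on nifinode1; repeat on each VM with its own alias and DNS names
keytool -genkeypair -alias nifinode1 -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=nifinode1.web.net" \
  -ext "SAN=dns:nifinode1.web.net,dns:nifinode1" \
  -keystore keystore.jks -storepass changeit -keypass changeit
# export the node certificate
keytool -exportcert -alias nifinode1 -keystore keystore.jks -storepass changeit -file nifinode1.crt
# import every node's certificate into the truststore distributed to all VMs
keytool -importcert -alias nifinode1 -file nifinode1.crt -keystore truststore.jks -storepass changeit -noprompt

A single DNS-level certificate can also work, but only if its SAN list includes every hostname the nodes use to address each other. The NiFi toolkit's tls-toolkit.sh (standalone mode) can generate per-host keystores and a shared truststore automatically, which may be less error-prone than doing this by hand.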
Created on 02-19-2025 09:08 PM - edited 02-19-2025 09:13 PM
Hi, can any NiFi expert please update on this? It's urgent.
Created 02-24-2025 05:16 AM
Hi, requesting you to please update on this.
Created 03-06-2025 05:30 AM
Hi All,
The issue got resolved. It was with the certificates: the access keys and keystore had been generated incorrectly.
After generating new keys, the issue was resolved.
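
For anyone hitting the same symptom, one hedged way to confirm a regenerated keystore works end to end is to drive the ZooKeeper CLI with the same TLS settings NiFi uses (paths and the password are placeholders matching the nifi.properties entries above):

# run from a NiFi node; zkCli.sh picks up CLIENT_JVMFLAGS via zkEnv.sh
CLIENT_JVMFLAGS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty \
  -Dzookeeper.client.secure=true \
  -Dzookeeper.ssl.keyStore.location=/opt/nifi/conf/keystore.jks \
  -Dzookeeper.ssl.keyStore.password=changeit \
  -Dzookeeper.ssl.trustStore.location=/opt/nifi/conf/truststore.jks \
  -Dzookeeper.ssl.trustStore.password=changeit" \
  bin/zkCli.sh -server zookeepernode1:2181 ls /nifi

If the ls returns the /nifi children instead of a ConnectionLoss error, NiFi's own Curator client should be able to connect with the same key material.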
