
Failed to obtain value from ZooKeeper for component with ID with exception code CONNECTIONLOSS

New Contributor


org.apache.nifi.processors.aws.s3.ListS3 ListS3[id=df273570-dabc-17e4-b2a8-4570af7a9770] Failed to restore processor state; yielding

2017-03-09 14:32:37,324 ERROR [Timer-Driven Process Thread-1] org.apache.nifi.processors.aws.s3.ListS3
java.io.IOException: Failed to obtain value from ZooKeeper for component with ID df273570-dabc-17e4-b2a8-4570af7a9770 with exception code CONNECTIONLOSS
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
        at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
        at org.apache.nifi.processors.aws.s3.ListS3.restoreState(ListS3.java:147) ~[nifi-aws-processors-1.1.1.jar:1.1.1]
        at org.apache.nifi.processors.aws.s3.ListS3.onTrigger(ListS3.java:175) ~[nifi-aws-processors-1.1.1.jar:1.1.1]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_102]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_102]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_102]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_102]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_102]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/components/df273570-dabc-17e4-b2a8-4570af7a9770
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[na:na]
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
        ... 15 common frames omitted


Master Mentor
@Kuntesh Bharucha

Are you using NiFi's embedded ZK or an external ZK?

Have you configured the state-management.xml file with the connection string for your ZK nodes?

(screenshot attached: ZooKeeper state provider configuration in state-management.xml)
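For reference, the cluster-provider entry in conf/state-management.xml should carry your ZK quorum in its Connect String. A minimal sketch (the zk-host* names are placeholders; the property names are the standard ones for the ZooKeeperStateProvider that appears in your stack trace):

  <cluster-provider>
      <id>zk-provider</id>
      <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
      <!-- comma-separated host:port pairs for your external ZK quorum -->
      <property name="Connect String">zk-host1:2181,zk-host2:2181,zk-host3:2181</property>
      <property name="Root Node">/nifi</property>
      <property name="Session Timeout">10 seconds</property>
      <property name="Access Control">Open</property>
  </cluster-provider>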

In a NiFi cluster, ZK is used to maintain the cluster-wide state of some processors. Those processors will attempt to use the state manager even if it has not been set up correctly, or at all.
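Also double-check that nifi.properties actually points NiFi at that provider. Assuming an external ZK, the relevant entries would look like this (zk-provider must match the id of the cluster-provider in state-management.xml):

  nifi.state.management.configuration.file=./conf/state-management.xml
  nifi.state.management.provider.local=local-provider
  nifi.state.management.provider.cluster=zk-provider
  nifi.state.management.embedded.zookeeper.start=false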

Thanks,

Matt

New Contributor

Hi Matt,

Thanks for your reply. I'm using an external ZK, and I had the private IP in state-management.xml. When I changed the log level to DEBUG, I found that NiFi could not validate that IP; it was throwing an invalid IP address validation error. Since we go through the company's proxy layer, I figured out a way to pass the public IP instead, and it started working fine.
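For anyone hitting the same thing: one way to get that DEBUG output is to raise the log level for the ZooKeeper state provider in conf/logback.xml. A minimal sketch (logger names taken from the packages in the stack trace above):

  <logger name="org.apache.nifi.controller.state.providers.zookeeper" level="DEBUG"/>
  <logger name="org.apache.zookeeper" level="DEBUG"/>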