Member since: 03-30-2017
Posts: 17
Kudos Received: 1
Solutions: 0
05-10-2017
06:32 AM
Hello @mgilman, yes, in nginx I had configured proxy_pass but did not add the proxy headers. Now that I have added the headers, it is working fine. Thanks.
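For anyone landing on this thread: below is a minimal sketch of the kind of nginx location block being described. The hostname FQDN_1 and port 9999 are placeholders taken from the properties quoted later in this thread, and the exact block is an assumption rather than the poster's actual config; the X-Proxy* request headers are the ones NiFi inspects when generating URLs behind a proxy:

location /nifi {
    proxy_pass http://FQDN_1:9999/nifi;
    # Standard forwarding headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # NiFi-specific proxy headers so generated links use the proxy's address
    proxy_set_header X-ProxyScheme http;
    proxy_set_header X-ProxyHost $host;
    proxy_set_header X-ProxyPort $server_port;
    proxy_set_header X-ProxyContextPath /;
}

The /nifi-api path needs the same treatment, since the REST calls made by the UI go through it as well.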
05-09-2017
01:05 PM
Hello @Matt Clarke, FQDN_1 is the fully qualified hostname that corresponds to the internal IP. I also tried setting nifi.web.http.host=localhost, but it still redirects one specific URL to the internal IP. This redirection happens for every processor when I click Configure, so I cannot open the configuration dialog. I can use some of the processor's other options, such as Status and Data Provenance, but Configure redirects to the internal IP, and the page then shows the following message: Unable to communicate with NiFi
Please ensure the application is running and check the logs for any errors.
The FQDN is: ip-a1-b1-c1-d1.ec2.internal
05-09-2017
08:51 AM
I started running a 3-node cluster without the embedded ZooKeeper; instead I used a separate ZooKeeper instance that runs on one node, and the other nodes connect to it. The nodes show as 3/3 connected, but something is wrong: when I add a processor on one of the nodes it is added successfully, but configuring that processor redirects the URL to the node's internal IP. For example, if my public IP is a.b.c.d and the internal IP is a1.b1.c1.d1, configuring a processor redirects to a1.b1.c1.d1:9999/nifi-api/processors/ID_of_Processor when it should be a.b.c.d/nifi-api/processors/ID_of_Processor. My properties are as follows:
nifi.web.http.host=FQDN_1
nifi.web.http.port=9999
nifi.remote.input.host=FQDN_1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8888
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.cluster.is.node=true
nifi.cluster.node.address=FQDN_1
nifi.cluster.node.protocol.port=9000
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=2 mins
nifi.cluster.flow.election.max.candidates=3
nifi.zookeeper.connect.string=FQDN_1:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
where the FQDN is of the form ip-a-b-c-d.ec2.internal.
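To confirm which address each node actually registered with the cluster (and therefore where the UI will redirect), the cluster endpoint can be queried from any node. A sketch, where the curl invocation and the example public IP a.b.c.d are assumptions, and the port comes from nifi.web.http.port above:

curl -s http://a.b.c.d:9999/nifi-api/controller/cluster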
Labels: Apache NiFi
05-09-2017
07:28 AM
Hello @Matt Clarke, I started running the nodes in the cluster, and they show as 3/3 connected. But something is wrong: when I add a processor on one of the nodes and then configure it, the URL redirects to the node's internal IP. For example, if my public IP is a.b.c.d and the internal IP is a1.b1.c1.d1, configuring a processor redirects to a1.b1.c1.d1:9999/nifi-api/processors/ID_of_Processor when it should be a.b.c.d/nifi-api/processors/ID_of_Processor.
05-05-2017
07:14 PM
Thanks @Matt Clarke, I added the FQDNs and it worked like a charm.
05-05-2017
06:19 AM
@Jeff Storck, yes, the property nifi.state.management.embedded.zookeeper.start is set to false on all 3 nodes.
05-04-2017
11:55 AM
Hello, I am trying to set up a 3-node cluster without using the embedded ZooKeeper. I installed and started a separate ZooKeeper on port 2181 on one of the nodes (node1). The properties on my other nodes are as follows:
nifi.cluster.is.node=true
nifi.cluster.node.address=IP_MACHINE
nifi.cluster.node.protocol.port=9000
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=1 mins
nifi.cluster.flow.election.max.candidates=3
nifi.zookeeper.connect.string=IP_MACHINE_1:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
I have kept the remote (site-to-site) properties empty on the first node and set the following on the other 2 nodes:
nifi.remote.input.host=IP_MACHINE_1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
Then I start all the nodes and get the following logs on node1:
2017-05-04 11:46:48,843 INFO [Process Cluster Protocol Request-10] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 3fadedb5-c6f0-4fe8-ad02-091799b5c242 (type=NODE_CONNECTION_STATUS_REQUEST, length=97 bytes) from MACHINE_IP_3 in 0 millis
2017-05-04 11:46:50,539 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 4b414f92-8486-4e7f-9c4c-e184279611b1 (type=HEARTBEAT, length=2458 bytes) from localhost:9993 in 1 millis
2017-05-04 11:46:52,083 INFO [Process Cluster Protocol Request-4] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 82956fb1-7696-41a6-a5de-36e2b4362889 (type=NODE_CONNECTION_STATUS_REQUEST, length=97 bytes) from MACHINE_IP_2 in 0 millis
2017-05-04 11:46:52,421 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request d9f3c705-4feb-452f-b6aa-ff2d01bd3f7f (type=HEARTBEAT, length=2456 bytes) from localhost:9994 in 1 millis
2017-05-04 11:46:53,848 INFO [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 45063482-48a2-4edf-b064-35840f6fcf6e (type=NODE_CONNECTION_STATUS_REQUEST, length=97 bytes) from MACHINE_IP_3 in 0 millis
2017-05-04 11:46:54,262 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to MACHINE_IP_1:9000 due to: java.net.ConnectException: Connection timed out (Connection timed out)
So node1 received heartbeat messages from localhost:9993 (which is actually node2). I checked ZooKeeper: it shows 2 primary nodes (the 2 nodes not running ZooKeeper) and all 3 nodes connected under the Cluster Coordinator. When I check the UI on Machine 1, I get: Cluster is still in the process of voting on the appropriate Data Flow.
On Machines 2 and 3, the UI shows the following message: Action cannot be performed because there is currently no Cluster Coordinator elected. The request should be tried again after a moment, after a Cluster Coordinator has been automatically elected.
I seem to have configured ZooKeeper and the other properties properly, yet a Cluster Coordinator is still not elected. Thanks in advance.
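The ConnectException in the last log line suggests the cluster protocol port on MACHINE_IP_1 cannot be reached from the other nodes (for example, an EC2 security group blocking it). A quick connectivity check, assuming netcat is installed; the ports are taken from the properties above:

nc -zv MACHINE_IP_1 9000   # nifi.cluster.node.protocol.port
nc -zv MACHINE_IP_1 2181   # ZooKeeper client port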
Labels: Apache NiFi
05-02-2017
12:07 PM
Hello, I want to access ZooKeeper using zkCli.sh. The command ls / shows the list of nodes. When I then run ls /node1, I can see the next internal node (say, subnode1). If that subnode is named "sub node1" (with a whitespace in the middle), how do I access it? I get a message that the file/directory was not found.
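A sketch of one approach, under the assumption that the zkCli.sh version in use honors double-quoted arguments (the znode names here come from the question):

ls /node1
ls "/node1/sub node1"
get "/node1/sub node1"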
Labels: Apache NiFi
05-02-2017
11:49 AM
Hello, thanks @Wynner and @Matt Clarke. I tried changing it to the public IP of the machine instead of localhost. I am now trying another approach that does not use the embedded ZooKeeper: I installed a separate ZooKeeper server instead. NiFi on a single node connects to ZooKeeper, but it does not join the cluster. It shows a popup with the following message: This node is currently not connected to the cluster. Any modifications to the data flow made here will not replicate across the cluster. Can you please help me understand what this message means? Since the node is not connected to the cluster, how do I connect it? My properties:
nifi.state.management.embedded.zookeeper.start=false
nifi.cluster.is.node=true
nifi.cluster.node.address=node2
nifi.cluster.node.protocol.port=9990
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=2 mins
nifi.cluster.flow.election.max.candidates=2
nifi.zookeeper.connect.string=107.22.208.210:2181
where 107.22.208.210 is the IP of the other machine, where I am running ZooKeeper. I want 2 EC2 instances to run NiFi, both connecting to the same ZooKeeper server at that IP, without using the embedded ZooKeeper. Also, in zookeeper.properties, I disabled the server.1 and server.2 properties.
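For comparison, a minimal standalone configuration for the external ZooKeeper described above; the values here are illustrative assumptions rather than taken from the post, the point being that a single standalone instance needs no server.N entries:

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# standalone mode: no server.1/server.2 lines needed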
04-26-2017
02:02 PM
I am trying to set up clustering with 2 nodes, one machine being local and the other an EC2 instance. When I try to connect, I get the following logs:
2017-04-26 19:10:54,126 INFO [main] /nifi-docs No Spring WebApplicationInitializer types detected on classpath
2017-04-26 19:10:54,157 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@6e4ac3f5{/nifi-docs,file:///home/jatin/Downloads/Softwares/nifi-1.1.1/work/jetty/nifi-web-docs-1.1.1.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.1.1.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.1.1.war}
2017-04-26 19:10:54,197 INFO [main] / No Spring WebApplicationInitializer types detected on classpath
2017-04-26 19:10:54,232 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@6418075c{/,file:///home/jatin/Downloads/Softwares/nifi-1.1.1/work/jetty/nifi-web-error-1.1.1.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.1.1.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.1.1.war}
2017-04-26 19:10:54,239 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@2f164fab{HTTP/1.1,[http/1.1]}{localhost:8088}
2017-04-26 19:10:54,239 INFO [main] org.eclipse.jetty.server.Server Started @80506ms
2017-04-26 19:10:55,124 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2017-04-26 19:10:55,131 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 9990
2017-04-26 19:10:55,161 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: localhost:8088
2017-04-26 19:11:01,265 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2017-04-26 19:11:01,265 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered
2017-04-26 19:11:05,694 INFO [Curator-Framework-0] o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
2017-04-26 19:11:05,696 INFO [Curator-ConnectionStateManager-0] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@65557951 Connection State changed to SUSPENDED
2017-04-26 19:11:05,705 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267) [curator-framework-2.11.0.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017-04-26 19:11:05,707 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
    at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197) ~[curator-client-2.11.0.jar:na]
    at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88) ~[curator-client-2.11.0.jar:na]
    at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:116) ~[curator-client-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:835) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267) [curator-framework-2.11.0.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017-04-26 19:11:05,791 ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64) [curator-framework-2.11.0.jar:na]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267) [curator-framework-2.11.0.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
My nifi.properties are as follows:
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=localhost
#nifi.cluster.node.address=107.23.49.252
nifi.cluster.node.protocol.port=9990
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=2 mins
nifi.cluster.flow.election.max.candidates=2
nifi.zookeeper.connect.string=localhost:2181,107.23.49.252:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=100 secs
nifi.zookeeper.root.node=/nifi
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.state.management.provider.cluster=zk-provider
nifi.state.management.embedded.zookeeper.start=true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
My zookeeper.properties file contents are as follows:
clientPort=9001
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=10000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30
server.1=localhost:2888:3888
server.2=107.23.49.252:2888:3888
And my state-management.xml file content is as follows:
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">localhost:2181,107.23.49.252:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
I also created a myid file with content 1 in the state/zookeeper directory on localhost. When I start the node, the service eventually starts and I can see the UI, but it shows me the following message: This node is currently not connected to the cluster. Any modifications to the data flow made here will not replicate across the cluster. And I see the state Disconnected in the top-right corner, where it should show the number of connected cluster nodes.
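For reference, a sketch of the myid setup described in the last paragraph; each server needs its own file whose content matches its server.N entry above (the echo commands and the second file are assumptions, only the first file is stated in the post):

echo 1 > ./state/zookeeper/myid   # on localhost (server.1)
echo 2 > ./state/zookeeper/myid   # on 107.23.49.252 (server.2)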
Labels: Apache NiFi