Member since 07-19-2016
91 Posts
10 Kudos Received
1 Solution

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 2703 | 08-12-2016 05:05 PM |
05-23-2017 04:08 PM
Hi guys, it seems the content repository has to be configured as a path on local disk. Can we point the content repository at S3 or another remote location (e.g. HDFS) instead? Thanks.
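For reference, a minimal sketch of how the repositories are pointed at local-disk paths in nifi.properties (these are the stock property names and defaults; the values in a given install may differ):

    # Content repository (local disk by default)
    nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
    nifi.content.repository.directory.default=./content_repository
    # FlowFile and provenance repositories are local-disk paths as well
    nifi.flowfile.repository.directory=./flowfile_repository
    nifi.provenance.repository.directory.default=./provenance_repository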
Labels: Apache NiFi
08-29-2016 05:15 PM
Hi @Bryan Bende, thanks for your response. Since NiFi 1.0 will use a zero-master design, will the MapCacheServer be redesigned for the new architecture, or will it still be available only on the elected leader node? Thanks.
08-29-2016 03:44 PM
Hi guys, I have a couple of questions about NCM failure. My understanding is that the main responsibilities of the NCM are:
- Communicating dataflow changes to the nodes
- Receiving health and status information from the nodes

If the NCM fails, the existing dataflow keeps running but can't be changed, new nodes can't join, and dead nodes can't be detected. Given that, consider these two scenarios:
1. A ListSFTP processor is deployed on the primary node and distributes file paths to the worker nodes running FetchSFTP. If the NCM fails and one of the worker nodes also fails, will the primary node keep sending data to the dead worker? How does the primary node know that node is dead?
2. If we run the DistributedMapCacheServer on the NCM and the NCM later fails, does that mean the worker nodes can no longer reach the cache server? Is there any way to make it highly available?
Thanks.
Labels: Apache NiFi
08-29-2016 03:33 PM
@Pierre Villard In cluster mode, if we run the MapCacheServer only on the NCM, does that mean the MapCacheServer becomes unavailable once the NCM fails? Is there any way to avoid that? Thanks.
08-12-2016 05:05 PM
Figured it out. It's not a firewall issue. Since the cluster and the edge node are deployed in AWS, the site-to-site remote host on the edge node has to be set to its public IP rather than its private IP.

Singapore Dataflow: GetFile -> output_port

    # Site to Site properties
    nifi.remote.input.socket.host=sig-public-ip
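For completeness, a minimal sketch of the full site-to-site block on the Singapore instance under this fix (sig-public-ip stands in for the instance's public IP or public DNS name; the port and secure settings match the earlier config):

    # Site to Site properties (Singapore / edge instance)
    nifi.remote.input.socket.host=sig-public-ip
    nifi.remote.input.socket.port=9090
    nifi.remote.input.secure=false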
08-11-2016 08:09 PM
Hi @milind pandit I deployed on an AWS instance, and it seems iptables is not even running:

    ubuntu@ip-172-31-xx-xx:~$ sudo service iptables stop
    iptables: unrecognized service

Telnet works on both ports 8080 and 9090. The output port can be detected by the RPG, but data can't be transferred. Thanks.
08-11-2016 02:46 PM
I have two NiFi instances deployed in AWS, and I'd like to test an RPG across them.

US-East: 172.31.48.1
Dataflow: RPG -> PutFile

    # Site to Site properties
    nifi.remote.input.socket.host=172.31.48.1
    nifi.remote.input.socket.port=9090
    nifi.remote.input.secure=false
    # web properties #
    nifi.web.http.host=172.31.48.1
    nifi.web.http.port=8080

Singapore: 172.31.11.2
Dataflow: GetFile -> output_port

    # Site to Site properties
    nifi.remote.input.socket.host=172.31.11.2
    nifi.remote.input.socket.port=9090
    nifi.remote.input.secure=false
    # web properties #
    nifi.web.http.host=172.31.11.2
    nifi.web.http.port=8080

The GetFile processor can fetch files and queue them on the Singapore instance, and the RPG on the US-East instance can successfully detect "output_port" when its URL is set to "http://sig-public-ip:8080/nifi". However, the RPG log shows:

    14:38:14 UTC ERROR e93c67c6-3472-44e4-aa3e-78613ce37e47 us-ease-public-ip:8080
    RemoteGroupPort[name=output_port,target=http://sig-public-ip:8080/nifi] failed to communicate
    with http://sig-public-ip:8080/nifi due to java.net.SocketTimeoutException

When I test the network connection:

    ubuntu@ip-172-31-48-1:/opt$ telnet sig-public-ip 9090
    Trying sig-public-ip...
    Connected to sig-public-ip.

    ubuntu@ip-172-31-11-2:/opt$ telnet us-ease-public-ip 9090
    Trying us-ease-public-ip...
    Connected to us-ease-public-ip.

The same works for port 8080. Are there any settings I need to change to enable the RPG communication? Thanks.
Labels: Apache NiFi
08-10-2016 06:13 PM
Hi @jsequeiros After setting up the site-to-site properties on all nodes, FetchSFTP works as expected and both worker nodes take part in fetching files. How do we distribute the work evenly among the workers, though? Sometimes one worker takes all the files, and sometimes one takes 3 while the other takes 1 (I have 4 test files in total). Thanks.
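For context, the per-node site-to-site settings referenced above look roughly like this in each node's nifi.properties (the host value is a placeholder; each node uses its own address, and the port must be reachable between nodes):

    # Site to Site properties (set on every node in the cluster)
    nifi.remote.input.socket.host=<node-ip-or-hostname>
    nifi.remote.input.socket.port=9090
    nifi.remote.input.secure=false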
08-10-2016 01:29 PM
Hi @jsequeiros These two settings solved the issue for me; the IOException refers to ZooKeeper state reading. One more question: in my test, ListSFTP (on the primary node) sends a listing of 4 files to FetchSFTP, but only the primary node fetches all 4 files instead of the work being distributed evenly across the two workers. Any idea how tasks are allocated from ListSFTP to FetchSFTP?
08-09-2016 08:52 PM
Is it because I didn't set up the State Provider? In /conf/state-management.xml:

    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String"></property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">30 seconds</property>
        <property name="Access Control">CreatorOnly</property>
        <property name="Username">nifi</property>
        <property name="Password">nifi</property>
    </cluster-provider>
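Note that the Connect String above is empty. If the ZooKeeper state provider is meant to be used, it would need to point at the ZooKeeper ensemble; a purely hypothetical example (hostnames are placeholders for your own ZooKeeper nodes):

    <property name="Connect String">zk-host1:2181,zk-host2:2181,zk-host3:2181</property>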