Member since: 10-08-2016

Posts: 59
Kudos Received: 16
Solutions: 0

06-13-2018 12:31 AM (1 Kudo)

I'm using NiFi 1.6.0 in a 3-node cluster. When I use GetSFTP (scheduled on All Nodes) in the clustered NiFi, the cluster seems to distribute the acquired data evenly among the nodes. Does this mean that all 3 servers are pulling the data over SFTP evenly? I also tried getting the listings and sending them over Site-to-Site back to my own cluster before FetchSFTP, but it did NOT distribute the 0-byte listing FlowFiles evenly among the nodes, so the FetchSFTP load was not evenly balanced. What would be the best practice to load balance SFTP retrieval in a NiFi cluster?

John
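
For reference, here is roughly the layout I tried for the second approach (this assumes ListSFTP for the listing step; the input port name is just a placeholder):

ListSFTP (Primary Node only)
  -> Remote Process Group (Site-to-Site back to this same cluster, pointing at the input port "sftp-listings")
Input Port "sftp-listings" (the 0-byte listing FlowFiles arrive here)
  -> FetchSFTP (scheduled on All Nodes)
  -> downstream processing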
						
					
Labels: Apache NiFi

06-12-2018 02:00 PM

@Matt Clarke Any ideas? This thing is killing us!

06-11-2018 01:22 AM

@Matt Burgess Any ideas as well?

06-11-2018 01:21 AM

@Matt Clarke Any ideas?

06-10-2018 11:35 PM (1 Kudo)

So I have 4 740xd servers, each with 80 threads and 384 GB of RAM. How should I best size the heap for each NiFi VM on these servers?

Option 1: one VM per server with 80 CPUs, NiFi heap set to 300 GB, with the 4 NiFis clustered.

Option 2: ten VMs per server with 8 CPUs each, NiFi heap set to 32 GB per VM, with all of these NiFis clustered across the virtualized 740s and across the 4 hardware 740s.

Note: all of the files going through these NiFis are less than 4 GB, and 90% of them are under 10 MB.

Also, which Java garbage collector (generation settings) should I use?

Attached are two pictures explaining my options.

John
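
For what it's worth, this is roughly what the heap lines in conf/bootstrap.conf would look like per VM under Option 2; the G1 line is only a candidate, since the collector choice is exactly what I'm asking about:

# conf/bootstrap.conf heap sketch for Option 2 (one 32 GB-heap NiFi per VM)
java.arg.2=-Xms32g
java.arg.3=-Xmx32g
# candidate collector for a heap this size - part of my question above
java.arg.13=-XX:+UseG1GC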
						
					
Labels: Apache NiFi

06-07-2018 01:43 PM

Turns out I had nifi.state.management.embedded.zookeeper.start=false. Once I changed that to true, it worked.
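
For anyone who hits the same thing, these are the relevant lines in my nifi.properties on each node (the zookeeper.properties path shown is the stock default; adjust it if yours differs):

# nifi.properties - embedded ZooKeeper
nifi.state.management.embedded.zookeeper.start=true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties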
						
					

06-05-2018 05:53 PM

@Matt Burgess or @Matt Clarke Any ideas?

06-05-2018 04:33 AM

Hi everyone!

I keep getting this error:

ERROR [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl Background retry gave up
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss

My editable config files are in the associated directories on my Google Drive (link below) and attached; this is a 3-node cluster:

https://drive.google.com/drive/folders/11xM-sz8mUvpaiOOS4aiZ94TQGHHzConF?usp=sharing

I made sure firewalld was off and all the ports used are free, and I followed these guides:

https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_administration/content/clustering.html

https://community.hortonworks.com/articles/135820/configuring-an-external-zookeeper-to-work-with-apa.html

Any help would be greatly appreciated!

Here are my configs in plain text for reference:

########## 10.0.0.89 Server ##########
 
#### zookeeper.properties file ####
clientPort=24489
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30
server.1=10.0.0.89:2888:3888
server.2=10.0.0.227:2888:3888
server.3=10.0.0.228:2888:3888
 
 
 
#### nifi.properties cluster section ####
 
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.0.0.89
nifi.cluster.node.protocol.port=24489
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.0.0.89:24489,10.0.0.227:24427,10.0.0.28:24428
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
 
 
########## 10.0.0.227 Server ##########
 
#### zookeeper.properties file ####

clientPort=24427
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30
server.1=10.0.0.89:2888:3888
server.2=10.0.0.227:2888:3888
server.3=10.0.0.228:2888:3888
 
 
#### nifi.properties cluster section ####
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.0.0.227
nifi.cluster.node.protocol.port=24427
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.0.0.89:24489,10.0.0.227:24427,10.0.0.28:24428
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
 
 
 
 
 
 
########## 10.0.0.228 Server ##########
 
#### zookeeper.properties file ####
 
clientPort=24428
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30
server.1=10.0.0.89:2888:3888
server.2=10.0.0.227:2888:3888
server.3=10.0.0.228:2888:3888
 
 
 
#### nifi.properties cluster section ####
# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.0.0.228
nifi.cluster.node.protocol.port=24428
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.0.0.89:24489,10.0.0.227:24427,10.0.0.28:24428
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
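
(In case it's relevant: a ZooKeeper ensemble also needs a myid file in each node's dataDir whose contents match that node's server.N id. With the dataDir=./state/zookeeper from the files above, relative to each NiFi home, that would be for example:)

# on 10.0.0.89 (server.1)
echo 1 > ./state/zookeeper/myid
# on 10.0.0.227 (server.2)
echo 2 > ./state/zookeeper/myid
# on 10.0.0.228 (server.3)
echo 3 > ./state/zookeeper/myid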
 
						
					
Labels: Apache NiFi

05-07-2018 04:53 PM

Matt,

Thank you for your thorough feedback! How does NiFi handle load balancing on its own if there is one ListenHTTP processor on a cluster? Would we give the customer's sending endpoint a single IP to reach us, and NiFi would then broker that connection to one of the nodes in the cluster? Or would the data come through one node and then be distributed to the other nodes?

Basically I'm trying to use NiFi clustering as the load balancer 🙂 Any suggestions? If that doesn't work, I was going to try out HAProxy in front of the cluster.

John
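
To make the HAProxy fallback concrete, this is the kind of minimal config I had in mind; the ListenHTTP port 8011, the bind port, and the server names are placeholders, and the addresses are just our three cluster nodes:

# haproxy.cfg sketch - round-robin incoming HTTP posts across the NiFi nodes
frontend nifi_listenhttp
    bind *:8080
    mode http
    default_backend nifi_nodes

backend nifi_nodes
    mode http
    balance roundrobin
    server nifi1 10.0.0.89:8011 check
    server nifi2 10.0.0.227:8011 check
    server nifi3 10.0.0.228:8011 check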
						
					