Member since 02-23-2017

15 Posts | 0 Kudos Received | 1 Solution

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 7118 | 12-20-2021 10:22 AM |
Posted 07-17-2024 07:22 AM

Did you find any solution to this issue? I have exactly the same problem: an InvokeHTTP GET hits a timeout on a flow file. A retry loop is in place, but the flow file gets stuck, timing out on every attempt. The same call, with the same configuration, works fine from Postman and from curl on the NiFi node itself. The only workaround is to restart the InvokeHTTP processor, and after a few hours and a few flow files a new occurrence appears.

Error:

```
2024-07-17 14:55:06,783 ERROR [Timer-Driven Process Thread-17] o.a.nifi.processors.standard.InvokeHTTP InvokeHTTP[id=ba9d2024-8df1-323d-b0ba-54402272b188] Routing to Failure due to exception: java.net.SocketTimeoutException: timeout: java.net.SocketTimeoutException: timeout
java.net.SocketTimeoutException: timeout
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.newTimeoutException(Http2Stream.java:593)
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.exitAndThrowIfTimedOut(Http2Stream.java:601)
	at okhttp3.internal.http2.Http2Stream$FramingSink.emitFrame(Http2Stream.java:510)
	at okhttp3.internal.http2.Http2Stream$FramingSink.close(Http2Stream.java:538)
	at okio.RealBufferedSink.close(RealBufferedSink.java:236)
	at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:63)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
	at okhttp3.RealCall.execute(RealCall.java:69)
	at org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:850)
	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1174)
	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
```

Thanks
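A minimal way to see the same symptom outside NiFi is to pair a short client read timeout with a server that never answers in time: every retry then fails identically, which is exactly how one flow file gets stuck in a retry loop. This is an illustrative Python sketch, not NiFi or OkHttp code; the port, timeout, and retry count are made-up values.

```python
# Illustrative reproduction (not NiFi code): a server that never responds
# within the client's read timeout fails on every retry of the same request.
import socket
import threading
import urllib.error
import urllib.request

def slow_server(port, hold_seconds):
    """Accept connections but stall past the client's timeout instead of answering."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        threading.Event().wait(hold_seconds)   # never send a response in time
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()

def get_with_retries(url, timeout, retries):
    """Mimic a retry loop on the same request, counting timed-out attempts."""
    timeouts = 0
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status, timeouts
        except (TimeoutError, socket.timeout, urllib.error.URLError):
            timeouts += 1                      # same failure on every retry
    return None, timeouts

slow_server(18080, hold_seconds=10)
status, timeouts = get_with_retries("http://127.0.0.1:18080/", timeout=0.5, retries=3)
print(status, timeouts)  # every attempt times out: None 3
```

If a plain client like this also times out against the real endpoint from the NiFi node, the problem lies in the network path or the server; if it succeeds while InvokeHTTP keeps failing, the processor's reused connections (HTTP/2 streams, as in the stack trace) become the suspect.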
Posted 12-20-2021 02:15 PM

Hi Christopher,

It's done! I hope this post can help other people!
Posted 12-20-2021 10:22 AM

Hello! Sorry, I was away for a few months.

There is no need for a cross-realm trust setup, because the trust is only needed in a single direction. Here is the solution, now running:

```
[realms]
  romulus = {
    admin_server = <...>
    kdc = <...>
  }
  remus = {
    admin_server = <...>
    kdc = <...>
  }
[domain_realm]
  <IP Name Node 1 romulus cluster> = romulus
  <IP Name Node 2 romulus cluster> = romulus
  <IP Name Node 1 remus cluster> = remus
  <IP Name Node 2 remus cluster> = remus
```

Let me explain: NiFi needs a default realm, but the default realm is not used to communicate with the project's kerberized Hadoop clusters (romulus and remus). To help NiFi, you must map the NameNode hostnames to their Kerberos realms in the [domain_realm] section. Without that mapping, NiFi tries to use the default realm instead of the realm of the principal defined in the project's HDFS processor, and fails.

It was a little bit tricky 😉
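The mapping logic above can be sketched in a few lines: krb5 resolves a host to a realm by exact hostname entry first, then by the longest matching ".domain" suffix, and only falls back to the default realm when nothing matches. This is an illustrative Python sketch of that lookup, not NiFi or MIT krb5 code, and the hostnames and realm names are placeholders like in the post.

```python
# Illustrative sketch of the lookup the [domain_realm] section performs:
# exact hostname entries win, then the longest matching ".domain" suffix,
# and only then the default realm. Names below are placeholders.
def realm_for_host(host, domain_realm, default_realm):
    host = host.lower()
    if host in domain_realm:               # exact NameNode entry, as in the fix
        return domain_realm[host]
    # fall back to the longest dotted-suffix rule
    best = None
    for pattern, realm in domain_realm.items():
        if pattern.startswith(".") and host.endswith(pattern):
            if best is None or len(pattern) > len(best[0]):
                best = (pattern, realm)
    return best[1] if best else default_realm

mapping = {
    "nn1.romulus.example": "romulus",
    "nn2.romulus.example": "romulus",
    "nn1.remus.example": "remus",
    "nn2.remus.example": "remus",
}
print(realm_for_host("nn1.remus.example", mapping, "romulus"))   # remus
print(realm_for_host("unlisted.example", mapping, "romulus"))    # romulus
```

This is why listing every NameNode explicitly works even when the default realm differs: the exact entry short-circuits the fallback to `default_realm`.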
Posted 12-20-2021 09:28 AM

Hi ZhouJun,

Have you found a solution to your issue? I have had this problem in my production and staging environments for a few months, without any solution. A network issue, maybe?

Thanks for your feedback.
Posted 08-14-2021 02:18 AM

I made some changes just to try:
- added the NameNode IP and server name to /etc/hosts
- updated dns_lookup_kdc from true to false
- changed udp_preference_limit from 0 to 1

No effect; it's still not working.
Posted 08-14-2021 02:15 AM

For sure!

For your information, because I hadn't written it yet: if I switch the default realm in krb5.conf from romulus to remus, then romulus stops working and remus is OK. It's as if NiFi gets lost when the default realm is not the same as the principal's realm, and tries to use another realm.

Furthermore, remus and romulus are not the real names; I need to change paths, server names, IPs and other objects before sharing the files with you 😉

Remus:

Romulus:
Posted 08-13-2021 09:50 AM

Thanks for your answer!

After waiting several days for another Hadoop cluster, it has now been provided. After some configuration and checks, it works, but with some trouble. Let me explain my issue.

I have two kerberized Hadoop clusters: romulus and remus. The Kerberos configuration for each one was added to krb5.conf. Here is an extract of that configuration:

```
[libdefaults]
  default_realm = romulus
  dns_lookup_realm = false
  dns_lookup_kdc = true
  rdns = false
  dns_canonicalize_hostname = false
  ticket_lifetime = 168h 0m 0s
  renew_lifetime = 90d
  forwardable = true
  udp_preference_limit = 0
  ...
[realms]
  romulus = {
    admin_server = <...>
    kdc = <...>
  }
  remus = {
    admin_server = <...>
    kdc = <...>
  }
```

With the kinit client, both realms work fine.

I configured two GetHDFS processors, each one with its own core-site.xml and hdfs-site.xml as Hadoop Configuration Resources, plus the associated keytab and principal.

Case one: default_realm is romulus and I start the romulus GetHDFS processor.
=> OK, I get the HDFS file (the flow file appears in the queue).

Case two: default_realm is still romulus and I start the remus GetHDFS processor.
=> KO:

```
Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: nn/<ip namenode active>@<ROMULUS>, expecting: nn/<ip namenode active>@<REMUS>
```

NiFi is trying to connect using the default realm romulus ("Server has invalid Kerberos principal: nn/<ip namenode active>@<ROMULUS>") instead of the correct realm remus ("expecting: nn/<ip namenode active>@<REMUS>").

To sum up: NiFi works well with the default realm but has an issue with the other realms. Is there a configuration I missed?

Thanks
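The error above boils down to a realm comparison on the service principal: the principal resolved for the server (`nn/<host>@REALM`) must carry the realm the client expects for that host. This is a hypothetical Python sketch of that check, not Hadoop or NiFi code; the helper names and IPs are invented for illustration.

```python
# Hypothetical helper mirroring the check behind the error message:
# same service and host, but a different realm, produces exactly the
# "invalid Kerberos principal ... expecting ..." situation from the post.
def parse_principal(principal):
    """Split 'service/host@REALM' into its three parts."""
    name, realm = principal.rsplit("@", 1)
    service, host = name.split("/", 1)
    return service, host, realm

def check_server_principal(offered, expected):
    service, host, realm = parse_principal(offered)
    exp_service, exp_host, exp_realm = parse_principal(expected)
    if (service, host) == (exp_service, exp_host) and realm != exp_realm:
        # mismatch only in the realm, as when the default realm is used
        # instead of the realm mapped to this NameNode
        return f"invalid realm {realm}, expecting {exp_realm}"
    return "ok" if offered == expected else "mismatch"

print(check_server_principal("nn/10.0.0.1@ROMULUS", "nn/10.0.0.1@REMUS"))
# invalid realm ROMULUS, expecting REMUS
```

Seen this way, the failure mode in "case two" is purely a realm-resolution problem, which is what the [domain_realm] mapping in the accepted solution fixes.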
Posted 08-12-2021 05:55 AM

Hi everyone,

I'm working on a new feature for an existing NiFi cluster: a new service providing an interface with several kerberized HDP clusters.

I would like to know whether a single NiFi cluster can use several realms in the same krb5 file. According to the official documentation (https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#kerberos_properties), NiFi can do it: "If necessary the krb5 file can support multiple realms."

That seems OK, but since I currently have no way to get several Hadoop clusters for testing (note: my cluster is already working with one kerberized Hadoop cluster), can anybody confirm or reject this design: one NiFi cluster with different realms, communicating with multiple kerberized HDP clusters?

Thanks for your help. As soon as I have several kerberized Hadoop clusters for testing, I will update this post.

Labels:
- Apache NiFi
- HDFS
- Kerberos
- Security
Posted 09-02-2019 06:42 AM

Hi,

Have you found a solution? I have the same problem with my production cluster. HDFS HA has been in place for several years without problems.

But recently we noticed that an HDFS client has to wait 20 seconds when the server hosting nn1 is shut down. Example with debug mode enabled:

```
19/08/29 11:03:05 DEBUG ipc.Client: Connecting to XXXXX/XXXXX:8020
19/08/29 11:03:23 DEBUG ipc.Client: Failed to connect to server: XXXXX/XXXXX:8020: try once and fail.
java.net.NoRouteToHostException: No route to host
```

Some information:
- Hadoop version: 2.7.3
- dfs.client.retry.policy.enabled: false
- dfs.client.failover.sleep.max.millis: 15000
- ipc.client.connect.timeout: 20000

Thanks for your help!
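A back-of-the-envelope model suggests where the stall can come from: even with "try once and fail", the client can wait up to the full IPC connect timeout on the dead NameNode before failing over to nn2. The quoted ipc.client.connect.timeout of 20000 ms is the same order of magnitude as the ~18 s gap in the debug log (11:03:05 to 11:03:23). This Python sketch is only an assumed model using the values quoted in the post, not Hadoop client code.

```python
# Assumed worst-case model (not Hadoop code): connect attempts on the dead
# NameNode, each bounded by the IPC connect timeout, plus any failover
# backoff between retries. Values are the ones quoted in the post.
def worst_case_failover_wait_ms(connect_timeout_ms, failover_sleep_max_ms, retries=0):
    """Total wait before the client gives up on the dead NN and fails over."""
    attempts = 1 + retries                 # "try once and fail" => 1 attempt
    return attempts * connect_timeout_ms + retries * failover_sleep_max_ms

wait = worst_case_failover_wait_ms(connect_timeout_ms=20_000,
                                   failover_sleep_max_ms=15_000)
print(wait)  # 20000
```

Under this model, lowering ipc.client.connect.timeout is the lever that would shrink the observed 20-second wait; whether that is safe for your network is a separate question.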