Member since 03-10-2017

170 Posts
80 Kudos Received
32 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1168 | 08-12-2024 08:42 AM |
|  | 1995 | 05-30-2024 04:11 AM |
|  | 2581 | 05-29-2024 06:58 AM |
|  | 1738 | 05-16-2024 05:05 AM |
|  | 1330 | 04-23-2024 01:46 AM |
			
    
	
		
		
06-30-2023 02:37 AM
1 Kudo

You need paywall credentials to get the CFM parcels. You can get the paywall credentials from your contact on the Cloudera Accounts team. I hope this helps. Thank you.
						
					
    
	
		
		
06-05-2023 04:32 AM

Partition-level HDFS directory disk usage is not available, since this works on a given directory path only and not at the disk level. Thank you.
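For reference, here is a minimal sketch of how HDFS reports usage for a given directory path (for example, a single partition directory). The path shown is a placeholder, and this is essentially the programmatic equivalent of running hdfs dfs -du on that path.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDirUsage {
    public static void main(String[] args) throws Exception {
        // Loads cluster settings from core-site.xml / hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Usage is reported for the directory path you ask about, e.g. one partition directory (placeholder path).
        Path partitionDir = new Path("/warehouse/tablespace/managed/hive/db.db/tbl/dt=2023-06-01");
        ContentSummary summary = fs.getContentSummary(partitionDir);

        System.out.println("Logical size (bytes): " + summary.getLength());
        System.out.println("Space consumed incl. replication (bytes): " + summary.getSpaceConsumed());
        fs.close();
    }
}
```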
						
					
    
	
		
		
04-24-2023 05:51 AM
1 Kudo

This is not a permission issue at this point, but more of an issue between the NameNode and DataNode. I would request that you start a new thread for HDFS. Thank you.
						
					
    
	
		
		
04-24-2023 02:22 AM
1 Kudo

Looking at the error snippet, this seems to be an HDFS-level issue, but just to make sure: I assume you are using the NiFi PutHDFS processor to write into the HDFS cluster, so I would check the following:

- Check whether the processor is configured with the latest copies of the hdfs-site.xml and core-site.xml files under Hadoop Configuration Resources.
- Try to write into the same HDFS location from an HDFS client outside of NiFi (see the sketch below) and see whether it works or not, to isolate whether this is an HDFS issue or a configuration issue on the NiFi processor side.

Thank you
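Here is a minimal sketch of that second check, assuming a Java HDFS client run outside of NiFi; the site-file locations and the target path are placeholders, so substitute the same files and directory your PutHDFS processor uses. If this write also fails, the problem is on the HDFS side rather than in the NiFi processor configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at the same site files the PutHDFS processor uses (placeholder locations).
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

        FileSystem fs = FileSystem.get(conf);
        // Hypothetical target file; use the same directory PutHDFS writes to.
        Path target = new Path("/data/nifi/landing/write-check.txt");
        try (FSDataOutputStream out = fs.create(target, true)) {
            out.writeBytes("write check from outside NiFi\n");
        }
        System.out.println("Wrote " + target + " successfully");
        fs.close();
    }
}
```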
						
					
    
	
		
		
04-12-2023 06:33 AM

There is no specific processor built only for Oracle, but if you are talking about an Oracle database, you can use ExecuteSQL/PutSQL with a DBCPConnectionPool controller service. DBCPConnectionPool is a generic implementation for connecting to any database; it requires a local copy of the database-specific client (JDBC) driver and the driver class name. Please refer to https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.21.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html

If this response helped with your issue, please take a moment to log in and click on "Accept as Solution" below this post.

Thank you,
Chandan
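As a quick way to validate the values you would put into the DBCPConnectionPool controller service (connection URL, driver class name, credentials), here is a minimal plain-JDBC sketch. The host, service name, and credentials are placeholders, and the Oracle JDBC driver jar (for example ojdbc8.jar) must be on the classpath, just as the controller service needs a local copy of it.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same values you would configure on the DBCPConnectionPool controller service (placeholders).
        String url = "jdbc:oracle:thin:@//db-host.example.com:1521/ORCLPDB1";
        String driverClass = "oracle.jdbc.OracleDriver";
        String user = "nifi_user";
        String password = "change_me";

        Class.forName(driverClass); // fails fast if the driver jar is missing from the classpath
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            if (rs.next()) {
                System.out.println("Connection OK, query returned: " + rs.getInt(1));
            }
        }
    }
}
```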
						
					
    
	
		
		
03-29-2023 05:47 AM

No. You can evaluate GenerateTableFetch --> ExecuteSQL, where the GenerateTableFetch "Maximum-value Columns" setting can help. Refer to https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.20.0/org.apache.nifi.processors.standard.GenerateTableFetch/index.html

If this response helped with your issue, please take a moment and click on "Accept as Solution" below this post.

Thank you
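To illustrate the pattern that the "Maximum-value Columns" setting automates (the processor keeps track of the largest value it has seen for the named column and only generates queries for rows above it), here is a rough plain-JDBC sketch. The connection URL, table, and column names are placeholders, and GenerateTableFetch manages this high-water mark in its processor state rather than in your code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IncrementalFetchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL/credentials for whatever source database you use;
        // in NiFi this connection comes from the DBCPConnectionPool controller service.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db-host.example.com:5432/sales", "nifi_user", "change_me")) {

            // GenerateTableFetch persists this high-water mark in processor state for you.
            long lastSeenMax = 1000L;

            // Only rows whose maximum-value column exceeds the last observed value are fetched.
            String sql = "SELECT * FROM orders WHERE order_id > ? ORDER BY order_id";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, lastSeenMax);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastSeenMax = rs.getLong("order_id"); // advance the high-water mark
                    }
                }
            }
            System.out.println("New high-water mark for order_id: " + lastSeenMax);
        }
    }
}
```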
						
					
    
	
		
		
03-29-2023 03:00 AM

The CaptureChangeMySQL processor would be a fit for your requirement.

If this response helped with your issue, please take a moment and click on "Accept as Solution" below this post.

Thank you
						
					
    
	
		
		
03-28-2023 04:50 AM
1 Kudo

PublishKafka writes messages only to those Kafka nodes that are the leaders for a given topic partition. It is then Kafka's internal job to keep the in-sync replicas (ISR) in sync with the leader. So, with respect to your question:

- When the publisher client is set to run, the client sends a metadata request to the bootstrap servers listed in the bootstrap.servers configuration to get the metadata about the topic partitions. That is how the client knows which brokers are the leaders for the given topic partitions, and the publisher client writes to those leaders.
- With "Guarantee Single Node" delivery, if a Kafka broker node that happened to be the leader for a topic partition goes down, Kafka will assign a new leader for that partition from the ISR list, and through the Kafka client setting metadata.max.age.ms the producer refreshes its metadata and learns which broker is the new leader to produce to (see the producer sketch below).

If this response helped with your issue, please take a moment and click on "Accept as Solution" below this post.

Thank you
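To make the client-side settings concrete, here is a minimal sketch of a plain Java producer using the properties discussed above. The broker addresses and topic are placeholders; PublishKafka builds an equivalent producer internally, and its "Guarantee Single Node" setting roughly corresponds to acks=1 below.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LeaderAwareProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Only used to bootstrap; the client then discovers the partition leaders from metadata.
        props.put("bootstrap.servers", "broker1.example.com:9092,broker2.example.com:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Roughly what "Guarantee Single Node" means: only the partition leader must acknowledge the write.
        props.put("acks", "1");
        // How often the producer refreshes metadata, so it learns about a newly elected leader.
        props.put("metadata.max.age.ms", "30000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key-1", "hello")).get();
        }
    }
}
```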
						
					
    
	
		
		
03-28-2023 04:19 AM
2 Kudos

This error has to do with the schema used in the Record Writer controller service used by ExecuteSQLRecord. I would validate the provided schema and check whether it is in the correct Avro format, as the Avro schema parser is complaining about an illegal character in the provided schema.

If this response helped with your issue, please take a moment and click on "Accept as Solution" below this post.

Thank you
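One quick way to validate the schema outside of NiFi is to run it through the same Avro schema parser. Here is a minimal sketch; the schema text is a placeholder, so paste the schema configured on your Record Writer instead.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaParseException;

public class AvroSchemaCheck {
    public static void main(String[] args) {
        // Placeholder schema; replace with the schema text configured on the Record Writer.
        String schemaText = "{\"type\":\"record\",\"name\":\"row\",\"fields\":"
                + "[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}";
        try {
            Schema schema = new Schema.Parser().parse(schemaText);
            System.out.println("Schema parsed OK: " + schema.getFullName());
        } catch (SchemaParseException e) {
            // The same kind of error ExecuteSQLRecord surfaces, e.g. an illegal character in the schema.
            System.err.println("Invalid Avro schema: " + e.getMessage());
        }
    }
}
```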
						
					
    
	
		
		
03-27-2023 06:28 AM

Regarding step 2: you have to determine the HDFS directory where NiFi PutParquet will write the files and who has access to that directory path on HDFS; that user's principal and associated keytab are required. If HDFS is secured by Kerberos, then the user has to obtain a Kerberos ticket by running kinit with the user principal and keytab in order to access it on the HDFS side.

About step 3: there is no need to install the Kerberos service. NiFi only needs a Kerberos client on the NiFi hosts, which is installed by default on most Linux operating systems. The client config file is located at /etc/krb5.conf; it tells NiFi PutParquet which Kerberos server to contact in order to obtain a Kerberos ticket using the configured user principal/keytab details, so the krb5.conf file has to be updated with the Kerberos server details, i.e. the KDC and realm details.

If this additional response helped with your issue, please take a moment and click on "Accept as Solution" below this post.

Thank you
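As an illustration of what PutParquet does with the configured principal and keytab, here is a minimal sketch using the Hadoop client libraries; the principal, keytab path, site-file locations, and target path are placeholders. It relies on /etc/krb5.conf on the host pointing at the correct KDC and realm, which is the step 3 part described above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHdfsWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same site files the cluster uses (placeholder locations); hadoop.security.authentication
        // must be set to "kerberos" in core-site.xml for the login below to take effect.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        UserGroupInformation.setConfiguration(conf);

        // Same role as PutParquet's Kerberos principal / keytab properties (placeholders here).
        UserGroupInformation.loginUserFromKeytab("nifi@EXAMPLE.COM", "/etc/security/keytabs/nifi.keytab");

        FileSystem fs = FileSystem.get(conf);
        Path target = new Path("/data/parquet/landing/_login_check");
        fs.create(target, true).close();
        System.out.println("Authenticated write to " + target + " succeeded");
        fs.close();
    }
}
```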
						
					