Member since: 01-19-2017

3676 Posts
632 Kudos Received
372 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 472 | 06-04-2025 11:36 PM |
|  | 996 | 03-23-2025 05:23 AM |
|  | 530 | 03-17-2025 10:18 AM |
|  | 1860 | 03-05-2025 01:34 PM |
|  | 1238 | 03-03-2025 01:09 PM |
			
    
	
		
		
12-19-2021 04:24 AM
@Koffi  Yes, you obviously cannot run safe mode commands when the NameNodes are down. I can see the JNs and ZKFC are all up. Can you run the command below on the last known good NameNode, nn01, assuming you are running it as root:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

If nn01 starts without any issue, run the same command on nn02; otherwise share the logs from nn01.
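To confirm the startup as it happens, a minimal check is to tail the NameNode log on nn01 right after issuing the command; the path below assumes the default HDP log directory, and the real file name carries your NameNode's hostname rather than the literal nn01:

# Assumed default HDP log location; substitute your actual NameNode hostname
tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-nn01.log

Watch for any fatal exception before moving on to nn02.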
						
					
12-17-2021 08:56 PM
@Koffi  This issue seems linked to your previous posting. Your last healthy NameNode was nn01, right? The assumption here is that you are logged in as root.

Instructions to fix that one JournalNode:

1) Put both nn01 and nn02 in safe mode (NN HA)

$ sudo su - hdfs
[hdfs@host ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON in nn01/<nn01_IP>:8020
Safe mode is ON in nn02/<nn02_IP>:8020

2) Save the namespace

[hdfs@host ~]$ hdfs dfsadmin -saveNamespace
Save namespace successful for nn01/<nn01_IP>:8020
Save namespace successful for nn02/<nn02_IP>:8020

3) Back up (zip/tar) the journal directory from the working JournalNode (nn01) and copy it to the non-working JournalNode (nn02), into something like /hadoop/hdfs/journal/<Cluster_name>/current (a rough copy sketch follows below)

4) Leave safe mode

[hdfs@host ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF in nn01/<nn01_IP>:8020
Safe mode is OFF in nn02/<nn02_IP>:8020

5) Restart HDFS

From Ambari you can now start nn01 first; when it comes up, start nn02.

Please let me know.
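As a rough sketch of step 3, assuming the default journal path used above, that the broken JournalNode on nn02 is stopped first, and that hdfs:hadoop is the owning user and group on your cluster (all of these are illustrative and must match your environment):

# On the working JournalNode (nn01): archive the current edits directory
tar -czf /tmp/journal_backup.tar.gz -C /hadoop/hdfs/journal/<Cluster_name> current
# Copy the archive to the broken JournalNode (nn02)
scp /tmp/journal_backup.tar.gz nn02:/tmp/
# On nn02: move the stale directory aside, restore the copy, then fix ownership
mv /hadoop/hdfs/journal/<Cluster_name>/current /hadoop/hdfs/journal/<Cluster_name>/current.bad
tar -xzf /tmp/journal_backup.tar.gz -C /hadoop/hdfs/journal/<Cluster_name>
chown -R hdfs:hadoop /hadoop/hdfs/journal/<Cluster_name>/current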
						
					
12-17-2021 08:45 PM
@Koffi  From the Ambari UI, are you seeing any HDFS alerts, for example on the ZKFailover Controller or the JournalNodes? If so, please share the logs.
						
					
10-27-2021 12:08 PM
@Rish  How much memory does your QuickStart VM have? Can you open the Resource Manager UI and check the logs using the application_id? The logs should give you an idea of what's happening.

Geoffrey
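If you prefer the command line to the RM web UI, the same logs can be pulled with the YARN CLI; <application_id> is a placeholder for the real id of your job:

yarn logs -applicationId <application_id>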
						
					
10-27-2021 11:16 AM
@Koffi  There are a couple of things here. You first need to resolve the "too many open files" issue by checking the ulimit:

$ ulimit -n

To increase it for the current session, depending on the above output:

ulimit -n 102400

Edit /etc/security/limits.conf to make the change permanent (a sketch follows below). Then restart the KDC and kadmin, using systemctl or the init scripts depending on your Linux version:

# /etc/rc.d/init.d/krb5kdc start
# /etc/rc.d/init.d/kadmin start

Then restart Atlas from the Ambari UI.

Please revert after these actions.

Geoffrey
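A minimal sketch of the permanent change in /etc/security/limits.conf, assuming the affected processes run as the atlas user; adjust the user name and the limit to whatever your session test showed was enough:

# /etc/security/limits.conf (illustrative entries)
atlas    soft    nofile    102400
atlas    hard    nofile    102400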
						
					
10-02-2021 11:11 AM
@Phanikondeti  Can you please share how you installed your NiFi, the version, and the install documents you followed? The error logs would be good to share too.
						
					
09-21-2021 06:28 AM
@vciampa  The solution is Replication Manager, which enables you to replicate data across data centers for disaster recovery scenarios. Replication Manager replicates HDFS, Hive, and Impala data, and supports Sentry to Ranger replication from CDH (version 5.10 and higher) clusters to CDP Private Cloud Base (version 7.0.3 and higher) clusters.

https://docs.cloudera.com/cdp/latest/data-migration/topics/cdp-data-migration-replication-manager-to-cdp-data-center.html

It's easy to use 🙂

Happy Hadooping
						
					
09-20-2021 10:53 PM
@vciampa  Please look at this document, which walks through the steps for upgrading from HDP to CDP Private.

Happy hadooping
						
					
09-09-2021 07:19 AM
@rachida_el-hamm  Here is a very good resource; sit back and sip your coffee or tea. It should help you resolve your MySQL issue.

Happy hadooping
						
					
09-06-2021 12:19 PM
@Anup123  I responded to a similar question; see SSL Sqoop. If you already have an SSL cert file, you can generate your own JKS file and import your cert into it.
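As a rough sketch, assuming your certificate is a PEM file named cert.pem and that the keystore path, alias, and password below are placeholders you replace with your own, the keytool import looks like this (keytool ships with the JDK and creates the JKS if it does not already exist):

keytool -importcert -file cert.pem -alias sqoop-ssl -keystore /path/to/truststore.jks -storepass changeit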
						
					