Member since 01-08-2018

133 Posts | 31 Kudos Received | 21 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 18427 | 07-18-2018 01:29 AM |
| | 3590 | 06-26-2018 06:21 AM |
| | 6189 | 06-26-2018 04:33 AM |
| | 3069 | 06-21-2018 07:48 AM |
| | 2750 | 05-04-2018 04:04 AM |

06-26-2018 06:20 AM

Have you run "Create Root Directory" from HBase's available actions in Cloudera Manager? This creates the "/hbase" directory (the default value) in HDFS, owned by user hbase and group hbase.

If it has already been created, check the permissions:

    sudo -u hdfs hdfs dfs -ls /

If not, create it (you can do this from Cloudera Manager as mentioned above):

    sudo -u hdfs hdfs dfs -mkdir /hbase
    sudo -u hdfs hdfs dfs -chown hbase:hbase /hbase

06-26-2018 04:33 AM

OK, from the log it is obvious that the issue for Spark is the old JDK.

When you upgraded Java, did you define the Java home in "/etc/default/cloudera-scm-server"? E.g.:

    export JAVA_HOME="/usr/lib/jvm/java-8-oracle/"

Can you send the relevant "/var/log/cloudera-scm-server/cloudera-scm-server.out"?

06-25-2018 06:41 AM

> My CDH version is 5.14 but there is no Spark 2 parcel for 5.14; there are parcels for 5.13 and 5.12. Is this the problem I am facing?

This is not an issue. Spark 2 is built on CDH 5.13 but works fine with CDH 5.14. Check the compatibility notes: https://www.cloudera.com/documentation/spark2/latest/topics/spark2_requirements.html#cdh_versions

According to the screenshot, the procedure failed while distributing the configuration to the NameNode. Can you check the "stdout" and "stderr" output? You can copy it here so we can take a look.

06-21-2018 07:48 AM

You should not worry about compatibility between KTS and CDH. If you check https://www.cloudera.com/documentation/enterprise/latest/topics/encryption_ref_arch.html#concept_npk_rxh_1v you will see that CDH connects to KMS, and KMS connects to KTS. So what you have to check is whether the KMS version that is compatible with KTS 3.8 is also compatible with CDH 5.14.2.

05-29-2018 03:23 AM

Everything seems to be OK. I have the same configuration and cannot reproduce your issue. I have CM 5.14.3 installed, so my jar is /usr/share/cmf/lib/agent-5.14.3.jar. What version are you using?

05-28-2018 07:46 AM

Regarding the query: "ETL_DEV" is probably the display name, so instead of "AND clusterName = ETL_DEV" you should try "AND clusterDisplayName = ETL_DEV".

The fact that nothing is displayed is an indication that the configuration is not complete. Can you check that the directory specified in "Cloudera Manager Container Usage Metrics Directory" exists in HDFS and that the user defined in "Container Usage MapReduce Job User" has full permissions on it? If not, you will need to re-run the "Create YARN Container Usage Metrics Dir" command.

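For illustration, a complete tsquery filtering on the display name might look like this (the metric name here is just a placeholder; substitute the one you are actually charting):

```
SELECT cpu_percent_across_hosts WHERE category = CLUSTER AND clusterDisplayName = "ETL_DEV"
```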
05-28-2018 01:34 AM

This is normal behavior. You should either create a dynamic directory name (e.g. output_dir_timestamp), although you may end up with a lot of directories, or add an HDFS action to delete the output directory just before the Sqoop action. I recommend the latter approach.

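As a sketch of the second approach (the action name, JDBC URL, table, and paths below are placeholders), the Sqoop action can also clean up the old output directory itself via Oozie's prepare block, which has the same effect as a separate HDFS delete action:

```
<action name="sqoop-import">
  <sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <!-- delete any output left over from a previous run -->
    <prepare>
      <delete path="${nameNode}/user/${wf:user()}/output_dir"/>
    </prepare>
    <command>import --connect ${jdbcUrl} --table MY_TABLE --target-dir /user/${wf:user()}/output_dir -m 1</command>
  </sqoop>
  <ok to="end"/>
  <error to="fail"/>
</action>
```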
05-27-2018 11:43 PM

It is strange; your config seems to be OK, and I don't know what the problem is. I would recommend checking the output of

    hostname -f

and

    host -t A 10.142.0.4

If the output is the FQDN, then probably something is wrong with your version of Dnstest.

Can you also check the "/etc/nsswitch.conf" file? Usually the hosts line is:

    hosts:          files dns

Check whether something else comes before that, e.g. "sss files dns". The order matters: if "files" is first, then the local "/etc/hosts" will be checked first. See http://man7.org/linux/man-pages/man5/nsswitch.conf.5.html

05-23-2018 05:45 AM

Can you post your /etc/hosts file?

05-07-2018 12:42 AM

To be honest, I have not used LZO in Spark. I assume that you have Spark running under YARN rather than standalone. In that case, the first thing I would check is that LZO is configured in YARN's available codecs ("io.compression.codecs"). Moreover, have you configured HDFS as described in https://www.cloudera.com/documentation/enterprise/latest/topics/cm_mc_gpl_extras.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--7c3e ?

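For illustration, once the GPL Extras parcel is installed the codec list should include the LZO classes; the exact set of other codecs depends on your configuration, so treat this as a sketch:

```
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
```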