Member since 01-20-2014

578 Posts | 102 Kudos Received | 94 Solutions
        My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 6677 | 10-28-2015 10:28 PM |
|  | 3543 | 10-10-2015 08:30 PM |
|  | 5638 | 10-10-2015 08:02 PM |
|  | 4095 | 10-07-2015 02:38 PM |
|  | 2875 | 10-06-2015 01:24 AM |

06-26-2017 04:15 AM

The installation is not able to locate the Oozie shared library tar.gz file. I couldn't find it in any other location either.

Error:
gzip: /usr/lib/oozie/oozie-sharelib-yarn.tar.gz: No such file or directory
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

I'm running:
Version: Cloudera Enterprise Data Hub Edition Trial 5.11.1, CDH 5.0.0
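Not part of the original question, but a quick way to check whether the sharelib tarball exists anywhere on the host, plus a sketch of how a sharelib can be (re)created in HDFS if a tarball turns up. The NameNode URI and tarball path are placeholders, and the oozie-setup tool is assumed to be on the PATH from the Oozie package:

```bash
# Search the host for the Oozie sharelib tarball and check whether the
# installed oozie package actually provides it.
find / -name 'oozie-sharelib*.tar.gz' 2>/dev/null
rpm -ql oozie | grep -i sharelib

# If a tarball is found, the sharelib can be (re)created in HDFS with the
# Oozie setup tool (placeholder NameNode URI and tarball path).
sudo oozie-setup sharelib create \
    -fs hdfs://namenode.example.com:8020 \
    -locallib /path/to/oozie-sharelib-yarn.tar.gz
```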

03-28-2017 11:24 AM

In my case OpenJDK was causing the issue. Once I removed it and installed the correct version of the JDK, the distribution completed successfully.
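Not part of the original reply, but on a RHEL/CentOS host the offending OpenJDK packages can be identified and removed roughly like this before installing the JDK that Cloudera Manager expects; the package names shown are examples and should be adjusted to whatever the query returns:

```bash
# List any installed OpenJDK packages.
rpm -qa | grep -i openjdk

# Remove them (example package names; adjust to the output above).
sudo yum remove java-1.7.0-openjdk java-1.7.0-openjdk-headless

# After installing the desired JDK, confirm which Java is now active.
java -version
```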

03-03-2017 08:44 AM · 1 Kudo

If you choose to use a custom Java location, modify the host configuration to ensure the JDK can be found:

1. Open the Cloudera Manager Admin Console.
2. In the main navigation bar, click the Hosts tab and optionally click a specific host link.
3. Click the Configuration tab.
4. Select Category > Advanced.
5. Set the Java Home Directory property to the custom location.
6. Click Save Changes.
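Not part of the original answer, but a quick sanity check on the host before saving the property; the JDK path below is only an example of a custom location:

```bash
# Confirm the custom location actually contains a JDK before pointing the
# "Java Home Directory" property at it (example path).
ls -l /usr/java/jdk1.8.0_131/bin/java
/usr/java/jdk1.8.0_131/bin/java -version
```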

01-26-2017 07:39 PM · 1 Kudo

I was having the same issue and getting the same error, but when I ran commands from the directory where CDH is installed, all of them worked - hadoop, hdfs, spark-shell, etc.

For example, if your CDH installation's bin directory is /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin, you can test with:

$ cd /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin
[root@xyz bin]# ./hadoop

If that works, you need to set up the environment variable on your Unix master server. For RHEL:

[root@xyz ~]# echo "$PATH"
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@xyz ~]# export PATH=$PATH:/path/to/CDH_installation_bin_path

For me that path is /dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin:

[root@xyz ~]# echo "$PATH"
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin

To make the change permanent:

$ echo "export PATH=$PATH:/dat/anlt1/cld/cloudera/CDH-5.8.3-1.cdh5.8.3.p0.2/bin" >> /etc/profile

After that, restart (reboot) your server.
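A small addition, not from the original reply: a full reboot is not strictly required; re-reading /etc/profile in the current shell (or simply logging in again) picks up the new PATH:

```bash
# Re-read the system-wide profile in the current shell instead of rebooting;
# new login shells pick the change up automatically.
source /etc/profile
echo "$PATH"
```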

01-16-2017 09:05 PM

I had the same issue and resolved it by using the "Force Overwrite" option during the Hive replication setup.

11-18-2016 07:18 AM

Harsh,

In this thread you stated, "but do look into if your users have begun creating too many tiny files as it may hamper their job performance with overheads of too many blocks (and thereby, too many mappers)." Too many tiny files is in the eye of the beholder if those files are what get you paid.

I'm also seeing a block issue on two of our nodes, but a rebalance to 10% has no effect. Rebalancing to 8% improves things, but I suspect we're running into a small files issue.
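Not part of the original post, but one way to gauge whether small files are piling up is to compare file counts against content size per directory; /user/* is just an example path:

```bash
# Columns: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME.
# A very large FILE_COUNT with a comparatively small CONTENT_SIZE is a hint
# that a directory is full of tiny files.
hdfs dfs -count /user/*

# fsck's summary also reports the total number of files and blocks under a path.
hdfs fsck /user -files | tail -n 20
```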

11-08-2016 04:55 AM

@applejack wrote:

An easier option to fix this is to use VMware Converter Standalone if you have access to that application. You can install VMware Converter Standalone on any Windows machine and use it to convert the original .vmdk files (downloaded from Cloudera) to a new ESX host. You will need to unzip the files downloaded from Cloudera and extract the .vmx and .vmdk files to a network share, or put them on the same server where you install VMware Converter; either way will work.

1. Install VMware Converter Standalone on a Windows server.
2. Map a drive to the .vmx files from the VMware Converter server, or copy the .vmx and .vmdk classroom files locally to the VMware Converter server.
3. Start up VMware Converter and choose "Backup image or third-party virtual machine".
4. Browse to the network share (or local drive) that has the .vmx file.
5. Connect to your vCenter Server or a stand-alone ESXi host.
6. Choose your display name.
7. Choose which storage volume to store the Hadoop training virtual machine on.
8. Configure your network settings, optionally change the storage to thin provisioned, and make any other changes to the virtual machine that you want.

After that the conversion process should run, and within 10-15 minutes it should convert the files over to the new server and you should be good to go.

Though I had to upgrade my vCenter Converter Standalone from v4.0.1 to v6.1.0 (latest at the time of writing), the installation method described in this post works perfectly, and is indeed the simplest way to get the VM image onto an ESXi box. Thanks!

10-17-2016 01:40 AM

Hi, my issue was solved by updating SUSE 11 SP4. I installed the pending updates, since the OS was still in its initial state, and the error was gone after that.
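Not in the original reply, but for reference, on SLES the pending updates can be applied from the command line roughly like this, assuming the update repositories are already registered:

```bash
# Refresh repository metadata and apply all pending updates on SLES.
sudo zypper refresh
sudo zypper update
```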

05-10-2016 02:07 AM

Great, thanks Goutam 🙂

Apart from YARN, will any other CDH services (Hive, Pig, Flume, Spark, Impala, Hue, Oozie, HBase) also require exec permission on /tmp? Will having noexec on /tmp cause any problem for cluster functioning? What would be your recommendation here?

Thanks,
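Not part of the original question, but for context, the options currently in effect on /tmp can be checked like this:

```bash
# Show the mount options currently applied to /tmp (look for "noexec").
findmnt /tmp
mount | grep ' /tmp '
```

JVM-based services can often be pointed at another scratch directory through the java.io.tmpdir system property if /tmp has to stay noexec, though where to set that depends on the individual service.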

04-17-2016 04:20 AM

Are you talking about the log4j file used by the Flume agents? If yes, it should be in the /etc/flume-ng/conf folder.
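Not in the original reply, but on a package-based Flume install this can be confirmed directly; the log4j.properties file name is the usual default and is assumed here:

```bash
# List the Flume configuration directory and inspect the log4j settings.
ls -l /etc/flume-ng/conf/
cat /etc/flume-ng/conf/log4j.properties
```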