Member since 08-16-2016

642 Posts | 131 Kudos Received | 68 Solutions

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 3971 | 10-13-2017 09:42 PM |
| | 7460 | 09-14-2017 11:15 AM |
| | 3789 | 09-13-2017 10:35 PM |
| | 6023 | 09-13-2017 10:25 PM |
| | 6595 | 09-13-2017 10:05 PM |
			
    
	
		
		
07-11-2017 09:10 AM

@csguna That JIRA mentions a different log and has already been fixed, so it is likely not the issue reported here. I did see the JIRA for the FD leak with UDFs; in that case the leaked FDs point to the UDF jar file, which is not what is seen here.
07-11-2017 12:03 AM

CM and CN support AD, LDAP(S), and SAML. CM also supports an external authentication program, which you might be able to work OTP into, but CN does not support this feature.

https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cm_sg_external_auth.html

https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cn_sg_external_auth.html#xd_583c10bfdbd326ba-7dae4aa6-147c30d0933--7b62
07-10-2017 11:58 PM
@csguna @saranvisa I don't know whether those settings will affect the operation log.

I did find this JIRA, but it isn't for the operation logs, and I only skimmed through it:

https://issues.apache.org/jira/browse/HIVE-4500

It definitely sounds like you have an FD leak in HS2. You could disable the operation logs to alleviate the issue while you dig into it further. For what it is worth, I am running CDH 5.8.2 in production and don't see this issue with HS2.
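One way to confirm a leak like this is to watch the FD count of the HS2 process over time and look at what the descriptors point to. A minimal sketch (demonstrated here on the current shell; for HiveServer2 you would substitute `PID=$(pgrep -f HiveServer2)`, a pattern that is an assumption about your deployment):

```shell
# Count open file descriptors for a process via /proc.
# Demonstrated on the current shell ($$); for HiveServer2, use
#   PID=$(pgrep -f HiveServer2)
PID=$$
COUNT=$(ls /proc/"$PID"/fd | wc -l)
echo "process $PID has $COUNT open file descriptors"

# To see what the descriptors point to (e.g. leaked operation log files):
#   ls -l /proc/$PID/fd | head
```

If the count climbs steadily while the targets are operation log paths, that points at the operation-log FD leak rather than the UDF jar one.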
						
					
    
	
		
		
07-10-2017 11:44 PM
I don't know of a doc for this, and it may not be possible out of the box. You should be able to do it all with the CM API; you just need to figure out the order in which to restart things. Then you could build a script that does it all with CM in place. This is a lot of work, which is why rolling upgrades are an Enterprise feature.

What I would do is get the QuickStart VM, or install a single-node cluster and use the trial license. Then you can run a rolling restart there and pull the ordering and commands from the logs. You would still need to build and test your own version.

Good luck.
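As a sketch of driving this through the CM REST API with curl (the host, `admin:admin` credentials, `v13` API version, and URL-encoded cluster name are all placeholder assumptions; the API version in particular varies by CM release):

```shell
CM_HOST="cm-host.example.com"   # placeholder CM server
CLUSTER="Cluster%201"           # URL-encoded cluster name, placeholder

# Discover cluster and service names first:
curl -s -u admin:admin "http://$CM_HOST:7180/api/v13/clusters"

# Trigger a rolling restart of the whole cluster (an Enterprise feature):
curl -s -u admin:admin -X POST \
  "http://$CM_HOST:7180/api/v13/clusters/$CLUSTER/commands/rollingRestart"
```

Watching the commands CM issues during such a restart (via the API's command listings or the CM logs) is one way to recover the ordering for your own script.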
						
					
    
	
		
		
07-10-2017 11:33 PM
@csguna

OS and kernel version from the dump:

OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)
uname: Linux 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Mar 21 12:19:18 EDT 2017 x86_64
libc: glibc 2.12 NPTL 2.12
rlimit: STACK 10240k, CORE 0k, NPROC 1024, NOFILE 4096, AS infinity
load average: 0.00 0.01 0.00
/proc/meminfo:
MemTotal: 24591972 kB
MemFree: 10750560 kB

The load average and memory usage for the system are not high. The process and file limits are the defaults, but for a Sqoop job that may be fine.

VM arguments:

jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender
java_command: org.apache.hadoop.util.RunJar /home/progr/oracle_export/sqoop.jar

It is running with just shy of 1 GB of heap, so it may simply be that. You can also try some of the other recommendations from the dump.

Did this job run previously on a different version of CDH, or is it a new job?
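If the 1 GB heap is the problem, the client-side JVM launched by `hadoop jar` (the `RunJar` process above) can usually be given more memory via `HADOOP_CLIENT_OPTS`. A sketch, with 4g as an arbitrary example value:

```shell
# Raise the client JVM heap before launching the job; RunJar picks up
# HADOOP_CLIENT_OPTS. The 4g figure is an arbitrary example, not a
# recommendation for this workload.
export HADOOP_CLIENT_OPTS="-Xmx4g"

# Then re-run the job (commented out here; jar path taken from the dump):
# hadoop jar /home/progr/oracle_export/sqoop.jar
```

If the failure moves or disappears with a larger heap, that confirms memory pressure rather than an OS-level limit.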
						
					
    
	
		
		
07-09-2017 05:09 PM

@desind No. It was more or less always there, but the Impala JIRA mentioned above, which fixed another issue, caused it to surface. Those CM releases have the CM fix that makes it communicate using the FQDN. For whatever reason it didn't make it into CM 5.11.1; it should be in a future CM 5.11 release.

In short, either upgrade to one of those CM versions or use the --hostname setting on CM/CDH 5.11.1.
			
    
	
		
		
    
	
		
		
07-08-2017 12:49 PM

@Lars Volker Thanks for adding this bit of info. I was looking at IMPALA-5631 as a suspect but never thought to look at CM.

Lesson learned: pay as much attention to the CM release notes as I do to the CDH release notes.
			
    
	
		
		
07-08-2017 10:41 AM (1 Kudo)
Yes, you can deploy it through Chef or Puppet. You can also use Cloudera Manager alongside them: have Chef or Puppet manage CM itself, and use the CM API to manage the cluster.

I am not positive on the support question. I worked with one client that used Chef; they didn't have support, as they felt capable without it. Any time Cloudera was involved they pushed for the use of CM, but I am pretty sure the client eventually paid for CDH Enterprise and still didn't use CM. They were trying to work CM in when I left.

Another way to look at it is that the Enterprise license (which includes support) buys the use of all services plus some nice CM features. So if you are doing all of that through Chef or Puppet, the leftover benefit is getting patches backported.
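As an illustration of the Chef/Puppet-drives-CM pattern, a configuration-management exec step can talk to the CM REST API instead of the UI. A sketch (the host, `admin:admin` credentials, `v13` API version, cluster name, and `HIVE-1` service name are all placeholder assumptions):

```shell
CM_HOST="cm-host.example.com"   # placeholder CM server

# Read back the services CM manages, e.g. to decide what a Chef/Puppet
# run needs to converge:
curl -s -u admin:admin \
  "http://$CM_HOST:7180/api/v13/clusters/Cluster%201/services"

# Restart a single service by its CM name after pushing a config change:
curl -s -u admin:admin -X POST \
  "http://$CM_HOST:7180/api/v13/clusters/Cluster%201/services/HIVE-1/commands/restart"
```

In that setup Chef or Puppet owns the OS and the CM server itself, while CM remains the source of truth for the Hadoop services.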
						
					