Member since: 03-29-2020
110 Posts
10 Kudos Received
16 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1218 | 01-08-2022 07:17 PM |
|  | 3672 | 09-22-2021 09:39 AM |
|  | 15566 | 09-14-2021 04:21 AM |
|  | 2965 | 09-01-2021 10:28 PM |
|  | 3943 | 08-31-2021 08:04 PM |
			
    
	
		
		
01-08-2022 07:17 PM
Can you take a look at the links below regarding the question about creating an ORC table with Snappy compression?

https://community.cloudera.com/t5/Support-Questions/Data-Compression-Doesn-t-work-in-ORC-with-SNAPPY-Compression/td-p/172151
https://community.cloudera.com/t5/Support-Questions/Snappy-vs-Zlib-Pros-and-Cons-for-each-compression-in-Hive/m-p/97110
https://community.cloudera.com/t5/Community-Articles/Performance-Comparison-b-w-ORC-SNAPPY-and-ZLib-in-hive-ORC/ta-p/246948
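For reference, a minimal sketch of the approach discussed in those threads; the database, table, column names, and JDBC URL below are placeholders, not taken from the original question:

```bash
# Sketch: create an ORC table with Snappy compression and run the DDL through beeline.
# demo_db.orders_orc and the JDBC URL are hypothetical placeholders.
cat > create_orc_snappy.hql <<'EOF'
CREATE TABLE demo_db.orders_orc (
  order_id INT,
  amount   DOUBLE
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='SNAPPY');
EOF

beeline -u "jdbc:hive2://<FQDN>:10000" -f create_orc_snappy.hql
```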
						
					
11-09-2021 06:44 AM
Hi @mhchethan

Generally, we suggest using a single LDAP URL. If you want more than one, you can configure a load balancer (LB) and have that LB connect to your backend LDAP servers.

https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.0/securing-hive/content/hive_secure_hiveserver_using_ldap.html

If you are happy with the reply, mark it Accept as Solution.
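As a hypothetical illustration only (the load-balancer hostname and port are placeholders; the property names are the standard HiveServer2 LDAP settings covered in the linked doc), HS2 would then point at the LB rather than at individual LDAP servers:

```bash
# Hypothetical example: point HiveServer2 at a single LDAP load-balancer VIP.
# Hostname/port are placeholders; set these via Ambari > Hive > Configs rather than
# editing files by hand on a managed cluster.
#
#   hive.server2.authentication          = LDAP
#   hive.server2.authentication.ldap.url = ldaps://ldap-lb.example.com:636
#
# Quick check of the value HS2 currently has (typical HDP client config path):
grep -A1 "hive.server2.authentication.ldap.url" /etc/hive/conf/hive-site.xml
```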
						
					
11-09-2021 02:26 AM
Hi @Aarth

You can add the Hive properties to the HQL file so that they are applied at the session level. Example (see the sketch below):

beeline -u "jdbc:hive2://<FQDN>:10000" -f rajkumar.hql

Add all the required properties at the top of the HQL file so they take effect for that session at execution time:

set hive.compute.query.using.stats=false;
set hive.fetch.task.conversion=none;

If you are processing more than one HQL file, add the needed properties to each of them.

If you are happy with the reply, mark it Accept as Solution.
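A minimal sketch of the workflow described above; the query inside rajkumar.hql and the JDBC URL are placeholders:

```bash
# Sketch: put the session-level settings at the top of the HQL file, then run it with beeline.
# The SELECT statement and the JDBC URL are placeholders.
cat > rajkumar.hql <<'EOF'
set hive.compute.query.using.stats=false;
set hive.fetch.task.conversion=none;

-- the actual workload follows the session settings
SELECT COUNT(*) FROM demo_db.orders_orc;
EOF

beeline -u "jdbc:hive2://<FQDN>:10000" -f rajkumar.hql
```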
						
					
10-11-2021 02:13 AM
Hi @PawanUppala

It would be really helpful if you could share the complete error stack from just before HS2 goes down.
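As a rough, hypothetical example of what to collect (assuming the typical HDP log directory /var/log/hive; adjust the path and file name to your configured Hive log settings):

```bash
# Sketch: grab the tail of the HiveServer2 log from around the time it went down.
# /var/log/hive/hiveserver2.log is an assumption (typical HDP default); adjust as needed.
tail -n 500 /var/log/hive/hiveserver2.log > hs2_error_stack.txt
```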
						
					
10-05-2021 06:07 AM
Hi @Sadique1

Can you run "which spark-shell" and share the results with us?
						
					
10-05-2021 06:06 AM
Hi @Swagat

(1) Can you run the below command in your Linux terminal and then re-run the "yarn logs -applicationId <appID>" command:

export HADOOP_USER_NAME=hdfs

(2) Based on the logs, the parent folder "/tmp/logs/hive" and its subfolders have 770 permissions with owner hive:hadoop, so other users do not have permission to access them. The users ledapp3, calapr01, and gaiapr01 fall under "others", so they cannot access the folder.

Can you run one of the below commands as the hdfs user and ask your end users to try again (see the sketch below):

hdfs dfs -chmod 775 <parent folder>
hdfs dfs -chmod -R 775 <parent folder>
hdfs dfs -chmod -R 777 <parent folder>  ### Least recommended.

Similar issue references:
https://community.cloudera.com/t5/Support-Questions/Permission-denied-user-root-access-WRITE-inode-quot-user/td-p/4943
https://community.cloudera.com/t5/Support-Questions/Permission-denied-as-I-am-unable-to-delete-a-directory-in/m-p/322469#M228801

If you are happy with the reply, mark it Accept as Solution.
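A minimal sketch of the steps above, using the /tmp/logs/hive parent folder mentioned in the logs; run it from a node with HDFS client access:

```bash
# Sketch: inspect and relax permissions on the aggregated-log folder for the hive user.
# /tmp/logs/hive comes from the reply above; adjust if your remote app-log dir differs.
export HADOOP_USER_NAME=hdfs

# Check current ownership and permissions on the parent folder.
hdfs dfs -ls -d /tmp/logs/hive

# Allow group and others to read and traverse the folder and its subfolders.
hdfs dfs -chmod -R 775 /tmp/logs/hive

# Verify, then have the affected users re-run: yarn logs -applicationId <appID>
hdfs dfs -ls -d /tmp/logs/hive
```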
						
					
10-01-2021 02:38 AM
Hi @Sam2020

Can you check the below links and see whether they help?

Upgrade CM:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdp/topics/ug_cdh_upgrading_top.html

Upgrade the CDP cluster to a higher version:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdp/topics/ug-cdpdc.html

If you are happy with the reply, mark it Accept as Solution.
						
					
09-28-2021 07:37 PM
Hi @Kicker

Question: How can I find the list of active sessions on my cluster, or how can I check by sessionId whether a session is active in Hive?

Answer: Could you please take a look at the below link, where we discussed a similar question.
https://community.cloudera.com/t5/Support-Questions/How-many-users-connected-to-HiveServer2/m-p/322372#M228765

You may have to log in to the HiveServer2 host and run the below commands to see the number of active connections to HiveServer2 (see the sketch below):

netstat -ntpla | grep 10000 | grep -i ESTABLISHED   ### Instead of 10000, use your HS2 port (if you use HTTP transport mode, the port is usually 10001).
netstat -ntpla | grep 10000   ### This also shows ESTABLISHED, CLOSE_WAIT, and other connection states.

If you want to check the active sessions, you can find the details in the HiveServer2 Web UI:
Ambari > Hive > Quick Links > HiveServer2 Web UI

If you are happy with the reply, mark it Accept as Solution.
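A minimal sketch of the connection check above; port 10000 is the default binary-transport HS2 port and is an assumption here, so substitute your cluster's HS2 port:

```bash
# Sketch: count client connections to HiveServer2.
# 10000 is the default binary-mode port; HTTP transport mode commonly uses 10001.
HS2_PORT=10000

# Established sessions only.
netstat -ntpla | grep ":${HS2_PORT}" | grep -ci ESTABLISHED

# All states (ESTABLISHED, CLOSE_WAIT, ...) on that port.
netstat -ntpla | grep ":${HS2_PORT}"
```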
						
					
09-22-2021 09:39 AM
1 Kudo
Hi @nareshbattula

Q1: Hive Metastore performance
Performance can vary based on memory tuning. Please check the below link to see how to set the memory-related parameters:
https://docs.cloudera.com/documentation/enterprise/5-7-x/topics/admin_hive_tuning.html

Q2: Which tables/DBs cause more pressure on the Hive Metastore?
You may have to check the HMS logs to see which queries are taking a long time.

If you are using HDP, you can find the current memory pressure and heap usage details via the below links:
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/using-ambari-core-services/content/amb_hive_hivemetastore.html
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/using-ambari-core-services/content/amb_hive_home.html

If you are using CM, you can see the details under:
CM > Hive > Hive Metastore > Charts > JVM Heap Usage / JVM Pause Time

Q3: Number of connections on HMS?
You can run the below commands to see the established connections to HMS (see the sketch below):

netstat -ntpla | grep 9083
lsof -p <hms pid> | grep -i "ESTABLISHED"

If you are using CM, you can see the details under:
CM > Hive > Hive Metastore > Charts > Open Connections

If you are happy with the reply, mark it Accept as Solution.
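A minimal sketch of the Q3 check above; 9083 is the default metastore port, and the pgrep pattern is an assumption based on the HiveMetaStore main class appearing in the JVM command line:

```bash
# Sketch: established connections to the Hive Metastore, by port and by process.
# 9083 is the default metastore port; the pgrep pattern is an assumption.
HMS_PORT=9083
HMS_PID=$(pgrep -f HiveMetaStore | head -n 1)

# Count established connections on the metastore port.
netstat -ntpla | grep ":${HMS_PORT}" | grep -ci ESTABLISHED

# Count established connections held by the metastore process, if it was found.
[ -n "${HMS_PID}" ] && lsof -p "${HMS_PID}" | grep -ci ESTABLISHED
```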
						
					
09-20-2021 11:22 PM
Hi @vciampa

Please check the below documents, which should help you understand the requirements for upgrading from an HDP 2.x to a CDP 7.x cluster, as well as the Hive-specific changes.

https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-hdp/topics/amb-hdp-cdp-upg.html
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-hdp/topics/ug_hive_validations.html
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/ug_hive_changes_in_cdp.html

If you are happy with the comment, mark it "Accept as Solution".
						
					