Member since 04-25-2016

579 Posts | 609 Kudos Received | 111 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2930 | 02-12-2020 03:17 PM |
|  | 2138 | 08-10-2017 09:42 AM |
|  | 12480 | 07-28-2017 03:57 AM |
|  | 3424 | 07-19-2017 02:43 AM |
|  | 2526 | 07-13-2017 11:42 AM |
12-22-2016 01:05 PM (3 Kudos)

This seems to be a problem with the move-task parallelism introduced in HDP 2.5. Could you please try running the query after setting the following parameter: set hive.mv.files.thread=0
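A minimal sketch of the suggested session-level change; the INSERT statement stands in for the failing query, and its table names are hypothetical:

```sql
-- Disable the parallel file-move threads used by Hive's move task
SET hive.mv.files.thread=0;

-- Re-run the failing query (placeholder table names)
INSERT OVERWRITE TABLE target_table SELECT * FROM source_table;
```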
						
					
12-22-2016 12:51 PM (1 Kudo)

SYMPTOM: Lots of heap dump files are created under the directory controlled by the JVM flag -XX:HeapDumpPath. The user believed HiveServer2 generated these dumps, since he had set that flag for HiveServer2. The HiveServer2 logs, however, showed no trace of the service starting or stopping, and no sign of any failure.

ROOT CAUSE: A preliminary look at the heap dump showed process arguments like the following:

```
hive     12345 1234  9.8 27937992 26020536 ?   Sl   Dec01 19887:12  \_ /etc/alternatives/java_sdk_1.8.0/bin/java -Xmx24576m -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.4.20-3 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.4.2.-258/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.-258/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx24576m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive -Dhadoop.security.logger=INFO,NullAppender -Dhdp.version=2.3.4.20-3 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.4.2.-258/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.-258/hadoop/lib/native:/usr/hdp/2.4.2.-258/hadoop/lib/native/Linux-amd... hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx24576m -Xmx24576m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.4.2.-258/hive/lib/hive-common-1.2.1.2.3.4.74-1.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/hive/261a57a5-caab-4f98-9fa2-f50209ba29e9/*****/-local-10006/plan.xml -jobconffile file:/tmp/hive/K*****/-local-10007/jobconf.xml
```

The arguments show this is not HiveServer2; it looks like the local task of a map-side join spun up by the CLI driver. We then looked at the hive-env file, which was misconfigured, especially HADOOP_CLIENT_OPTS:

```sh
if [ "$SERVICE" = "cli" ]; then
  if [ -z "$DEBUG" ]; then
    export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseParNewGC -XX:-UseGCOverheadLimit -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive"
  else
    export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -Xms10m -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive"
  fi
fi

# The heap size of the JVM started by the hive shell script can be controlled via:
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive"
```

So we are sure it was a Hive CLI process running a map-side join that went OOM, not HiveServer2: HADOOP_CLIENT_OPTS applies the heap-dump flags to every Hive client JVM, so the CLI's dumps landed in the path the user had intended for HiveServer2.

WORKAROUND: N/A

RESOLUTION: Ask the user to modify hive-env and set HADOOP_CLIENT_OPTS appropriately.
						
					
    
	
		
		
12-22-2016 12:15 PM (1 Kudo)

SYMPTOM: HiveServer2 fails with OutOfMemoryError very frequently. The user increased the HiveServer2 heap size to 16 GB but still faces the same issue. We enabled heap dumps with -XX:+HeapDumpOnOutOfMemoryError to see which objects were causing the HiveServer2 heap to grow so large.

ROOT CAUSE: Initial analysis showed that some of the connection objects were set with a fetchSize of 50M. We asked the user about this, which revealed that some connection strings set fetchSize to 50M. With that setting in place, HiveServer2 takes a lot of heap space while fetching query results and goes out of memory.

WORKAROUND: N/A

RESOLUTION: Ask the user to remove the fetchSize setting from the connection string. There is an improvement in the Hive community, tracked as https://issues.apache.org/jira/browse/HIVE-14901, to put a guardrail against someone using too high a value for fetchSize.
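A hedged sketch of the client-side fix; the JDBC URLs are illustrative (host, port, and database are placeholders), and the server-side property name is an assumption based on the guardrail work in HIVE-14901, so verify it against your Hive release:

```sql
-- Problematic client URL (illustrative): a huge fetchSize makes HiveServer2
-- buffer enormous result batches on its heap:
--   jdbc:hive2://hs2-host:10000/default;fetchSize=50000000
-- Fixed client URL (illustrative): drop fetchSize and let the driver default apply:
--   jdbc:hive2://hs2-host:10000/default

-- Assumed server-side guardrail property (check your Hive version before relying on it)
SET hive.server2.thrift.resultset.max.fetch.size=10000;
```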
						
					
			
    
	
		
		
12-22-2016 08:39 AM (3 Kudos)

@Guillaume Roger It looks like the file is not on the repo. Could you please contact Hortonworks support to get it fixed?
						
					
12-22-2016 08:36 AM

@Dinesh Das This should be admin/admin.
						
					
12-22-2016 08:22 AM (2 Kudos)

@Christian van den Heever Did you follow and try the steps provided in this thread? https://community.hortonworks.com/questions/65329/service-hdfs-check-failed.html
						
					
12-22-2016 08:19 AM

@rama It's up to you whether you create the table in a new database or in the same database as the original table. As for the second question, you can specify the database's path directly in the command: create database <db name> location '<some location>'
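A minimal sketch of that command; backup_db and the HDFS path are hypothetical:

```sql
-- Create a database whose data lives at an explicit HDFS location (path is hypothetical)
CREATE DATABASE backup_db LOCATION '/apps/hive/warehouse/backup_db.db';

-- A table created in that database is stored under its location
CREATE TABLE backup_db.bkp_table LIKE default.original_table;
```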
						
					
12-22-2016 07:25 AM (2 Kudos)

@rama Could you please try create table bkp_table like original_table; then insert the data into bkp_table from the original table, and see if that helps.
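A minimal sketch of that suggestion, using the table names from the reply:

```sql
-- Create an empty backup table with the same schema as the original
CREATE TABLE bkp_table LIKE original_table;

-- Copy the data across
INSERT INTO TABLE bkp_table SELECT * FROM original_table;
```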
						
					
12-22-2016 06:28 AM (2 Kudos)

@Yukti Agrawal If you have a Tez view set up in Ambari, you can open it and search by application ID; opening the entry will show you the query.
						
					