Member since 04-25-2016

579 Posts | 609 Kudos Received | 111 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2930 | 02-12-2020 03:17 PM |
| | 2138 | 08-10-2017 09:42 AM |
| | 12480 | 07-28-2017 03:57 AM |
| | 3424 | 07-19-2017 02:43 AM |
| | 2526 | 07-13-2017 11:42 AM |
12-23-2016 07:08 AM | 2 Kudos
@ARUN It looks like HiveServer2 is not able to create its znode in ZooKeeper, which could be an issue on the ZooKeeper side. Could you please check whether the znode was created, using:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181
ls /hiveserver2

and see if there is an issue with the ZooKeeper server.
12-23-2016 06:18 AM | 3 Kudos
Often we need to read a Parquet file, its metadata, or its footer. parquet-tools ships with the parquet-hadoop library and can help us read Parquet files. These are simple steps to build parquet-tools and demonstrate its use.

Prerequisites: Maven 3, Git, JDK 7/8

// Build parquet-tools
git clone https://github.com/Parquet/parquet-mr.git
cd parquet-mr/parquet-tools/
mvn clean package -Plocal

// Print the schema of a Parquet file
java -jar parquet-tools-1.6.0.jar schema sample.parquet

// Read a Parquet file
java -jar parquet-tools-1.6.0.jar cat sample.parquet

// Read the first few lines of a Parquet file
java -jar parquet-tools-1.6.0.jar head -n5 sample.parquet

// Print the metadata of a Parquet file
java -jar parquet-tools-1.6.0.jar meta sample.parquet
12-23-2016 05:49 AM
SYMPTOM: A user reports seeing special characters (^@^@^@^@^@^@^) in the HiveServer2 GC logs, similar to the following:

concurrent-mark-sweep perm gen total 29888K, used 29792K [0x00000007e0000000, 0x00000007e1d30000, 0x0000000800000000)
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ @^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@2016-12-10T00:09:31.648-0500: 912.497: [GC2016-12-10T00:09:31.648-0500: 912.497: [ParNew: 574321K->15632K(629120K),   0.0088960 secs] 602402K->43713K(2027264K), 0.0090030 secs]
ROOT CAUSE: A query was running on HiveServer2 in MR mode and spun up a local Map Join task. The forked child process inherited all of the parent's Java arguments and therefore wrote to the same HiveServer2 GC log file. This interleaved writing introduced the special characters, and some GC events were skipped while writing to the GC file.

WORKAROUND: N/A

RESOLUTION: Run the query in Tez mode, which forces the map join to run inside a task container, or set hive.exec.submit.local.task.via.child=false so that no child process is forked to run the local map task. The latter can be risky: if the map join runs out of memory, it can stall the HiveServer2 service.
12-23-2016 05:00 AM
@rama Were you able to run the suggestions?
12-22-2016 05:39 PM | 2 Kudos
Try adding these jars to the Hive aux jars path, or at the session level using the ADD JAR option, and see if it helps:

hive-hbase-handler-*.jar
hbase-client-*.jar

ADD JAR /usr/hdp/current/hive-client/lib/hive-hbase-handler-*.jar;
ADD JAR /usr/hdp/2.3.2.0-2950/hbase/lib/hbase-client-*.jar;
12-22-2016 05:33 PM | 4 Kudos
Remove the property 'org.apache.atlas.hive.hook.HiveHook' from 'hive.exec.post.hooks' in hive-site.xml, and restart all affected components.
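A minimal sketch of what the relevant hive-site.xml entry might look like after the change. Note that hive.exec.post.hooks holds a comma-separated list of hook classes, so only the Atlas entry should be removed and any other configured hooks kept:

```xml
<!-- Sketch only: remove just org.apache.atlas.hive.hook.HiveHook from the
     comma-separated list; any other hooks you use remain in the value. -->
<property>
  <name>hive.exec.post.hooks</name>
  <value></value>
</property>
```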
12-22-2016 04:42 PM | 1 Kudo
SYMPTOM: The following JmxTool command fails with a "port already in use" error:

/usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.JmxTool --object-name kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec --jmx-url service:jmx:rmi:///jndi/rmi://`hostname`:1099/jmxrmi

ROOT CAUSE: JmxTool invokes kafka-run-class.sh after kafka-env.sh has been read. Since kafka-env.sh already contains export JMX_PORT=1099, the tool tries to bind the same port on the same host as the broker, which results in the error.

WORKAROUND: N/A

RESOLUTION: Edit kafka-run-class.sh and replace this section:

# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
fi

with the following, so that client tools bind a separate JMX port instead of the broker's:

# JMX port to use
if [ $ISKAFKASERVER = "true" ]; then
JMX_REMOTE_PORT=$JMX_PORT
else
JMX_REMOTE_PORT=$CLIENT_JMX_PORT
fi
if [ $JMX_REMOTE_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_REMOTE_PORT"
fi
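The port-selection branch above can be sketched in isolation. This is a self-contained illustration only: the values are made up for demonstration, and in a real install these variables would come from kafka-env.sh and the calling script.

```shell
# Illustrative values: the broker exports JMX_PORT; CLIENT_JMX_PORT is a
# hypothetical separate port configured for CLI tools such as JmxTool.
ISKAFKASERVER="false"   # CLI tools run without this set to "true"
JMX_PORT=1099           # broker's JMX port, normally exported by kafka-env.sh
CLIENT_JMX_PORT=1100    # separate port so tools do not collide with the broker

# The replacement logic: brokers keep JMX_PORT, clients get their own port.
if [ "$ISKAFKASERVER" = "true" ]; then
  JMX_REMOTE_PORT=$JMX_PORT
else
  JMX_REMOTE_PORT=$CLIENT_JMX_PORT
fi
echo "client tools will bind JMX port $JMX_REMOTE_PORT"
```

With ISKAFKASERVER unset or false, the tool binds 1100 and no longer clashes with the broker listening on 1099.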
12-22-2016 04:20 PM | 2 Kudos
HiveServer2 creates operational logs, added as part of https://issues.apache.org/jira/browse/HIVE-4629, to report the progress of a query, for example:

"Parsing command", "Parse Completed", "Starting Semantic Analysis", "Semantic Analysis Completed", "Starting command"

These pipe files are created to hold the intermediate output used to report query progress, so you need not worry about them. If you want to disable them, Hive provides an option to do so:

hive.server2.logging.operation.enabled
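A hedged sketch of the corresponding hive-site.xml entry to disable the operational logs. The property name comes from the post above; by default these logs are enabled:

```xml
<!-- Sketch: set to false to stop HiveServer2 from writing per-query
     operational (progress) logs. -->
<property>
  <name>hive.server2.logging.operation.enabled</name>
  <value>false</value>
</property>
```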
12-22-2016 01:19 PM | 2 Kudos
SYMPTOM: The same Hive insert query fails intermittently with several different exceptions:

grep 'Failed to move' hive.log
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.util.ConcurrentModificationException 
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.util.ConcurrentModificationException 
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.util.ConcurrentModificationException 
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.util.NoSuchElementException 
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.util.NoSuchElementException 
ERROR [main]: metadata.Hive (Hive.java:copyFiles(2652)) - Failed to move: java.lang.ArrayIndexOutOfBoundsException: -3 
ROOT CAUSE: It was observed that although the query execution time was short, the Move Task, which copies the part files to the destination directory, took very long to complete when the destination directory had many partitions. In HDP 2.5, the Hive community introduced move task parallelism with a default of 15 concurrent threads. During the copy phase there is a race condition at the metastore that fails the query with these varying exceptions.

WORKAROUND: Disable move task parallelism by setting hive.mv.files.thread=0.

RESOLUTION: Apply the patch for https://issues.apache.org/jira/browse/HIVE-15355.
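A minimal sketch of applying the workaround at the session level, assuming a Hive CLI or beeline session (the property name comes from the post; setting it to 0 falls back to serial file moves):

```sql
-- Workaround sketch: disable parallel move of part files after the query
SET hive.mv.files.thread=0;
```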