Member since 12-09-2015

106 Posts · 40 Kudos Received · 20 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3559 | 12-26-2018 08:07 PM |
|  | 3540 | 08-17-2018 06:12 PM |
|  | 1824 | 08-09-2018 08:35 PM |
|  | 14612 | 01-03-2018 12:31 AM |
|  | 1404 | 11-07-2017 05:53 PM |
			
    
	
		
		
12-26-2018 08:07 PM · 1 Kudo

You can use the Export Table command: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport#LanguageManualImportExport-ExportSyntax
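For reference, the export/import pair from the linked manual looks roughly like this (table, partition, and path names here are placeholders):

```sql
-- Export the table's data files plus its metadata to an HDFS directory.
EXPORT TABLE sales PARTITION (ds='2018-12-01') TO '/tmp/exports/sales';

-- On the destination cluster, recreate a table from the exported copy.
IMPORT TABLE sales_copy FROM '/tmp/exports/sales';
```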
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-12-2018 09:23 PM

Use `desc formatted <table> <column>`. See https://cwiki.apache.org/confluence/display/Hive/StatsDev#StatsDev-Examples
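A minimal sketch of the workflow from the linked StatsDev examples (table and column names hypothetical):

```sql
-- Compute column-level statistics first.
ANALYZE TABLE employees COMPUTE STATISTICS FOR COLUMNS salary;

-- Then inspect them; the output includes min, max, num_nulls,
-- distinct_count, and similar per-column figures.
DESCRIBE FORMATTED employees salary;
```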
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-20-2018 04:15 PM · 3 Kudos

Setting hive.merge.cardinality.check=false is a bad idea. The logic controlled by this property checks whether the ON clause of your MERGE statement is such that more than one row from the source side matches the same row on the target side (which only matters in the WHEN MATCHED clause). Logically, this means the query is asking the system to update one existing target row in two (or more) different ways. This check is actually part of the SQL standard's definition of how MERGE should work. You need to examine either your data or the ON clause; disabling this check when it throws a cardinality_violation error may lead to data corruption later.
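As a sketch (hypothetical tables tgt and src), the check fires when the ON clause lets two source rows match one target row:

```sql
-- If src contains two rows with the same id, both match the same tgt row,
-- and the WHEN MATCHED update becomes ambiguous: that is the cardinality
-- violation this property guards against.
MERGE INTO tgt
USING src
ON tgt.id = src.id
WHEN MATCHED THEN UPDATE SET val = src.val
WHEN NOT MATCHED THEN INSERT VALUES (src.id, src.val);
```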
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-17-2018 06:12 PM

When you run SHOW COMPACTIONS, if a compaction MR job was submitted, the output will show its Hadoop job ID, which can be used to get more information in the Resource Manager UI when the problem is with the job itself. If it failed even before the job was submitted to the cluster, the errors will be in the log of the standalone Hive Metastore running the compactor processes.
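For illustration (table name hypothetical; a partitioned table would also need a PARTITION spec), a compaction can be triggered and then watched like this:

```sql
-- Request a compaction; it runs asynchronously in the metastore's compactor.
ALTER TABLE my_acid_table COMPACT 'major';

-- List queued, running, and completed compactions; once the job launches,
-- the output includes its Hadoop job ID.
SHOW COMPACTIONS;
```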
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-09-2018 08:35 PM

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy
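The linked page covers Hive's sorting clauses; roughly (hypothetical table t):

```sql
-- ORDER BY: a single total order over the result, enforced by one reducer.
SELECT * FROM t ORDER BY k;

-- SORT BY: sorts within each reducer only, so output is not globally ordered.
SELECT * FROM t SORT BY k;

-- DISTRIBUTE BY + SORT BY: send equal keys to the same reducer, then sort
-- within it; CLUSTER BY k is shorthand for this combination.
SELECT * FROM t DISTRIBUTE BY k SORT BY k;
```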
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
04-11-2018 07:30 PM · 1 Kudo

The hive.support.concurrency property enables locking. When a query shuts down cleanly, its locks should be released immediately. When a client dies abruptly, it may leave locks behind. These are cleaned up by a background process running in the standalone Hive Metastore. That process considers locks abandoned if they have not heartbeated for (by default) 5 minutes. The Metastore log file should have entries from AcidHouseKeeperService; that is the cleanup process.
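The relevant knobs, as a sketch (these normally belong in hive-site.xml; session-level SET is shown here only for brevity, and the timeout shown is the default):

```sql
-- Enable the lock manager used for ACID tables.
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

-- How long a transaction/lock may go without a heartbeat before the
-- AcidHouseKeeperService treats it as abandoned (default 300 seconds).
SET hive.txn.timeout=300;

-- Inspect currently held locks while debugging a stuck query.
SHOW LOCKS;
```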
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-04-2018 05:57 PM

Not generally. The data layout for transactional tables requires special logic to decide which directories to read and how to combine them correctly. Some data files may represent updates of previously written rows, for example. Also, if you are reading while something is writing to the table, your read may fail (without the special logic) because it will try to read incomplete ORC files. Compaction may (again, without the special logic) make it look like your data is duplicated.
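Schematically, the directory layout that makes direct reads unsafe looks something like this (names illustrative; exact patterns vary by Hive version):

```
/warehouse/t/base_0000005/bucket_00000                   base (compacted) data
/warehouse/t/delta_0000006_0000006/bucket_00000          inserts from one transaction
/warehouse/t/delete_delta_0000007_0000007/bucket_00000   deletes to apply
```

A correct reader has to merge the base with the deltas and apply the delete deltas; reading the ORC files in isolation can double-count or miss rows.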
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-03-2018 12:31 AM

Spark doesn't support reading Hive ACID tables directly (https://issues.apache.org/jira/browse/SPARK-15348, https://issues.apache.org/jira/browse/SPARK-16996). It can be done (work in progress) via LLAP; this is tracked in https://issues.apache.org/jira/browse/HIVE-12991.
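Until that lands, a common workaround is to materialize a snapshot into a plain table that Spark can read (table names hypothetical):

```sql
-- CTAS copies the ACID table's current snapshot into a non-transactional
-- ORC table; Spark can then read the result directly.
CREATE TABLE t_snapshot STORED AS ORC AS
SELECT * FROM t_acid;
```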
				
			
			
			
			
			
			
			
			
			
		 
        













