10-18-2017 08:29 PM
@Raj Sivanesan  Here's what I did in a lab environment. I wound up with over 20k partitions on a table (d'oh) and was OK with blowing away the whole table/database. I can't confirm that this is safe on a production cluster, so use it with caution. Feedback is welcome.

First, back up the Hive metastore:

    mysqldump -u root -p hivedb >> hivedb.bak

Then, in the Hive metastore database, look up the table's TBL_ID and delete its partition rows. PARTITION_KEY_VALS and PARTITION_PARAMS reference PART_ID, so clear them before PARTITIONS:

    -- Look up the table's ID first (it was 54 in my case):
    -- SELECT TBL_ID FROM TBLS WHERE TBL_NAME = 'myTable';
    DELETE FROM PARTITION_KEY_VALS WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 54);
    DELETE FROM PARTITION_PARAMS WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 54);
    DELETE FROM PARTITIONS WHERE TBL_ID = 54;
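One refinement worth considering (my addition, not something I tested as part of the steps above): TBL_NAME alone matches every table with that name across all databases, so the lookup can be scoped through DBS to make sure you grab the right TBL_ID. This assumes the stock metastore schema; 'mydatabase' and 'mytable' are placeholders:

    -- Scope the TBL_ID lookup by database as well as table name
    SELECT t.TBL_ID
    FROM TBLS t
    JOIN DBS d ON t.DB_ID = d.DB_ID
    WHERE d.NAME = 'mydatabase'
      AND t.TBL_NAME = 'mytable';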
Finally, drop the database in Hive:

    DROP DATABASE IF EXISTS myDatabase CASCADE;
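For completeness, if the deletes go sideways, the dump taken at the start can be loaded back with the standard mysql client (assuming the same hivedb database name; it's safest to stop the metastore service before restoring):

    mysql -u root -p hivedb < hivedb.bak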