Member since 10-28-2020

- 622 Posts
- 47 Kudos Received
- 40 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2247 | 02-17-2025 06:54 AM |
| | 6944 | 07-23-2024 11:49 PM |
| | 1460 | 05-28-2024 11:06 AM |
| | 2015 | 05-05-2024 01:27 PM |
| | 1304 | 05-05-2024 01:09 PM |
			
    
	
		
		
---

**10-18-2022 01:29 PM**

@SwaggyPPPP Is this a partitioned table? If so, you could run the ALTER TABLE command as follows:

```sql
alter table my_table add columns(field4 string, field5 string) CASCADE;
```

Let us know whether this issue occurs consistently after adding new columns, and which Cloudera product version you are using.
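To spell out why the CASCADE clause matters here, a minimal, illustrative sketch (the table, column, and partition names are hypothetical): without CASCADE, ADD COLUMNS changes only the table-level metadata, and existing partitions keep the old column list.

```sql
-- Hypothetical partitioned table; names are illustrative.
CREATE TABLE my_table (field1 STRING, field2 STRING)
PARTITIONED BY (dt STRING);
ALTER TABLE my_table ADD PARTITION (dt='2022-10-18');

-- CASCADE propagates the new columns to the metadata of all
-- existing partitions, not only to partitions created later.
ALTER TABLE my_table ADD COLUMNS (field4 STRING, field5 STRING) CASCADE;

-- The pre-existing partition should now report field4/field5 too:
DESCRIBE my_table PARTITION (dt='2022-10-18');
```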
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**10-18-2022 01:07 PM** · 1 Kudo

@KPG1 We only support upgrading an existing cluster using Ambari or Cloudera Manager, rather than importing/updating the jars manually. The latest CDP Private Cloud Base and our Public Cloud releases are on Hadoop version 3.1.1 at this point.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**10-18-2022 12:56 PM**

@ditmarh This might not work in scenarios where the table schema.table was created from Hive and we are appending to it from Spark. You may try the following command, replacing saveAsTable with insertInto. Note that insertInto matches columns by position rather than by name, so the DataFrame column order must match the table definition:

```
df.write.mode("append").format("parquet").insertInto("schema.table")
```
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**09-14-2022 01:21 PM**

@RamuAnnamalai It looks similar to https://issues.apache.org/jira/browse/IMPALA-10042. Please check the value of "Maximum Cached File Handles" under the Impala configuration in the CM UI, set it to zero (0), and see whether the issue reappears.

How do you write to the table? Is there a chance the data is getting corrupted during the insert?
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**09-14-2022 01:07 PM**

@Asim- Unless your final table has to be a Hive managed (ACID) table, you could incrementally update the Hive table directly using Sqoop, e.g.:

```
sqoop import --connect jdbc:oracle:thin:@xx.xx.xx.xx:1521:ORCL \
  --table EMPLOYEE --username user1 --password welcome1 \
  --incremental lastmodified --merge-key employee_id \
  --check-column emp_timestamp \
  --target-dir /usr/hive/warehouse/external/empdata/
```

Otherwise, the way you are doing it is actually the way Cloudera recommends.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**09-05-2022 09:16 AM**

@HanzalaShaikh You may consider DLM replication. Set hive.repl.rootdir to the location where you want to store the backup, and use the REPL DUMP command to dump your data and metadata, e.g.:

```sql
REPL DUMP db1 WITH('hive.repl.rootdir'='s3a://blah/');
```

Refer to the Cloudera documentation for more details and examples.
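The dump above only writes the backup. To restore it on a target cluster, Hive replication pairs REPL DUMP with a REPL LOAD; a hedged sketch, where the dump directory is a placeholder for the one REPL DUMP actually returns:

```sql
-- On the target cluster; the directory below is hypothetical and should
-- be replaced by the dump location reported by the REPL DUMP command.
REPL LOAD db1 FROM 's3a://blah/next_dump_dir/';
```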
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**08-31-2022 05:05 AM**

@mohammad_shamim Did you have Hive HA configured in the CDH cluster? In that case, you need to make sure an equal number of HS2 instances is created in the CDP cluster, because without that HA cannot be attained. Also, make sure there is no HiveServer2 instance created under the "Hive" service in CDP; it should only be present under the Hive on Tez service.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**08-18-2022 06:56 AM**

@ssuja I am afraid this is not achievable using Ranger. If you already have a data directory owned by a specific user, say user1, you may create a policy in Ranger granting hive and other users access to that directory path (URI) while keeping the physical path owned by user1 itself. See if this is something you can work with. I should also mention that creating an external Hive table without a LOCATION clause will create a directory owned by hive, because impersonation is disabled in Hive.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**08-12-2022 11:27 AM**

Hi @ssuja, there is a Hive property that would help you achieve what you are aiming for: look for hive.server2.enable.doAs under the Hive on Tez configuration and enable it. However, there is a catch: this property needs to be disabled if you are using Ranger for authorization. If you are not using Ranger and are instead using Storage Based Authorization (which is not recommended in CDP), then you can definitely enable it. Refer to the Cloudera documentation for details.
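One quick way to confirm which value a running HiveServer2 actually picked up after the change (a small sketch; run inside a beeline session):

```sql
-- Prints the effective value, e.g. hive.server2.enable.doAs=true
SET hive.server2.enable.doAs;
```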
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
---

**08-05-2022 02:12 AM** · 1 Kudo

@xinghx The only difference between CDP 7.1.1 and 7.1.7 here is HIVE-24920.

In your test case, the CREATE TABLE statement creates an external table with the "TRANSLATED_TO_EXTERNAL" table property set to "TRUE". Your second query, which tries to change the table to a managed/ACID table, does not really work, so it has no impact apart from adding a table property.

Now, coming to the RENAME query: I notice it does not change the location in CDP 7.1.1 either; please refer to the attachment. In CDP 7.1.7 (SP1) it does change the location if "TRANSLATED_TO_EXTERNAL" = "TRUE"; if we set it to FALSE, we get the same behavior as 7.1.1:

```sql
alter table alter_test set tblproperties("TRANSLATED_TO_EXTERNAL"="FALSE");
```

I hope this helps.
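When comparing the two versions, it can help to print the property this behavior hinges on before and after the ALTER; a small check, using the table name from the post:

```sql
-- Show just this one property for the table discussed above.
SHOW TBLPROPERTIES alter_test('TRANSLATED_TO_EXTERNAL');
```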