Member since 09-16-2021
421 Posts · 55 Kudos Received · 39 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 235 | 10-22-2025 05:48 AM |
|  | 314 | 09-05-2025 07:19 AM |
|  | 803 | 07-15-2025 02:22 AM |
|  | 1350 | 06-02-2025 06:55 AM |
|  | 1628 | 05-22-2025 03:00 AM |
09-18-2024 09:19 PM · 1 Kudo

This solution worked for eliminating the error, but data is not being fetched from the table; an empty data frame is shown.
09-18-2024 01:22 AM · 1 Kudo

@zhuodongLi, did the responses help resolve your query? If so, kindly mark the relevant reply as the solution, as it will help others locate the answer more easily in the future.
09-11-2024 08:47 AM

@ggangadharan thanks for your reply. Yes, as soon as Spark sees the NUMBER data type in Oracle, it converts the DataFrame column to decimal(38,10). Then, when a value in the Oracle column has precision greater than 30, Spark cannot accommodate it, since decimal(38,10) allows at most 28 digits (38 minus the 10-digit scale) before the decimal point; hence this issue. As you said, the probable solution is to cast the column to StringType.
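For illustration, here is a minimal sketch of that workaround on the read side, assuming a plain Spark JDBC read (the URL, table, and the affected column name BIG_NUM are placeholders, not from this thread). The customSchema option overrides the inferred decimal(38,10) mapping for just that column:

```scala
import org.apache.spark.sql.SparkSession

object OracleNumberAsString {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oracle-number-as-string")
      .getOrCreate()

    // BIG_NUM is a hypothetical NUMBER column whose values overflow the
    // 28 integer digits that decimal(38,10) can hold. customSchema
    // overrides the inferred mapping for that column only; the other
    // columns keep their default types.
    val df = spark.read
      .format("jdbc")
      .option("driver", "oracle.jdbc.OracleDriver")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/SERVICE") // placeholder
      .option("dbtable", "APP_SCHEMA.SOURCE_TABLE")             // placeholder
      .option("user", "app_user")                               // placeholder
      .option("password", sys.env.getOrElse("ORACLE_PW", ""))
      .option("customSchema", "BIG_NUM STRING")                 // read as string
      .load()

    df.printSchema() // BIG_NUM now shows as string instead of decimal(38,10)
    spark.stop()
  }
}
```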
						
					
09-05-2024 04:53 AM · 1 Kudo

@Lorenzo The issue appears to be related to HIVE-27191: some mhl_txnids no longer exist in the TXNS, COMPLETED_TXN_COMPONENTS, or TXN_COMPONENTS tables but are still present in the MIN_HISTORY_LEVEL table. As a result, the cleaner gets blocked and many entries are stuck in the ready-for-cleaning state. To confirm this, collect the output of the query below:

    SELECT MHL_TXNID FROM HIVE.MIN_HISTORY_LEVEL
    WHERE MHL_MIN_OPEN_TXNID = (SELECT MIN(MHL_MIN_OPEN_TXNID) FROM HIVE.MIN_HISTORY_LEVEL);

Once we have that output, check whether those transaction IDs exist in the TXNS, COMPLETED_TXN_COMPONENTS, and TXN_COMPONENTS tables using the commands below (substituting the MHL_TXNID values returned above):

    SELECT * FROM TXN_COMPONENTS WHERE TC_TXNID IN (MHL_TXNID);
    SELECT * FROM COMPLETED_TXN_COMPONENTS WHERE CTC_TXNID IN (MHL_TXNID);
    SELECT * FROM TXNS WHERE TXN_ID IN (MHL_TXNID);

If these queries return 0 rows, it confirms that the MHL_TXNIDs found above are orphans, and we need to remove them in order to unblock the cleaner:

    DELETE FROM MIN_HISTORY_LEVEL WHERE MHL_TXNID = 13422; -- (repeat for all orphan IDs)

Hope this helps you in resolving the issue.
09-05-2024 01:36 AM · 1 Kudo

Are you using the same user account to connect via ODBC that you used to log in to Hue? Please verify that.
09-04-2024 05:35 AM

If setting the proper queue name resolves the problem, it is possible that the query was being submitted to the default queue, where it competes for resources with other queries and fails with a timeout error.
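The thread does not say which engine submitted the query, so purely as a hedged sketch: this is how a Spark-on-YARN job would be pinned to a dedicated queue at submission time (the queue name etl is hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object QueuePinnedJob {
  def main(args: Array[String]): Unit = {
    // "etl" is a hypothetical YARN queue with guaranteed capacity.
    // Without this setting the job lands in the "default" queue and
    // competes with every other ad-hoc query for resources.
    val spark = SparkSession.builder()
      .appName("queue-pinned-job")
      .config("spark.yarn.queue", "etl")
      .getOrCreate()

    spark.range(1000000L).count() // trivial action to exercise the queue
    spark.stop()
  }
}
```

The same queue can be chosen on the command line with spark-submit --queue etl.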
						
					
    
	
		
		
08-28-2024 01:40 AM

Unfortunately, it is not possible to change the application name of an already started ApplicationMaster in Apache Hadoop YARN. The application name is set when the application is submitted and cannot be modified at runtime. It is typically specified as a parameter when submitting the application using the spark-submit command or the YARN REST API; once the application has started, the name is fixed. If you need to change it, you will need to stop the existing application and submit a new one with the desired name.
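A minimal sketch of the supported path, choosing the name before submission (the name nightly-aggregation is purely illustrative):

```scala
import org.apache.spark.sql.SparkSession

object NamedApplication {
  def main(args: Array[String]): Unit = {
    // YARN registers the application name when the ApplicationMaster
    // starts and it cannot be renamed afterwards, so pick it here.
    val spark = SparkSession.builder()
      .appName("nightly-aggregation") // illustrative name
      .getOrCreate()

    // ... job logic ...
    spark.stop()
  }
}
```

Equivalently, spark-submit --name nightly-aggregation sets the same property (spark.app.name) from the command line; submitting a new application under the desired name is the only way to "change" it.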
						
					
    
	
		
		
08-28-2024 12:40 AM · 1 Kudo

When writing to a statically partitioned table using HWC, the following query is internally fired to Hive through JDBC after the data is written to a temporary location.

Spark write statement:

    df.write.format(HIVE_WAREHOUSE_CONNECTOR).mode("append").option("partition", "c1='val1',c2='val2'").option("table", "t1").save()

HWC internal query:

    LOAD DATA INPATH '<spark.datasource.hive.warehouse.load.staging.dir>' [OVERWRITE] INTO TABLE db.t1 PARTITION (c1='val1',c2='val2');

During static partitioning, the partition information is known at compile time, resulting in the creation of a staging directory inside the partition directory.

When writing to a dynamically partitioned table using HWC, on the other hand, the following queries are internally fired to Hive through JDBC after the data is written to a temporary location.

Spark write statement:

    df.write.format(HIVE_WAREHOUSE_CONNECTOR).mode("append").option("partition", "c1='val1',c2").option("table", "t1").save()

HWC internal queries:

    CREATE TEMPORARY EXTERNAL TABLE db.job_id_table(cols....) STORED AS ORC LOCATION '<spark.datasource.hive.warehouse.load.staging.dir>';
    INSERT INTO TABLE t1 PARTITION (c1='val1',c2) SELECT <cols> FROM db.job_id_table;

During dynamic partitioning, the partition information is only known at runtime, hence the staging directory is created at the table level. Once the DAG is completed, the MOVE TASK moves the files to the respective partitions.
08-22-2024 04:34 AM · 1 Kudo

It looks similar to the KB. Please follow the instructions in the KB.
08-13-2024 10:04 PM

Thanks @ggangadharan. As far as I can see, HBase is up and running, but I found something in the HBase log:

    2024-08-13 21:53:30,583 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hive/HOST@REALM (auth:KERBEROS)
    2024-08-13 21:53:30,584 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from xx.xxx.xx.xxx:55106, version=2.2.3.7.1.7.0-551, sasl=true, ugi=hive/HOST@REALM (auth:KERBEROS), service=ClientService
    2024-08-13 21:53:30,584 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hive/HOST@REALM (auth:KERBEROS) for protocol=interface org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$BlockingInterface
    2024-08-13 21:53:38,853 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39718
    2024-08-13 21:53:38,853 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39718
    2024-08-13 21:53:39,056 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39720
    2024-08-13 21:53:39,056 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39720
    2024-08-13 21:53:39,361 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39722
    2024-08-13 21:53:39,361 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39722
    2024-08-13 21:53:39,869 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39724
    2024-08-13 21:53:39,870 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39724
    2024-08-13 21:53:40,877 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39726
    2024-08-13 21:53:40,877 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39726
    2024-08-13 21:53:42,882 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39728
    2024-08-13 21:53:42,882 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39728
    2024-08-13 21:53:46,219 INFO org.apache.hadoop.hbase.io.hfile.LruBlockCache: totalSize=9.18 MB, freeSize=12.20 GB, max=12.21 GB, blockCount=5, accesses=7481, hits=7461, hitRatio=99.73%, cachingAccesses=7469, cachingHits=7461, cachingHitsRatio=99.89%, evictions=2009, evicted=0, evictedPerRun=0.0
    2024-08-13 21:53:46,914 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39730
    2024-08-13 21:53:46,914 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39730
    2024-08-13 21:53:50,477 INFO org.apache.hadoop.hbase.ScheduledChore: CompactionThroughputTuner average execution time: 8653 ns.
    2024-08-13 21:53:50,572 INFO org.apache.hadoop.hbase.replication.regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
    2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hbase/HOST@REALM (auth:KERBEROS)
    2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from xx.xxx.xx.xxx:55174, version=2.2.3.7.1.7.0-551, sasl=true, ugi=hbase/HOST@REALM (auth:KERBEROS), service=ClientService
    2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hbase/HOST@REALM (auth:KERBEROS) for protocol=interface org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$BlockingInterface
    2024-08-13 21:53:56,136 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping HBase metrics system...
    2024-08-13 21:53:56,136 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system stopped.
    2024-08-13 21:53:56,638 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
    2024-08-13 21:53:56,641 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
    2024-08-13 21:53:56,641 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system started

This warning (WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39730) only appears for the statement:

    insert overwrite table managed_ml select key, cf1_id, cf1_name from c_0external_ml;

Other statements, such as insert into c_0external_ml values (1,2,3);, run perfectly.

Does this error sound familiar to you?