Member since 09-16-2021

421 Posts
55 Kudos Received
39 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 226 | 10-22-2025 05:48 AM |
|  | 309 | 09-05-2025 07:19 AM |
|  | 791 | 07-15-2025 02:22 AM |
|  | 1344 | 06-02-2025 06:55 AM |
|  | 1617 | 05-22-2025 03:00 AM |

05-26-2025 06:35 AM

Too simple 😋 I can't use the IN clause, as already explained in my first post. Also, my credentials are read-only, so I can't create temporary tables.
05-07-2025 05:48 AM
1 Kudo

@Yigal It is not supported in Impala. Below is the Jira for your reference; it is still open and not resolved: https://issues.apache.org/jira/browse/IMPALA-5226

Regards,
Chethan YM
04-23-2025 12:46 AM

When using the -f flag with Beeline to execute an HQL script file, Beeline follows a fail-fast approach: if any single SQL statement fails, Beeline stops execution immediately and exits with an error.

This behavior helps protect data integrity, because it avoids executing dependent or subsequent statements that may rely on a failed operation (e.g., a missing table or a failed insert). Beeline does not support built-in error handling or continuation of script execution after a failed statement within the same script file.

To continue executing the remaining statements even if one fails, use a Bash (or similar) wrapper script to:

- Loop over each statement/block.
- Execute each one using a separate beeline -e "<statement>" call.
- Check the exit code after each call.
- Log errors to a file (e.g., stderr, exit code, or custom messages).
- Proceed to the next statement regardless of the previous outcome.

Sample Bash script

[root@node4 ~]# cat run_hive_statements.sh
#!/bin/bash
# HQL file to read the Hive statements from
HQL_FILE="sample_hql_success_failure.hql"
# Log file for capturing errors or output, named with the current date
LOG_FILE="error_log_$(date +%Y%m%d).log"
# Drop blank lines and "--" comment lines, then split the file into one statement
# per line: awk uses ';' as the record separator, joins multi-line statements,
# and trims the surrounding whitespace.
sed '/^\s*$/d; /^--/d' "$HQL_FILE" | awk '
  BEGIN { RS=";" }
  {
    gsub(/[\r\n]+/, " ", $0);          # Join multi-line statements onto one line
    gsub(/^[ \t]+|[ \t]+$/, "", $0);   # Trim leading/trailing whitespace
    if (length($0)) print $0 ";"
  }
' | while read -r STATEMENT; do
  # Execute each cleaned statement with its own beeline call
  echo "Executing: $STATEMENT"
  beeline -n hive -p hive -e "$STATEMENT" >> "$LOG_FILE" 2>&1
  EXIT_CODE=$?
  if [ "$EXIT_CODE" -ne 0 ]; then
    # Log the failing statement and exit code, then continue with the next one
    echo "Error executing statement:"
    echo "$STATEMENT" >> "$LOG_FILE"
    echo "Beeline exited with code: $EXIT_CODE" >> "$LOG_FILE"
  fi
done
[root@node4 ~]#

Sample HQL file

[root@node4 ~]# cat sample_hql_success_failure.hql
-- This script demonstrates both successful and failing INSERT statements
-- Successful INSERT
CREATE TABLE IF NOT EXISTS successful_table (id INT, value STRING);
INSERT INTO successful_table VALUES (1, 'success1');
INSERT INTO successful_table VALUES (2, 'success2');
SELECT * FROM successful_table;
-- Intentionally failing INSERT (ParseException)
CREATE TABLE IF NOT EXISTS failing_table (id INT, value INT);
INSERT INTO failing_table VALUES (1, 'this_will_fail);
SELECT * FROM failing_table;
-- Another successful INSERT (will only run if the calling script continues)
CREATE TABLE IF NOT EXISTS another_successful_table (id INT, data STRING);
INSERT INTO another_successful_table VALUES (101, 'continued_data');
SELECT * FROM another_successful_table;
-- Intentionally failing INSERT (table does not exist)
INSERT INTO non_existent_table SELECT * FROM successful_table;
[root@node4 ~]#

Sample error log created as part of the script

[root@node4 ~]# grep -iw "error" error_log_20250423.log
Error: Error while compiling statement: FAILED: ParseException line 1:54 character '<EOF>' not supported here (state=42000,code=40000)
Error: Error while compiling statement: FAILED: SemanticException [Error 10001]: Line 1:12 Table not found 'non_existent_table' (state=42S02,code=10001)
[root@node4 ~]# 
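To try it out, save the wrapper next to the HQL file and run it, then review the log for any failures (a minimal usage sketch based on the file names used above):

chmod +x run_hive_statements.sh
./run_hive_statements.sh
# Check which statements failed and why
grep -iw "error" "error_log_$(date +%Y%m%d).log"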
						
					
04-10-2025 10:44 PM

To provide the exact HQL query, please share the following:

- DDL for both tables
- Sample records from each table
- The expected output based on the sample data

This information will help clarify the problem statement and validate the solution.
03-28-2025 03:47 AM

* This was addressed as part of a support case.
* The Tez job failed with the error below.

Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 104
Serialization trace:
columnTypeResolvers (org.apache.hadoop.hive.ql.exec.UnionOperator)
tableDesc (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
        at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
        at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:693)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:186)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:543)
        at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:219)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:543)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$PartitionDescSerializer.read(SerializationUtilities.java:580)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$PartitionDescSerializer.read(SerializationUtilities.java:572)
        at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:813)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:181)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
        at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:219)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
        at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:543)
        at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:709)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:211)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializeObjectByKryo(SerializationUtilities.java:755)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:661)
        at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:638)
        at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:492)
        ... 22 more

* The serialization-related classes were being loaded from a different version of hive-exec (hive-exec-<version>.jar).
* Remove the older version of the jar from the HS2 classpath and aux jars to resolve the problem.
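As a quick sanity check, you can look for duplicate hive-exec jars that HS2 might pick up (a rough sketch; the search paths are assumptions and should be adjusted to your installation, and the configured hive.aux.jars.path value should be reviewed as well):

# Look for multiple hive-exec versions on the HS2 host
find /usr /opt -name 'hive-exec-*.jar' 2>/dev/null | sort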
						
					
03-25-2025 04:12 AM

If the Beeline shell gets stuck, first validate whether HS2 is up and running. Then, check if the query reaches HS2.

If the query reaches HS2 but gets stuck, analyze the HS2 jstack and HS2 logs to identify the issue.

If the query does not reach HS2, validate the Beeline jstack, HS2 jstack, and HS2 logs.

If you are unable to determine the root cause with this information, I recommend raising a support ticket for further investigation.
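For reference, a rough way to capture a few HS2 thread dumps while the query is stuck (the process lookup pattern is an assumption; run it as the user that owns the HiveServer2 process, with a JDK that provides jstack, and adjust as needed):

# Take three jstack samples of HiveServer2, 10 seconds apart
HS2_PID=$(pgrep -f HiveServer2 | head -1)
for i in 1 2 3; do
  jstack "$HS2_PID" > "hs2_jstack_${i}.txt"
  sleep 10
done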
						
					
03-21-2025 06:29 AM

To identify which user is writing the files, use HDFS CLI commands such as ls or getfacl, as shown below.
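For example (the path is a placeholder for the directory being written to):

# Show the owner and group of the files
hdfs dfs -ls /path/to/target/directory
# Show the ACL entries, if HDFS ACLs are enabled
hdfs dfs -getfacl /path/to/target/directory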
						
					
03-06-2025 07:42 AM

Assuming it's a MapReduce job, since you're looking for information related to MapReduce I/O counters, here is a script to calculate the counter info.

[hive@node4 ~]$ cat get_io_counters.sh
#!/bin/bash
# Ensure a job ID is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <job_id>"
    exit 1
fi
JOB_ID=$1
# Extract I/O counters from the MapReduce job status
mapred job -status "$JOB_ID" | egrep -A 1 'File Input Format Counters|File Output Format Counters' | awk -F'=' '
  /File Input Format Counters/ {getline; bytes_read=$2}
  /File Output Format Counters/ {getline; bytes_written=$2}
  END {
    total_io_mb = (bytes_read + bytes_written) / (1024 * 1024)
    printf "BYTES_READ=%d\nBYTES_WRITTEN=%d\nTOTAL_IO_MB=%.2f\n", bytes_read, bytes_written, total_io_mb
  }'
[hive@node4 ~]$

Sample output

[hive@node4 ~]$ ./get_io_counters.sh job_1741272271547_0007
25/03/06 15:38:34 INFO client.RMProxy: Connecting to ResourceManager at node3.playground-ggangadharan.coelab.cloudera.com/10.129.117.75:8032
25/03/06 15:38:35 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
BYTES_READ=288894
BYTES_WRITTEN=348894
TOTAL_IO_MB=0.61
[hive@node4 ~]$ 
						
					
03-04-2025 04:55 AM

							 @ggangadharan I have tried this and it is working. Thanks for the help. 
						
					
02-07-2025 11:50 AM

@zorrofrombrasil Did you get a chance to check the updates from @ggangadharan and @smruti? If so, did the information help to fix the issue, or do you have any other concerns?