Member since: 09-16-2021
Posts: 421
Kudos Received: 55
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 456 | 10-22-2025 05:48 AM |
| | 454 | 09-05-2025 07:19 AM |
| | 1034 | 07-15-2025 02:22 AM |
| | 1627 | 06-02-2025 06:55 AM |
| | 1784 | 05-22-2025 03:00 AM |
08-28-2024
12:07 AM
Based on the INFO logs, it appears that an open transaction is blocking the compaction cleaner process. This requires a separate investigation, so I advise raising a support case to resolve the problem. To dig further, we would also need the HMS logs, a backend DB dump, and the output of the `hdfs dfs -ls -R` command.
08-23-2024
06:09 AM
1 Kudo
Since the partition-related information is not mentioned in the write statement, the staging directory is created under the table directory instead of the partition directory.
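As an illustration, including an explicit partition spec in the statement itself directs the staged and final files into the partition directory; the table and column names below are hypothetical:

```sql
-- Hypothetical table "sales" partitioned by dt.
-- With a static partition spec, Hive stages and writes the data
-- under the partition directory .../sales/dt=2024-08-23/ rather
-- than directly under the table directory.
INSERT INTO TABLE sales PARTITION (dt = '2024-08-23')
SELECT id, amount FROM staging_sales;
```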
08-22-2024
11:53 PM
To determine the cause of the failure, it is recommended to review the HMS logs within the specified time frame, as the exception stack trace does not provide sufficient information.
08-22-2024
05:25 AM
1 Kudo
Make sure the following property is enabled at the cluster level: `hive.acid.direct.insert.enabled`. Also use the formats below to insert into partitioned tables.

Static partition:

```
df.write.format(HIVE_WAREHOUSE_CONNECTOR).mode("append").option("partition", "c1='val1',c2='val2'").option("table", "t1").save();
```

Dynamic partition:

```
df.write.format(HIVE_WAREHOUSE_CONNECTOR).mode("append").option("partition", "c1,c2").option("table", "t1").save();
```
08-22-2024
04:34 AM
1 Kudo
This looks similar to the KB article. Please follow the instructions in the KB.
08-22-2024
04:24 AM
1 Kudo
Please verify whether there are any long-running transactions on the cluster and, if found, consider aborting them with the `ABORT TRANSACTIONS` command, provided it is safe to do so. You can use the `SHOW TRANSACTIONS` command in Beeline to identify the long-running transactions. Alternatively, you can run the following backend DB query:

```sql
SELECT * FROM "TXNS"
WHERE "TXN_ID" = (
  SELECT min(res.id) FROM (
    SELECT "NTXN_NEXT" AS id FROM "NEXT_TXN_ID"
    UNION ALL
    SELECT "MHL_TXNID" FROM "MIN_HISTORY_LEVEL"
    WHERE "MHL_TXNID" = (
      SELECT min("MHL_MIN_OPEN_TXNID") FROM "MIN_HISTORY_LEVEL"
    )
  ) res
);
```

Note: this query is written for a Postgres backend DB; modify it depending on the backend DB you are using.
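The Beeline side of the check above can look like the following; the transaction id is a hypothetical example taken from the `SHOW TRANSACTIONS` output:

```sql
-- In Beeline: list open transactions with their state and start time.
SHOW TRANSACTIONS;

-- If a transaction has been open for an unusually long time and it is
-- safe to do so, abort it by its id (12345 is a hypothetical example):
ABORT TRANSACTIONS 12345;
```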
08-13-2024
04:37 AM
1 Kudo
From the error stack trace, the problem looks like a connection-closed exception between the Tez AM application and the HBase server (host12.com). The connection closure surfaced as a java.net.SocketTimeoutException: in simpler terms, the Tez AM tried to communicate with HBase, but the call timed out after 60 seconds (callTimeout) because no response arrived within that window. The failure occurred during the initialization of the vertex "vertex_1723487266861_0004_2_00" within the Tez application.

Possible reasons:
- The HBase server might be overloaded, experiencing internal issues, or down entirely.
- Hive might have an incorrect HBase server address or port configured.
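If the HBase side turns out to be healthy but simply slow to respond, one stopgap is to raise the client-side timeouts. A sketch of the relevant hbase-site.xml properties on the client (Hive/Tez) side; the 120000 ms values are illustrative, not recommendations:

```xml
<!-- hbase-site.xml on the client side; values are illustrative only -->
<property>
  <name>hbase.rpc.timeout</name>
  <!-- default is 60000 ms, which matches the 60 s callTimeout seen above -->
  <value>120000</value>
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>120000</value>
</property>
```

Raising timeouts only hides slowness; the underlying HBase load or connectivity issue should still be investigated.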
08-12-2024
01:51 AM
Can you please share the stack trace from the HiveServer2 logs and the spark-submit command used?
08-11-2024
11:37 PM
Could you please share the cluster version and the spark-submit command? What is the HWC execution mode? Could you also share the complete stack trace? Since the issue relates to MoveTask, HIVE-24163 may be a factor.
08-08-2024
07:54 AM
Did the query fail in the compilation stage or the execution stage? Could you please share the complete stack trace of the query failure?