<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Compaction stuck Ready for Cleaning in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/393099#M248338</link>
    <description>&lt;DIV class="p-rich_text_block--no-overflow"&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/100837"&gt;@Lorenzo&lt;/a&gt;&amp;nbsp;The issue seems to be related to HIVE-27191 where some mhl_txnids do not exist in TXNS,completed_txn_components txn_components table but they are still present in min_history_level table, as a result, the cleaner gets blocked and many entries are stuck in the ready-for-cleaning state. To confirm that collect the output of below query&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;SELECT MHL_TXNID FROM HIVE.MIN_HISTORY_LEVEL WHERE MHL_MIN_OPEN_TXNID = (SELECT MIN(MHL_MIN_OPEN_TXNID) FROM HIVE.MIN_HISTORY_LEVEL);&lt;/DIV&gt;&lt;DIV class="p-rich_text_section"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;Once we get the output of the above query check if those txn ids are there in TXNS,completed_txn_components txn_components tables using below commands.&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from txn_components where tc_txnid IN (MHL_TXNID );&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from completed_txn_components where ctc_txnid IN (MHL_TXNID);&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from TXNS where ctc_txnid IN (MHL_TXNID);&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;If we got 0 results from the above queries this confirms that the MHL_TXNIDs we got above are orphans and we need to remove them in order to unblock the cleaner.&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;delete from MIN_HISTORY_LEVEL where MHL_TXNID=13422; --(repeat for 
all)&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;Hope this helps you in resolving the issue&lt;/DIV&gt;</description>
    <pubDate>Thu, 05 Sep 2024 11:53:20 GMT</pubDate>
    <dc:creator>Pzahid</dc:creator>
    <dc:date>2024-09-05T11:53:20Z</dc:date>
    <item>
      <title>Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392333#M248087</link>
      <description>&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;Hi all, &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;in my test cluster I am noticing a slowdown in the execution of acid queries.&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz"&gt;&lt;SPAN class="ryNqvb"&gt;Analyzing in detail I noticed that the compactions remain stuck at "ready for cleaning" and there are many delta files.&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz"&gt;&lt;SPAN class="ryNqvb"&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="delta files.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/41495i3E5084E864891AC6/image-size/large?v=v2&amp;amp;px=999" role="button" title="delta files.png" alt="delta files.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz"&gt;&lt;SPAN class="ryNqvb"&gt;I also tried to manually launch the compaction without any result.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;&lt;STRONG&gt; hive.metastore.housekeeping.threads.on&lt;/STRONG&gt; and &lt;STRONG&gt;hive.metastore.housekeeping.threads.on&lt;/STRONG&gt; is &lt;STRONG&gt;true&lt;/STRONG&gt; only in 1 hive metastore host.&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;This is a table properties:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;bucketing_version 2&lt;BR /&gt;transactional true&lt;BR 
/&gt;transactional_properties default&lt;BR /&gt;transient_lastDdlTime 1720453037&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;In the development cluster with the identical configuration I do not have this problem.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt; Do you have any suggestions?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz"&gt;&lt;SPAN class="ryNqvb"&gt;I'm running in CDP 7.1.9&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz"&gt;&lt;SPAN class="ryNqvb"&gt;Thanks, Lorenzo&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 06:26:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392333#M248087</guid>
      <dc:creator>Lorenzo</dc:creator>
      <dc:date>2026-04-21T06:26:51Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392346#M248090</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Please verify whether there are any long-running transactions on the cluster and, if found, consider aborting them with the "abort transactions" command, if it is safe to do so.&lt;BR /&gt;&lt;BR /&gt;You can use the "show transactions" command in Beeline to identify the long-running transactions.&lt;BR /&gt;&lt;BR /&gt;Alternatively, you can use the following backend DB query.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;SELECT * FROM "TXNS" WHERE "TXN_ID" = ( SELECT min(res.id) FROM ( SELECT "NTXN_NEXT" AS id FROM "NEXT_TXN_ID" UNION ALL SELECT "MHL_TXNID" FROM "MIN_HISTORY_LEVEL" WHERE "MHL_TXNID" = ( SELECT min("MHL_MIN_OPEN_TXNID") FROM "MIN_HISTORY_LEVEL" ) ) res)&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;&lt;BR /&gt;Note: This query is for a Postgres backend DB; modify it to match the backend DB you are using.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 22 Aug 2024 11:24:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392346#M248090</guid>
      <dc:creator>ggangadharan</dc:creator>
      <dc:date>2024-08-22T11:24:57Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392366#M248101</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;DIV class="QFw9Te BLojaf"&gt;Running the show transactions command, I do in fact see open transactions dating back to June 13th. I tried abort transactions "id", but I receive the following error:&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.ddl.DDLTask. org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&lt;EM&gt;&lt;FONT size="2"&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;INFO : Completed compiling command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0); Time taken: 0.001 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0): abort transactions 13422&lt;BR /&gt;INFO : Starting task [Stage-0:DDL] in serial mode&lt;BR /&gt;ERROR : Failed&lt;BR /&gt;org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5549) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation.execute(AbortTransactionsOperation.java:35) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:82) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:785) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:524) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:518) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:234) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:?]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]&lt;BR /&gt;at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.1.1.7.1.9.4-4.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:354) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:829) ~[?:?]&lt;BR /&gt;Caused by: org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_abort_txns(ThriftHiveMetastore.java:5929) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.abort_txns(ThriftHiveMetastore.java:5916) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.abortTxns(HiveMetaStoreClient.java:3445) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]&lt;BR /&gt;at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3759) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5546) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;... 26 more&lt;BR /&gt;ERROR : DDLTask failed, DDL Operation: class org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation&lt;BR /&gt;org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5549) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation.execute(AbortTransactionsOperation.java:35) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:82) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) 
~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:785) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:524) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.Driver.run(Driver.java:518) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:234) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method) ~[?:?]&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.1.1.7.1.9.4-4.jar:?]&lt;BR /&gt;at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:354) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 
~[?:?]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]&lt;BR /&gt;at java.lang.Thread.run(Thread.java:829) ~[?:?]&lt;BR /&gt;Caused by: org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_abort_txns(ThriftHiveMetastore.java:5929) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.abort_txns(ThriftHiveMetastore.java:5916) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.abortTxns(HiveMetaStoreClient.java:3445) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]&lt;BR /&gt;at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
~[?:?]&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3759) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]&lt;BR /&gt;at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5546) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]&lt;BR /&gt;... 26 more&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.ddl.DDLTask. org.apache.thrift.TApplicationException: Internal error processing abort_txns&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0); Time taken: 1.018 seconds&lt;BR /&gt;INFO : OK&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/EM&gt;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&lt;FONT size="3"&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;Any suggestion?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&lt;FONT size="3"&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;Thanks,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/DIV&gt;&lt;DIV class="QFw9Te BLojaf"&gt;&lt;FONT size="3"&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;Lorenzo&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/DIV&gt;</description>
      <pubDate>Thu, 22 Aug 2024 13:30:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392366#M248101</guid>
      <dc:creator>Lorenzo</dc:creator>
      <dc:date>2024-08-22T13:30:27Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392409#M248118</link>
      <description>&lt;P&gt;&lt;SPAN&gt;To determine the cause of the failure, review the HMS logs within that time frame, as the exception stack trace does not provide sufficient information.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 23 Aug 2024 06:53:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392409#M248118</guid>
      <dc:creator>ggangadharan</dc:creator>
      <dc:date>2024-08-23T06:53:49Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392624#M248194</link>
      <description>&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;I deleted the open transactions from the Oracle db.&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;After restarting hive unfortunately I still have the same problems.&lt;/SPAN&gt;&lt;/SPAN&gt; &lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;There are no error messages from the logs and the tables are not locked.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="HwtZe"&gt;&lt;SPAN class="jCAhz ChMk0b"&gt;&lt;SPAN class="ryNqvb"&gt;&lt;EM&gt;INFO org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: Starting cleaning for id:5365402,dbname:XXXX,tableName:XXXX,partName:schema_sorgente=XXXX,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:826,errorMessage:null,workerId: null,initiatorId: null&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;2024-08-27 14:26:53,877 WARN org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: id=5365402 Remained 21 obsolete directories from hdfs://XXXX. [base_0000201_v1772045,base_0000014_v1403023,delta_0000002_0000002_0000,delete_delta_0000003_0000003_0000,delta_0000003_0000003_0000,delta_0000004_0000004_0000,delete_delta_0000007_0000007_0000,delta_0000007_0000007_0000,delta_0000008_0000008_0000,delete_delta_0000011_0000011_0000,delta_0000011_0000011_0000,delta_0000012_0000012_0000,delete_delta_0000013_0000013_0000,delta_0000013_0000013_0000,delta_0000014_0000014_0000,delete_delta_0000200_0000200_0000,delta_0000200_0000200_0000,delta_0000201_0000201_0000,delete_delta_0000498_0000498_0000,delta_0000498_0000498_0000,delta_0000499_0000499_0000]&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;2024-08-27 14:26:53,877 WARN org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: No files were removed. 
Leaving queue entry id:5365402,dbname:XXXX,tableName:XXXX,partName:schema_sorgente=XXXX,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:826,errorMessage:null,workerId: null,initiatorId: null in ready for cleaning state.&lt;/EM&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 27 Aug 2024 15:44:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392624#M248194</guid>
      <dc:creator>Lorenzo</dc:creator>
      <dc:date>2024-08-27T15:44:27Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392679#M248205</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Based on the INFO logs, it appears that an open transaction is blocking the compaction cleaner process. This requires a separate investigation, so I advise raising a support case to resolve the problem. Additionally, we need to examine the HMS logs, a backend DB dump, and the output of the "hdfs dfs -ls -R" command.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 28 Aug 2024 07:07:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/392679#M248205</guid>
      <dc:creator>ggangadharan</dc:creator>
      <dc:date>2024-08-28T07:07:09Z</dc:date>
    </item>
    <item>
      <title>Re: Compaction stuck Ready for Cleaning</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/393099#M248338</link>
      <description>&lt;DIV class="p-rich_text_block--no-overflow"&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/100837"&gt;@Lorenzo&lt;/a&gt;&amp;nbsp;The issue seems to be related to HIVE-27191 where some mhl_txnids do not exist in TXNS,completed_txn_components txn_components table but they are still present in min_history_level table, as a result, the cleaner gets blocked and many entries are stuck in the ready-for-cleaning state. To confirm that collect the output of below query&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;SELECT MHL_TXNID FROM HIVE.MIN_HISTORY_LEVEL WHERE MHL_MIN_OPEN_TXNID = (SELECT MIN(MHL_MIN_OPEN_TXNID) FROM HIVE.MIN_HISTORY_LEVEL);&lt;/DIV&gt;&lt;DIV class="p-rich_text_section"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;Once we get the output of the above query check if those txn ids are there in TXNS,completed_txn_components txn_components tables using below commands.&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from txn_components where tc_txnid IN (MHL_TXNID );&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from completed_txn_components where ctc_txnid IN (MHL_TXNID);&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;select * from TXNS where ctc_txnid IN (MHL_TXNID);&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;If we got 0 results from the above queries this confirms that the MHL_TXNIDs we got above are orphans and we need to remove them in order to unblock the cleaner.&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;delete from MIN_HISTORY_LEVEL where MHL_TXNID=13422; --(repeat for 
all)&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="p-rich_text_block--no-overflow"&gt;Hope this helps you in resolving the issue&lt;/DIV&gt;</description>
      <pubDate>Thu, 05 Sep 2024 11:53:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Compaction-stuck-Ready-for-Cleaning/m-p/393099#M248338</guid>
      <dc:creator>Pzahid</dc:creator>
      <dc:date>2024-09-05T11:53:20Z</dc:date>
    </item>
  </channel>
</rss>

