<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201104#M78591</link>
    <description>&lt;P&gt;Perhaps:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;"Reading/writing to an ACID table from a non-ACID session is not allowed. In other words, the Hive transaction manager must be set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager in order to work with ACID tables."&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;&lt;/P&gt;&lt;P&gt;&lt;A href="https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations" target="_blank"&gt;https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 23 May 2018 01:11:41 GMT</pubDate>
    <dc:creator>umair_khan</dc:creator>
    <dc:date>2018-05-23T01:11:41Z</dc:date>
    <item>
      <title>ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201103#M78590</link>
      <description>&lt;P&gt;I created the table below, then ingested data partitioned by rk and dt. After two separate INSERT INTO statements, running a select count(*) keeps failing with: "Caused by: java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables"&lt;/P&gt;&lt;P&gt;What does this error mean, and how do I work around it?&lt;BR /&gt;&lt;BR /&gt;set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
&lt;BR /&gt;set hive.exec.dynamic.partition=true; &lt;BR /&gt;
set hive.exec.dynamic.partition.mode=nonstrict; &lt;BR /&gt;&lt;BR /&gt;CREATE TABLE `table1_orc`
(
  `uuid` string,
   token string,
   ip_address string,
  `raw_event` string
)
PARTITIONED BY 
(
  `rk` string,
  `dt` string
)
CLUSTERED BY (token) INTO 10 BUCKETS
STORED AS ORC
TBLPROPERTIES (
  'transactional'='true');&lt;/P&gt;</description>
      <pubDate>Tue, 22 May 2018 19:26:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201103#M78590</guid>
      <dc:creator>lightsailpro</dc:creator>
      <dc:date>2018-05-22T19:26:02Z</dc:date>
    </item>
    <item>
      <title>Re: ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201104#M78591</link>
      <description>&lt;P&gt;Perhaps:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;"Reading/writing to an ACID table from a non-ACID session is not allowed. In other words, the Hive transaction manager must be set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager in order to work with ACID tables."&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;&lt;/P&gt;&lt;P&gt;&lt;A href="https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations" target="_blank"&gt;https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 23 May 2018 01:11:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201104#M78591</guid>
      <dc:creator>umair_khan</dc:creator>
      <dc:date>2018-05-23T01:11:41Z</dc:date>
    </item>
    <item>
      <title>Re: ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201105#M78592</link>
      <description>&lt;P&gt;hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager is indeed set in the configuration. I run the select count(*) from the same session in which the insert into was done, and the insert into itself works fine.&lt;BR /&gt;&lt;BR /&gt;I also created a similar ACID table without partitions, and the query runs fine there, so it seems to be related to partitioning.&lt;/P&gt;</description>
      <pubDate>Wed, 23 May 2018 02:59:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201105#M78592</guid>
      <dc:creator>lightsailpro</dc:creator>
      <dc:date>2018-05-23T02:59:13Z</dc:date>
    </item>
    <item>
      <title>Re: ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201106#M78593</link>
      <description>&lt;P&gt;--The first error was: "Caused by: java.io.IOException: [Error 30022]: Must use HiveInputFormat to read ACID tables (set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat)".&lt;BR /&gt;&lt;BR /&gt;--After I set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat, I got a second error: "Caused by: java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables".&lt;BR /&gt;&lt;BR /&gt;--Setting hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager makes no difference.&lt;BR /&gt;&lt;BR /&gt;--I tried a major compaction on all partitions of the table and still get the same error. Any suggestion for working around the error is appreciated.&lt;BR /&gt;&lt;BR /&gt;Here is the table definition:&lt;BR /&gt;CREATE TABLE `raw_orc1`
(
  `uuid` string,
   token string,
   ip string,
  event string
)
PARTITIONED BY 
(
  `rk` string,
  `dt` string
)
CLUSTERED BY (token) INTO 10 BUCKETS
STORED AS ORC
TBLPROPERTIES (
  'transactional'='true'
  );&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
        at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
        at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:266)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.&amp;lt;init&amp;gt;(HadoopShimsSecure.java:213)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:333)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:719)
        at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
        at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
        at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:674)
        at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:633)
        at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145)
        at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:405)
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:124)
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:252)
        ... 26 more
Caused by: java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.raiseAcidTablesMustBeReadWithAcidReaderException(OrcInputFormat.java:265)
        at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.&amp;lt;init&amp;gt;(VectorizedOrcInputFormat.java:70)
        at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat.getRecordReader(VectorizedOrcInputFormat.java:177)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createVectorizedReader(OrcInputFormat.java:1309)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1322)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.&amp;lt;init&amp;gt;(CombineHiveRecordReader.java:67)
        ... 31 more&lt;/P&gt;</description>
      <pubDate>Wed, 23 May 2018 20:06:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201106#M78593</guid>
      <dc:creator>lightsailpro</dc:creator>
      <dc:date>2018-05-23T20:06:07Z</dc:date>
    </item>
    <item>
      <title>Re: ORC acid table "select count(*) error"  - java.io.IOException: [Error 30021]: An ORC ACID reader required to read ACID tables</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201107#M78594</link>
      <description>&lt;P&gt;Finally figured it out: I needed to set hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;&lt;/P&gt;</description>
      <pubDate>Thu, 24 May 2018 00:13:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ORC-acid-table-quot-select-count-error-quot-java-io/m-p/201107#M78594</guid>
      <dc:creator>lightsailpro</dc:creator>
      <dc:date>2018-05-24T00:13:07Z</dc:date>
    </item>
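    <!-- Editor's summary of the thread's resolution. Reading a partitioned, transactional ORC table on the Tez engine apparently requires the ACID-aware input format on both the generic and the Tez-specific setting, in addition to the transaction manager. A minimal session sketch, not a verified configuration: hive.support.concurrency is taken from the linked Hive Transactions wiki page rather than from this thread, and the table name matches the original post.

    ```sql
    -- Session-level settings for querying ACID tables (sketch; values from the
    -- thread, plus hive.support.concurrency from the Hive Transactions wiki).
    SET hive.support.concurrency=true;
    SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    -- Both input-format settings: the Tez-specific one is what resolved this
    -- thread when running on the Tez execution engine.
    SET hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
    SET hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;

    SELECT COUNT(*) FROM table1_orc;
    ```
    -->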
  </channel>
</rss>

