Trying to generate data from hive_testbench throws "Malformed ORC file hdfs" Exception

Contributor

I am working on setting up and configuring hive-testbench. I applied all the required configuration steps, but whenever I try to generate the data, I get the following exception:

Caused by: java.lang.RuntimeException: java.io.IOException: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
        at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:196)
        at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:135)
        at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:101)
        at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
        at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
        at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:650)
        at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:621)
        at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145)
        at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:408)
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:128)
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
        ... 14 more
Caused by: java.io.IOException: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:253)
        at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
        ... 25 more
Caused by: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://mycluster/tmp/tpcds-generate/100/date_dim/data-m-00099. Invalid postscript.
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.ensureOrcFooter(ReaderImpl.java:251)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:376)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:317)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:238)
        at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat.getRecordReader(VectorizedOrcInputFormat.java:175)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createVectorizedReader(OrcInputFormat.java:1239)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1252)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
        ... 26 more


Also, the tpcds_bin_partitioned_orc_100 DB is created but remains empty because of these errors (i.e. it contains no tables). I tried generating the data by calling the script on its own, and I also tried running it with the FORMAT=textfile and FORMAT=orc options, but I still get the same error.
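
A check along these lines confirms that the database exists but contains no tables (just a sketch, using the database name above):

-- List the TPC-DS databases created so far
SHOW DATABASES LIKE 'tpcds*';

-- The ORC database is there, but SHOW TABLES comes back empty
USE tpcds_bin_partitioned_orc_100;
SHOW TABLES;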

Any idea how I can resolve this and generate the data in the tpcds_bin_partitioned_orc_100 DB?


5 REPLIES


Hi @Sarah Maadawy. Did you run ./tpcds-setup.sh 100? That's 100 GB of data. Are you sure you wanted that much data? You might be running out of space.

Contributor

I tried with 10 GB and I have enough space, but I am still getting the same error.

Contributor

I also tried to query the tables in tpcds_text_10 before generating the tables in tpcds_bin_partitioned_orc_10, and they throw the same error. That could make sense, though, because as I understand the scripts, the tables are first created in text format and only converted to ORC afterwards.
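
One way to confirm what format those tables were actually created with is to look at the table metadata (a sketch; date_dim is just one of the TPC-DS tables, taken from the error above):

-- Check how the "text" staging table was actually declared. If the
-- cluster default took over, InputFormat/OutputFormat will show ORC
-- classes rather than the plain-text input/output formats.
USE tpcds_text_10;
DESCRIBE FORMATTED date_dim;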

Contributor (accepted solution)

It turned out that Hive creates tables in ORC format by default, while hive-testbench assumes the default table format is text. I had to change the DDL in hive-testbench/ddl-tpcds/text/alltable.sql to use STORED AS TEXTFILE.
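
To illustrate the kind of change: the staging-table definitions need to spell out the text layout explicitly instead of relying on the cluster default. This is a hypothetical, abbreviated example, not the actual contents of alltable.sql (most columns omitted; the HDFS location is the one from the error above):

-- Hypothetical, abbreviated staging-table definition; the real file
-- lists every TPC-DS column. The key part is declaring the delimited
-- text layout and STORED AS TEXTFILE explicitly.
CREATE EXTERNAL TABLE date_dim (
  d_date_sk  BIGINT,
  d_date_id  STRING,
  d_date     STRING
  -- ... remaining columns omitted ...
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/tmp/tpcds-generate/100/date_dim';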

New Contributor


You can change the default behavior with "set hive.default.fileformat=TextFile".
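
For example, set at the session level (or in hive-site.xml), this makes tables created without an explicit STORED AS clause come out as text. Note that on some distributions managed tables are additionally governed by a separate hive.default.fileformat.managed setting, so treat this as a sketch:

-- Make CREATE TABLE statements without a STORED AS clause produce
-- text tables instead of ORC for this session.
SET hive.default.fileformat=TextFile;

-- A table created now defaults to TEXTFILE (hypothetical example).
CREATE TABLE demo_default_format (id INT, name STRING);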