Unable to read multiple Parquet files in Hive

New Contributor

We are using ParquetFileWriter to generate Parquet files and want to be able to query them in Hive. So in Hive we have set it up as an external table pointing to the HDFS folder where the Parquet files are located. This all works great.

 

Next we tried to set up a partitioned table, so we changed the writer to generate multiple folders based on the partition key Year. The external table in Hive has been updated with a PARTITIONED BY clause, and we've also manually added the partitions in Hive using ALTER TABLE statements (sketched after the table DDL below).

 

Now, on querying the table, we are getting incorrect results: it appears that the contents of the first Parquet file loaded are returned for all partitions. So if we query for year 2012 we get 5 records, and we get the same 5 records for 2013 and 2014. If we restart the Hive shell and query directly for 2014, we get the correct result for that partition only, and then all subsequent queries return the same data from the 2014 partition.
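
For illustration, here is the kind of query that shows the problem (the year values are just examples; the table definition is further down):

-- expected: only the rows written to the year=2012 partition
SELECT * FROM sinet_test WHERE year = '2012';

-- actually returns the same 5 rows as the 2012 query
SELECT * FROM sinet_test WHERE year = '2013';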

 

We are using CDH 4.6 with Hive 0.10.

The files were created with the following dependencies:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>0.10.0</version>
</dependency>

<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>parquet-hive-bundle</artifactId>
    <version>1.4.0</version>
</dependency>

 

create external table sinet_test
(
datatimestamp INT,
serverid INT
)
PARTITIONED BY (year String)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS
INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'
LOCATION '/tmp/sinet/writer';
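
The partitions themselves were added manually along these lines (the partition locations are illustrative; the real paths match the folders the writer creates under /tmp/sinet/writer):

ALTER TABLE sinet_test ADD PARTITION (year='2012') LOCATION '/tmp/sinet/writer/year=2012';
ALTER TABLE sinet_test ADD PARTITION (year='2013') LOCATION '/tmp/sinet/writer/year=2013';
ALTER TABLE sinet_test ADD PARTITION (year='2014') LOCATION '/tmp/sinet/writer/year=2014';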

 

Can someone please help us look into this?

 

Thanks

Kanwal


4 REPLIES

Contributor
Hi,

I think there was a bug related to reading incorrect footers, which was fixed in https://issues.apache.org/jira/browse/HIVE-5783.

Can you try upgrading to CDH5 and reproducing?

New Contributor

Thanks. The bug seems to be related to doing SELECT * vs. selecting individual columns. Now I'm able to query individual columns, but running a query that requires an MR job fails with the following error:

 

Diagnostic Messages for this Task:

 

java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:372)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:319)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:433)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:394)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native
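
To make the distinction concrete, the following are representative of what now works and what fails (not the exact queries we ran):

-- selecting individual columns works
SELECT datatimestamp, serverid FROM sinet_test WHERE year = '2014';

-- an aggregate that launches an MR job fails with the exception above
SELECT year, COUNT(*) FROM sinet_test GROUP BY year;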

 

Any suggestions?

 

Thanks

Kanwal

Contributor
That's a very generic error. Can you look at the task logs to see if there is a longer stack trace?

New Contributor
I did look into the task logs and couldn't find any additional information.

It's the same stack trace in the logs as well.

Thanks
Kanwal