<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: create a parquet table in Hive from a dataframe in Scala, in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/create-a-parquet-table-in-Hive-from-a-dataframe-in-Scala/m-p/118108#M80891</link>
    <description>&lt;P&gt;I have a similar question, but it is about reading data from HIVE tables that are stored in parquet format. I am getting the below error when reading the table data using Spark SQL:&lt;/P&gt;&lt;P&gt;java.lang.ArrayIndexOutOfBoundsException: 7
        at org.apache.parquet.bytes.BytesUtils.bytesToLong(BytesUtils.java:250)
        at org.apache.parquet.column.statistics.LongStatistics.setMinMaxFromBytes(LongStatistics.java:50)
        at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:255)&lt;/P&gt;</description>
    <pubDate>Wed, 05 Jul 2017 19:21:52 GMT</pubDate>
    <dc:creator>brahmacharykasa</dc:creator>
    <dc:date>2017-07-05T19:21:52Z</dc:date>
  </channel>
</rss>