<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Error while accessing parquet data through hive (PARQUET-251) in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Error-while-accessing-parquet-data-through-hive-PARQUET-251/m-p/182023#M61602</link>
    <description>&lt;P&gt;This turned out to be a permission issue, introduced by a reinstallation of the Ambari agent.&lt;/P&gt;&lt;P&gt;The error occurred because /tmp/parquet-0.log and /tmp/parquet-0.log.lock were not accessible to the hive user.&lt;/P&gt;</description>
    <pubDate>Tue, 23 May 2017 09:10:19 GMT</pubDate>
    <dc:creator>Mehdi_hosseinza</dc:creator>
    <dc:date>2017-05-23T09:10:19Z</dc:date>
    <item>
      <title>Error while accessing parquet data through hive (PARQUET-251)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Error-while-accessing-parquet-data-through-hive-PARQUET-251/m-p/182022#M61601</link>
      <description>&lt;P&gt;
	I've installed Hortonworks' Ambari package and it worked fine until today.&lt;/P&gt;&lt;P&gt;
	I've run into an error while running a query on Hive. I keep getting this warning and can't fix it.&lt;/P&gt;&lt;PRE&gt;May 22, 2017 5:26:52 PM WARNING: org.apache.parquet.CorruptStatistics: Ignoring statistics because created_by could not be parsed (see PARQUET-251): parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d)
org.apache.parquet.VersionParser$VersionParseException: Could not parse created_by: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) using format: (.+) version ((.*) )?\(build ?(.*)\)
	at org.apache.parquet.VersionParser.parse(VersionParser.java:112)
	at org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:60)
	at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
	at org.apache.parquet.hadoop.ParquetFileReader$Chunk.readAllPages(ParquetFileReader.java:583)
	at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:513)
	at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
	at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
	at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.&amp;lt;init&amp;gt;(ParquetRecordReaderWrapper.java:120)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.&amp;lt;init&amp;gt;(ParquetRecordReaderWrapper.java:83)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:71)
	at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:694)
	at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:332)
	at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:458)
	at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:427)
	at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
	at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1765)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:237)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:740)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
&lt;/PRE&gt;&lt;P&gt;From what I've found by googling, this is a bug in older versions of Apache Parquet, but I don't know how to upgrade it.&lt;/P&gt;</description>
      <pubDate>Tue, 23 May 2017 00:44:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Error-while-accessing-parquet-data-through-hive-PARQUET-251/m-p/182022#M61601</guid>
      <dc:creator>Mehdi_hosseinza</dc:creator>
      <dc:date>2017-05-23T00:44:14Z</dc:date>
    </item>
    <item>
      <title>Re: Error while accessing parquet data through hive (PARQUET-251)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Error-while-accessing-parquet-data-through-hive-PARQUET-251/m-p/182023#M61602</link>
      <description>&lt;P&gt;This turned out to be a permission issue, introduced by a reinstallation of the Ambari agent.&lt;/P&gt;&lt;P&gt;The error occurred because /tmp/parquet-0.log and /tmp/parquet-0.log.lock were not accessible to the hive user.&lt;/P&gt;</description>
      <pubDate>Tue, 23 May 2017 09:10:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Error-while-accessing-parquet-data-through-hive-PARQUET-251/m-p/182023#M61602</guid>
      <dc:creator>Mehdi_hosseinza</dc:creator>
      <dc:date>2017-05-23T09:10:19Z</dc:date>
    </item>
  </channel>
</rss>