<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question java.io.IOException: Error reading file in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/java-io-IOException-Error-reading-file/m-p/177493#M139741</link>
    <description>&lt;P&gt;I'm unable to query a partitioned Hive table that contains null values. I get results from the ORC-formatted table when I run a simple SELECT with LIMIT 10, but the query fails when I try to filter out the null values (col_name != ' '). The file has sufficient permissions.&lt;/P&gt;&lt;P&gt;Please let me know where I'm going wrong.&lt;/P&gt;&lt;P&gt;It throws the error below:&lt;/P&gt;&lt;PRE&gt;Diagnostic Messages for this Task:
Error: java.io.IOException: java.io.IOException: java.io.IOException: Error reading file: hdfs://filename
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:227)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:137)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: java.io.IOException: Error reading file: hdfs://filename
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:355)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:106)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:42)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:225)
        ... 11 more
Caused by: java.io.IOException: Error reading file: hdfs://filename
        at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1051)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:171)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:145)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
        ... 15 more
Caused by: java.io.IOException: Seek outside of data in compressed stream Stream for column 29 kind DATA position: 15982 length: 130886 range: 0 offset: 17609 limit: 17609 range 0 = 0 to 15982; range 1 = 106407 to 24479 uncompressed: 128 to 128 to 15982
        at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.seek(InStream.java:365)
        at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:182)
        at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:238)
        at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readLongBE(SerializationUtils.java:1139)
        at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.unrolledUnPackBytes(SerializationUtils.java:1055)
        at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.unrolledUnPack16(SerializationUtils.java:1014)
        at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:884)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:255)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:62)
        at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:302)
        at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StringDictionaryTreeReader.next(TreeReaderFactory.java:1712)
        at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StringTreeReader.next(TreeReaderFactory.java:1397)
        at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.next(TreeReaderFactory.java:2004)
        at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1044)
        ... 18 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 6487   Cumulative CPU: 7812.99 sec   HDFS Read: 2685196842 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 days 2 hours 10 minutes 12 seconds 990 msec&lt;/PRE&gt;
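&lt;P&gt;For reference, the two queries look roughly like this; the table and column names below are placeholders, not the real ones. The first query returns rows, the second throws the error above:&lt;/P&gt;&lt;PRE&gt;-- Works: plain scan of the partitioned ORC table (placeholder name my_orc_table)
SELECT * FROM my_orc_table LIMIT 10;

-- Fails with the IOException above: same table, filtering on one column
-- (my_orc_table and col_name stand in for the real names)
SELECT * FROM my_orc_table WHERE col_name != ' ';&lt;/PRE&gt;</description>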
    <pubDate>Tue, 31 Oct 2017 00:44:41 GMT</pubDate>
    <dc:creator>raaj_muthu</dc:creator>
    <dc:date>2017-10-31T00:44:41Z</dc:date>
  </channel>
</rss>