
ISSUE: A simple SELECT query against an ORC table without a LIMIT clause fails with the exception below.

2017-06-26 16:00:35,228 ERROR [main]: CliDriver ( - Failed with exception serious problem
java.lang.RuntimeException: serious problem
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(
    at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(
    at org.apache.hadoop.hive.ql.Driver.getResults(
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(
    at org.apache.hadoop.hive.cli.CliDriver.processLine(
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(
    at
    at org.apache.hadoop.hive.cli.CliDriver.main(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(
    at
    at org.apache.hadoop.util.RunJar.main(
Caused by: java.lang.RuntimeException: serious problem
    at
    at
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(
    ... 15 more
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
    at
    at java.util.concurrent.FutureTask.get(
    at
    ... 19 more
Caused by: java.lang.OutOfMemoryError: Java heap space

From the stack trace we can see that the query fails while it is trying to generate ORC splits, running out of Java heap space in the process.
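Because the failing FetchTask runs split generation inside the Hive CLI process itself, one mitigation (an assumption based on general Hive/Hadoop client behavior, not stated in this article) is to give the client JVM more heap via HADOOP_CLIENT_OPTS, which the hive launcher script passes to the JVM:

```shell
# Hedged sketch: raise the Hive CLI client heap before re-running the query.
# The 4g value is illustrative; size it to your environment.
# my_orc_table is a placeholder table name.
export HADOOP_CLIENT_OPTS="-Xmx4g ${HADOOP_CLIENT_OPTS:-}"
hive -e "SELECT * FROM my_orc_table;"
```

This only buys headroom; if the table has very many files, split generation can still exhaust the larger heap, in which case the split strategy itself (below) is the better lever.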



The property hive.exec.orc.split.strategy controls which strategy ORC uses to create splits for execution. The available options are "BI", "ETL", and "HYBRID"; the default setting is HYBRID.

The HYBRID strategy reads the footers of all files if there are fewer files than the expected mapper count, switching over to generating one split per file if the average file size is smaller than the default HDFS block size. The ETL strategy always reads the ORC footers before generating splits, while the BI strategy quickly generates one split per file without reading any data from HDFS.
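Given the strategies above, a plausible workaround for this OOM (a hedged suggestion: the property name and values are from the Hive configuration documentation; the table name is a placeholder) is to switch the session to the BI strategy, so that split generation does not read every ORC footer into the client heap:

```shell
# Hedged sketch: force the BI split strategy for this session only.
# BI generates one split per file without reading ORC footers from HDFS,
# avoiding the memory-heavy footer reads of HYBRID/ETL.
# my_orc_table is a placeholder table name.
hive -e "SET hive.exec.orc.split.strategy=BI; SELECT * FROM my_orc_table;"
```

Note the trade-off: BI skips footer reads, so splits are cheaper to compute but less optimized than ETL splits; for large scheduled ETL jobs the ETL or HYBRID strategy may still be preferable.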

Version history: revision 1 of 1, last updated 06-29-2017 07:39 PM.