
Out of Memory Error in Hive

Contributor

I am getting the error below while trying to execute a query like "select * from a where a.col1 not in (select b.col1 from b)":

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:2694)
    at java.lang.String.<init>(String.java:203)
    at java.lang.StringBuilder.toString(StringBuilder.java:405)
    at org.apache.hadoop.fs.Path.toString(Path.java:390)
    at org.apache.hadoop.hive.ql.optimizer.AbstractBucketJoinProc.getBucketFilePathsOfPartition(AbstractBucketJoinProc.java:87)
    at org.apache.hadoop.hive.ql.optimizer.metainfo.annotation.OpTraitsRulesProcFactory$TableScanRule.checkBucketedTable(OpTraitsRulesProcFactory.java:147)
    at org.apache.hadoop.hive.ql.optimizer.metainfo.annotation.OpTraitsRulesProcFactory$TableScanRule.process(OpTraitsRulesProcFactory.java:174)
    at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
    at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
    at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
    at org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
    at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
    at org.apache.hadoop.hive.ql.optimizer.metainfo.annotation.AnnotateWithOpTraits.transform(AnnotateWithOpTraits.java:91)
    at org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsAnnotation(TezCompiler.java:249)
    at org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:122)
    at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:102)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10188)
    at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:211)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

I tried increasing the values of the properties below, but it is not working. Hive is running on Tez:

mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
hive.tez.container.size
hive.tez.java.opts
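
For reference, a sketch of how these can be set at the session level (the values below are illustrative, not the exact ones tried):

-- Session-level overrides from the Hive CLI (values are examples only)
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=4096;
set hive.tez.container.size=4096;
-- JVM heap for the Tez container; a common guideline is ~80% of hive.tez.container.size
set hive.tez.java.opts=-Xmx3277m;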

1 ACCEPTED SOLUTION

Rising Star

The problem is probably that too much data is moving through the shuffle phase. You can reduce the amount of data moving between tasks in the SHUFFLE steps by filtering more aggressively in your queries and by looking carefully at your input splits and reduce summary steps. If you have the Ambari Tez View installed, I would recommend inspecting each of the Tez tasks and looking at the SHUFFLE BYTES counters to see how much data is moving between the steps. If you see that the early steps are moving a lot of data between tasks, then you have probably found the root cause of your out-of-memory exception, and you should be able to tune your Hive query to filter data earlier in the process.
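
As one illustration of filtering earlier, a NOT IN subquery like the one in the question can often be rewritten as a left outer join anti-pattern so rows are eliminated during the join rather than afterwards (a sketch; note the two forms differ if b.col1 contains NULLs):

-- Original form:
--   select * from a where a.col1 not in (select b.col1 from b);
-- Anti-join rewrite (assumes b.col1 has no NULL values):
select a.*
from a
left outer join b on a.col1 = b.col1
where b.col1 is null;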


7 REPLIES

Expert Contributor

Hello pooja,

From your stack trace, your table seems to be bucketed. Can you share your table definition?

Could you also try running the query with the setting hive.auto.convert.join.noconditionaltask=false?
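
For example, from the Hive CLI (a sketch using the query from your post):

set hive.auto.convert.join.noconditionaltask=false;
select * from a where a.col1 not in (select b.col1 from b);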

Contributor

Hello,

I tried with hive.auto.convert.join.noconditionaltask=false, but it didn't work.

None of the tables are bucketed.

Contributor
@nmaillard

I am having the same problem. I am on http://host:8080/#/main/services/HIVE/configs but am not sure what to change in the configs.


Contributor

Where exactly is the OOM occurring? Is it in the AM (Application Master)? Are the tables partitioned?

Does this work for you with a scaled-down dataset? The table definition would be helpful to look at.
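
For example, the output of the following (using the table names from your query) would help:

show create table a;
show create table b;
-- and, if the tables are partitioned:
show partitions a;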

I am facing the same issue. Can someone please help?