<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive Job - OutOfMemoryError: Java heap space in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397893#M250016</link>
    <description>&lt;UL&gt;&lt;LI&gt;The job failed with an OutOfMemoryError (OOME) at the child task attempt level, as the stack trace indicates.&lt;/LI&gt;&lt;LI&gt;Several MapReduce properties have been set that may override the heap settings derived from the hive.tez.container.size property:&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;SET mapreduce.map.java.opts=-Xmx3686m;
SET mapreduce.reduce.java.opts=-Xmx3686m;
SET mapred.child.java.opts=-Xmx10g;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Check the YARN application logs to confirm whether the child task attempts were launched with 80% of hive.tez.container.size. If they were not, remove the MapReduce configurations and re-run the job.&lt;/LI&gt;&lt;LI&gt;Before re-running the query, collect statistics for all the source tables; this helps the optimizer build a better execution plan.&lt;/LI&gt;&lt;/UL&gt;</description>
    <pubDate>Fri, 22 Nov 2024 13:29:56 GMT</pubDate>
    <dc:creator>ggangadharan</dc:creator>
    <dc:date>2024-11-22T13:29:56Z</dc:date>
    <item>
      <title>Hive Job - OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397338#M249819</link>
      <description>&lt;P class="first:mt-0 last:mb-0"&gt;&lt;SPAN&gt;I have a cluster, we are running INSERT OVERWRITE QUERY through HIVE CLI which fails with OutOfMemoryError: Java heap space.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I have tried multiple set parameters and updated the machine config but no luck.&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;set hive.exec.max.dynamic.partitions=4000;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.exec.max.dynamic.partitions.pernode=500;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.exec.dynamic.partition.mode=nonstrict;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;SET mapreduce.map.java.opts=-Xmx3686m;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;SET mapreduce.reduce.java.opts=-Xmx3686m;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;SET mapred.child.java.opts=-Xmx10g;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.tez.container.size=16384;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set tez.task.resource.memory.mb=16384;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set tez.am.resource.memory.mb=8192;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.support.concurrency=false;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.vectorized.execution.enabled=true;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.vectorized.execution.reduce.enabled=true;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.exec.orc.split.strategy=BI;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;set hive.exec.reducers.max=150;&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P class="first:mt-0 last:mb-0"&gt;&lt;SPAN&gt;Error thrown&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;Status: Failed&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Vertex failed, vertexName=Reducer 4, vertexId=vertex_1731327513546_0052_5_08, diagnostics=[Task failed, taskId=task_1731327513546_0052_5_08_000045, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;  
      at java.base/java.io.BufferedOutputStream.&amp;lt;init&amp;gt;(BufferedOutputStream.java:75)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createOutputStream(GoogleHadoopOutputStream.java:198)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.&amp;lt;init&amp;gt;(GoogleHadoopOutputStream.java:177)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.lambda$create$5(GoogleHadoopFileSystem.java:547)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem$$Lambda$273/0x000000080077e040.apply(Unknown Source)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding$$Lambda$274/0x000000080077d040.apply(Unknown Source)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.create(GoogleHadoopFileSystem.java:521)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1234)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1211)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.orc.impl.PhysicalFsWriter.&amp;lt;init&amp;gt;(PhysicalFsWriter.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.orc.impl.WriterImpl.&amp;lt;init&amp;gt;(WriterImpl.java:187)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;       
 at org.apache.hadoop.hive.ql.io.orc.WriterImpl.&amp;lt;init&amp;gt;(WriterImpl.java:94)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:334)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:990)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:816)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.createForwardJoinObject(CommonJoinOperator.java:504)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:661)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genJoinObject(CommonJoinOperator.java:533)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at 
org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:936)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:331)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:294)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at java.base/java.io.BufferedOutputStream.&amp;lt;init&amp;gt;(BufferedOutputStream.java:75)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.createOutputStream(GoogleHadoopOutputStream.java:198)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.&amp;lt;init&amp;gt;(GoogleHadoopOutputStream.java:177)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.lambda$create$5(GoogleHadoopFileSystem.java:547)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem$$Lambda$273/0x000000080077e040.apply(Unknown Source)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding$$Lambda$274/0x000000080077d040.apply(Unknown Source)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at 
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.create(GoogleHadoopFileSystem.java:521)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1234)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1211)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.orc.impl.PhysicalFsWriter.&amp;lt;init&amp;gt;(PhysicalFsWriter.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.orc.impl.WriterImpl.&amp;lt;init&amp;gt;(WriterImpl.java:187)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.io.orc.WriterImpl.&amp;lt;init&amp;gt;(WriterImpl.java:94)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:334)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:990)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at 
org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:816)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.createForwardJoinObject(CommonJoinOperator.java:504)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:661)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genJoinObject(CommonJoinOperator.java:533)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:936)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:331)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;        at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:294)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:49, Vertex vertex_1731327513546_0052_5_08 [Reducer 4] killed/failed due to:OWN_TASK_FAILURE]&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 11 Nov 2024 18:16:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397338#M249819</guid>
      <dc:creator>Keyboard00</dc:creator>
      <dc:date>2024-11-11T18:16:03Z</dc:date>
    </item>
    <item>
      <title>Re: Hive Job - OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397339#M249820</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/120769"&gt;@Keyboard00&lt;/a&gt;&amp;nbsp;Welcome to the Cloudera Community!&lt;BR /&gt;&lt;BR /&gt;To help you get the best possible solution, I have tagged our Hive experts&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/45798"&gt;@james_jones&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/38161"&gt;@cravani&lt;/a&gt;&amp;nbsp; who may be able to assist you further.&lt;BR /&gt;&lt;BR /&gt;Please keep us updated on your post, and we hope you find a satisfactory solution to your query.&lt;/P&gt;</description>
      <pubDate>Mon, 11 Nov 2024 21:48:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397339#M249820</guid>
      <dc:creator>DianaTorres</dc:creator>
      <dc:date>2024-11-11T21:48:36Z</dc:date>
    </item>
    <item>
      <title>Re: Hive Job - OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397893#M250016</link>
      <description>&lt;UL&gt;&lt;LI&gt;The job failed with an OutOfMemoryError (OOME) at the child task attempt level, as the stack trace indicates.&lt;/LI&gt;&lt;LI&gt;Several MapReduce properties have been set that may override the heap settings derived from the hive.tez.container.size property:&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;SET mapreduce.map.java.opts=-Xmx3686m;
SET mapreduce.reduce.java.opts=-Xmx3686m;
SET mapred.child.java.opts=-Xmx10g;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Check the YARN application logs to confirm whether the child task attempts were launched with 80% of hive.tez.container.size. If they were not, remove the MapReduce configurations and re-run the job.&lt;/LI&gt;&lt;LI&gt;Before re-running the query, collect statistics for all the source tables; this helps the optimizer build a better execution plan.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Fri, 22 Nov 2024 13:29:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-Job-OutOfMemoryError-Java-heap-space/m-p/397893#M250016</guid>
      <dc:creator>ggangadharan</dc:creator>
      <dc:date>2024-11-22T13:29:56Z</dc:date>
    </item>
  </channel>
</rss>

