<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive error:java.lang.OutOfMemoryError: Java heap space in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358361#M237816</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/100776"&gt;@pankshiv1809&lt;/a&gt;,&amp;nbsp;don't include&amp;nbsp;&lt;SPAN&gt;set tez.task.resource.memory.mb=10240;&lt;/SPAN&gt;.&lt;/P&gt;</description>
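    The advice above can be sketched as a session that omits the explicit tez.task.resource.memory.mb override and sizes the Tez container instead, letting the task JVM heap follow the container; the 4096 MB and 0.8 values below are illustrative assumptions, not settings taken from this thread:

    ```shell
    # Hedged sketch: instead of pinning tez.task.resource.memory.mb, size the
    # container and derive the heap from it. Example values only.
    hive -e "
    set hive.execution.engine=tez;
    set hive.tez.container.size=4096;
    set tez.container.max.java.heap.fraction=0.8;
    "
    ```

    Sizing the heap as a fraction of the container (rather than hard-coding both) avoids the mismatch where a fixed task-memory override conflicts with the container size the cluster actually grants.
    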
    <pubDate>Fri, 25 Nov 2022 10:03:11 GMT</pubDate>
    <dc:creator>asish</dc:creator>
    <dc:date>2022-11-25T10:03:11Z</dc:date>
    <item>
      <title>Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358308#M237801</link>
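    For context on the question that follows: its counters report 412,191 raw input splits grouped into only 14 map tasks, which concentrates a very large number of small files on each mapper. A common mitigation (hedged; the byte values below are illustrative assumptions, not from this thread) is to let Tez group splits by size rather than forcing a fixed split count:

    ```shell
    # Hedged sketch: group splits by size so each mapper handles a bounded
    # share of the many small input files. Example values only.
    hive -e "
    set tez.grouping.min-size=134217728;
    set tez.grouping.max-size=1073741824;
    "
    ```

    With size-based grouping, the number of map tasks scales with input volume instead of being fixed, which spreads per-split bookkeeping (such as read-column projection strings) across more, smaller-heap tasks.
    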
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;Could you please help resolve the concern below?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are executing the script below and getting an error; the log is enclosed for reference.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hive -e "set&lt;BR /&gt;hive.merge.tezfiles=true;&lt;BR /&gt;set hive.merge.mapfiles=true;&lt;BR /&gt;set hive.merge.mapredfiles=true;&lt;BR /&gt;set tez.queue.name=BIAdv;&lt;BR /&gt;set hive.execution.engine=tez;&lt;BR /&gt;set hive.vectorized.execution.enabled=true;&lt;BR /&gt;set hive.vectorized.execution.reduce.enabled=true;&lt;BR /&gt;set hive.exec.dynamic.partition=true;set hive.exec.dynamic.partition.mode=nonstrict;&lt;BR /&gt;set hive.exec.max.dynamic.partitions.pernode=20000;&lt;BR /&gt;set hive.exec.max.dynamic.partitions=100000;set hive.merge.size.per.task=134217724;&lt;BR /&gt;set hive.merge.smallfiles.avgsize=134217724;&lt;BR /&gt;set tez.grouping.split-count=1;&lt;BR /&gt;INSERT overwrite TABLE &amp;lt;table-name1&amp;gt; partition (reported_date,last_usage_date)&lt;BR /&gt;SELECT * from &amp;lt;table-name1&amp;gt; where reported_date='2022-11-16' and last_usage_date='2022-04-10';"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ERROR - Log :&lt;/P&gt;&lt;P&gt;----------------------------------------------------------------------------------------------&lt;BR /&gt;ERROR : Status: Failed&lt;BR /&gt;ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1668260139179_120620_1_00, diagnostics=[Task failed, taskId=task_1668260139179_120620_1_00_000002, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at 
java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at 
org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, 
failedTasks:1 killedTasks:6, Vertex vertex_1668260139179_120620_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]&lt;BR /&gt;ERROR : Vertex killed, vertexName=Reducer 2, vertexId=vertex_1668260139179_120620_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1668260139179_120620_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]&lt;BR /&gt;ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1&lt;BR /&gt;INFO : org.apache.tez.common.counters.DAGCounter:&lt;BR /&gt;INFO : NUM_FAILED_TASKS: 1&lt;BR /&gt;INFO : NUM_KILLED_TASKS: 6&lt;BR /&gt;INFO : NUM_SUCCEEDED_TASKS: 7&lt;BR /&gt;INFO : TOTAL_LAUNCHED_TASKS: 14&lt;BR /&gt;INFO : OTHER_LOCAL_TASKS: 2&lt;BR /&gt;INFO : RACK_LOCAL_TASKS: 5&lt;BR /&gt;INFO : AM_CPU_MILLISECONDS: 381090&lt;BR /&gt;INFO : AM_GC_TIME_MILLIS: 1830&lt;BR /&gt;INFO : File System Counters:&lt;BR /&gt;INFO : FILE_BYTES_READ: 339136&lt;BR /&gt;INFO : FILE_BYTES_WRITTEN: 4844738&lt;BR /&gt;INFO : HDFS_BYTES_READ: 4005665433&lt;BR /&gt;INFO : HDFS_BYTES_WRITTEN: 1590596979&lt;BR /&gt;INFO : HDFS_READ_OPS: 71884&lt;BR /&gt;INFO : HDFS_WRITE_OPS: 2638&lt;BR /&gt;INFO : HDFS_OP_CREATE: 1183&lt;BR /&gt;INFO : HDFS_OP_GET_FILE_STATUS: 5887&lt;BR /&gt;INFO : HDFS_OP_MKDIRS: 279&lt;BR /&gt;INFO : HDFS_OP_OPEN: 65997&lt;BR /&gt;INFO : HDFS_OP_RENAME: 1176&lt;BR /&gt;INFO : org.apache.tez.common.counters.TaskCounter:&lt;BR /&gt;INFO : SPILLED_RECORDS: 1176&lt;BR /&gt;INFO : GC_TIME_MILLIS: 3749370&lt;BR /&gt;INFO : TASK_DURATION_MILLIS: 10490948&lt;BR /&gt;INFO : CPU_MILLISECONDS: 35432800&lt;BR /&gt;INFO : PHYSICAL_MEMORY_BYTES: 46170898432&lt;BR /&gt;INFO : VIRTUAL_MEMORY_BYTES: 65302401024&lt;BR /&gt;INFO : COMMITTED_HEAP_BYTES: 46170898432&lt;BR /&gt;INFO : INPUT_RECORDS_PROCESSED: 46000210&lt;BR /&gt;INFO : INPUT_SPLIT_LENGTH_BYTES: 3020210217&lt;BR /&gt;INFO : OUTPUT_RECORDS: 1176&lt;BR /&gt;INFO : 
OUTPUT_LARGE_RECORDS: 0&lt;BR /&gt;INFO : OUTPUT_BYTES: 8054528&lt;BR /&gt;INFO : OUTPUT_BYTES_WITH_OVERHEAD: 8065910&lt;BR /&gt;INFO : OUTPUT_BYTES_PHYSICAL: 4675170&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_READ: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILL_COUNT: 0&lt;BR /&gt;INFO : SHUFFLE_CHUNK_COUNT: 7&lt;BR /&gt;INFO : HIVE:&lt;BR /&gt;INFO : CREATED_DYNAMIC_PARTITIONS: 110&lt;BR /&gt;INFO : CREATED_FILES: 1176&lt;BR /&gt;INFO : DESERIALIZE_ERRORS: 0&lt;BR /&gt;INFO : RECORDS_IN_Map_1: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_1_dim_cd_db.dim_vas_ppu_subs_base: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_FS_3: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_GBY_6: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_MAP_0: 0&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_RS_7: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_SEL_2: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_SEL_5: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_TS_0: 46000210&lt;BR /&gt;INFO : TaskCounter_Map_1_INPUT_dim_vas_ppu_subs_base:&lt;BR /&gt;INFO : INPUT_RECORDS_PROCESSED: 46000210&lt;BR /&gt;INFO : INPUT_SPLIT_LENGTH_BYTES: 3020210217&lt;BR /&gt;INFO : TaskCounter_Map_1_OUTPUT_Reducer_2:&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_READ: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILL_COUNT: 0&lt;BR /&gt;INFO : OUTPUT_BYTES: 8054528&lt;BR /&gt;INFO : OUTPUT_BYTES_PHYSICAL: 4675170&lt;BR /&gt;INFO : OUTPUT_BYTES_WITH_OVERHEAD: 8065910&lt;BR /&gt;INFO : OUTPUT_LARGE_RECORDS: 0&lt;BR /&gt;INFO : OUTPUT_RECORDS: 1176&lt;BR /&gt;INFO : SHUFFLE_CHUNK_COUNT: 7&lt;BR /&gt;INFO : SPILLED_RECORDS: 1176&lt;BR /&gt;INFO : org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters:&lt;BR /&gt;INFO : GROUPED_INPUT_SPLITS_Map_1: 14&lt;BR /&gt;INFO : INPUT_DIRECTORIES_Map_1: 169&lt;BR /&gt;INFO : INPUT_FILES_Map_1: 412191&lt;BR /&gt;INFO : RAW_INPUT_SPLITS_Map_1: 412191&lt;BR 
/&gt;ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1668260139179_120620_1_00, diagnostics=[Task failed, taskId=task_1668260139179_120620_1_00_000002, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at 
org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:6, Vertex vertex_1668260139179_120620_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1668260139179_120620_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1668260139179_120620_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. 
failedVertices:1 killedVertices:1&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5); Time taken: 7826.447 seconds&lt;BR /&gt;INFO : Compiling command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5): INSERT overwrite TABLE dim_cd_db.dim_vas_ppu_subs_base partition (reported_date,last_usage_date) SELECT * from dim_cd_db.dim_vas_ppu_subs_base where reported_date='2022-09-25'&lt;BR /&gt;INFO : Semantic Analysis Completed (retrial = false)&lt;BR /&gt;INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:dim_vas_ppu_subs_base.subs_msisdn, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.circle_id, type:varchar(4), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.subs_key, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.pre_post_ind, type:varchar(2), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.act_mdt_cdr_id_key, type:varchar(70), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.called_calling_number, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.short_code, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.discovery_bearer, type:varchar(6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.service_id, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.service_sub_sub_type_id, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.content_partner_code, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_price_amt, type:decimal(14,6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.current_price_amt, type:decimal(14,6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_start_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_end_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.last_status_updt_date, type:date, 
comment:null), FieldSchema(name:dim_vas_ppu_subs_base.load_timestamp, type:timestamp, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.reported_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.last_usage_date, type:date, comment:null)], properties:null)&lt;BR /&gt;INFO : Completed compiling command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5); Time taken: 1.779 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5): INSERT overwrite TABLE dim_cd_db.dim_vas_ppu_subs_base partition (reported_date,last_usage_date) SELECT * from dim_cd_db.dim_vas_ppu_subs_base where reported_date='2022-09-25'&lt;BR /&gt;INFO : Query ID = hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5&lt;BR /&gt;INFO : Total jobs = 3&lt;BR /&gt;INFO : Launching Job 1 out of 3&lt;BR /&gt;INFO : Starting task [Stage-1:MAPRED] in serial mode&lt;BR /&gt;INFO : Subscribed to counters: [] for queryId: hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5&lt;BR /&gt;INFO : Tez session hasn't been created yet. 
Opening session&lt;BR /&gt;INFO : Dag name: INSERT overwrite TAB...ed_date='2022-09-25' (Stage-1)&lt;BR /&gt;INFO : Status: Running (Executing on YARN cluster with App id application_1668260139179_121442)&lt;/P&gt;&lt;P&gt;ERROR : Status: Failed&lt;BR /&gt;ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1668260139179_121442_1_00, diagnostics=[Task failed, taskId=task_1668260139179_121442_1_00_000002, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at 
java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:6, Vertex vertex_1668260139179_121442_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]&lt;BR /&gt;ERROR : Vertex killed, vertexName=Reducer 2, vertexId=vertex_1668260139179_121442_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:2, Vertex vertex_1668260139179_121442_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]&lt;BR /&gt;ERROR : DAG did not succeed due to VERTEX_FAILURE. 
failedVertices:1 killedVertices:1&lt;BR /&gt;INFO : org.apache.tez.common.counters.DAGCounter:&lt;BR /&gt;INFO : NUM_FAILED_TASKS: 1&lt;BR /&gt;INFO : NUM_KILLED_TASKS: 8&lt;BR /&gt;INFO : NUM_SUCCEEDED_TASKS: 7&lt;BR /&gt;INFO : TOTAL_LAUNCHED_TASKS: 16&lt;BR /&gt;INFO : OTHER_LOCAL_TASKS: 3&lt;BR /&gt;INFO : RACK_LOCAL_TASKS: 4&lt;BR /&gt;INFO : AM_CPU_MILLISECONDS: 344940&lt;BR /&gt;INFO : AM_GC_TIME_MILLIS: 1404&lt;BR /&gt;INFO : File System Counters:&lt;BR /&gt;INFO : FILE_BYTES_READ: 784&lt;BR /&gt;INFO : FILE_BYTES_WRITTEN: 4249582&lt;BR /&gt;INFO : HDFS_BYTES_READ: 4005665433&lt;BR /&gt;INFO : HDFS_BYTES_WRITTEN: 1590596809&lt;BR /&gt;INFO : HDFS_READ_OPS: 71884&lt;BR /&gt;INFO : HDFS_WRITE_OPS: 2658&lt;BR /&gt;INFO : HDFS_OP_CREATE: 1183&lt;BR /&gt;INFO : HDFS_OP_GET_FILE_STATUS: 5887&lt;BR /&gt;INFO : HDFS_OP_MKDIRS: 299&lt;BR /&gt;INFO : HDFS_OP_OPEN: 65997&lt;BR /&gt;INFO : HDFS_OP_RENAME: 1176&lt;BR /&gt;INFO : org.apache.tez.common.counters.TaskCounter:&lt;BR /&gt;INFO : SPILLED_RECORDS: 1176&lt;BR /&gt;INFO : GC_TIME_MILLIS: 2605189&lt;BR /&gt;INFO : TASK_DURATION_MILLIS: 8536033&lt;BR /&gt;INFO : CPU_MILLISECONDS: 33374430&lt;BR /&gt;INFO : PHYSICAL_MEMORY_BYTES: 45646610432&lt;BR /&gt;INFO : VIRTUAL_MEMORY_BYTES: 65281306624&lt;BR /&gt;INFO : COMMITTED_HEAP_BYTES: 45646610432&lt;BR /&gt;INFO : INPUT_RECORDS_PROCESSED: 46000210&lt;BR /&gt;INFO : INPUT_SPLIT_LENGTH_BYTES: 3020733963&lt;BR /&gt;INFO : OUTPUT_RECORDS: 1176&lt;BR /&gt;INFO : OUTPUT_LARGE_RECORDS: 0&lt;BR /&gt;INFO : OUTPUT_BYTES: 8054528&lt;BR /&gt;INFO : OUTPUT_BYTES_WITH_OVERHEAD: 8059316&lt;BR /&gt;INFO : OUTPUT_BYTES_PHYSICAL: 4249190&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_READ: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILL_COUNT: 0&lt;BR /&gt;INFO : SHUFFLE_CHUNK_COUNT: 7&lt;BR /&gt;INFO : HIVE:&lt;BR /&gt;INFO : CREATED_DYNAMIC_PARTITIONS: 114&lt;BR /&gt;INFO : CREATED_FILES: 1176&lt;BR /&gt;INFO : DESERIALIZE_ERRORS: 0&lt;BR 
/&gt;INFO : RECORDS_IN_Map_1: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_1_dim_cd_db.dim_vas_ppu_subs_base: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_FS_3: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_GBY_6: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_MAP_0: 0&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_RS_7: 1176&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_SEL_2: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_SEL_5: 46000210&lt;BR /&gt;INFO : RECORDS_OUT_OPERATOR_TS_0: 46000210&lt;BR /&gt;INFO : TaskCounter_Map_1_INPUT_dim_vas_ppu_subs_base:&lt;BR /&gt;INFO : INPUT_RECORDS_PROCESSED: 46000210&lt;BR /&gt;INFO : INPUT_SPLIT_LENGTH_BYTES: 3020733963&lt;BR /&gt;INFO : TaskCounter_Map_1_OUTPUT_Reducer_2:&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_READ: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0&lt;BR /&gt;INFO : ADDITIONAL_SPILL_COUNT: 0&lt;BR /&gt;INFO : OUTPUT_BYTES: 8054528&lt;BR /&gt;INFO : OUTPUT_BYTES_PHYSICAL: 4249190&lt;BR /&gt;INFO : OUTPUT_BYTES_WITH_OVERHEAD: 8059316&lt;BR /&gt;INFO : OUTPUT_LARGE_RECORDS: 0&lt;BR /&gt;INFO : OUTPUT_RECORDS: 1176&lt;BR /&gt;INFO : SHUFFLE_CHUNK_COUNT: 7&lt;BR /&gt;INFO : SPILLED_RECORDS: 1176&lt;BR /&gt;INFO : org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters:&lt;BR /&gt;INFO : GROUPED_INPUT_SPLITS_Map_1: 14&lt;BR /&gt;INFO : INPUT_DIRECTORIES_Map_1: 169&lt;BR /&gt;INFO : INPUT_FILES_Map_1: 412191&lt;BR /&gt;INFO : RAW_INPUT_SPLITS_Map_1: 412191&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, vertexId=vertex_1668260139179_121442_1_00, diagnostics=[Task failed, taskId=task_1668260139179_121442_1_00_000002, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR 
/&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:6, Vertex vertex_1668260139179_121442_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1668260139179_121442_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:2, Vertex vertex_1668260139179_121442_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5); Time taken: 7380.715 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, vertexId=vertex_1668260139179_121442_1_00, diagnostics=[Task failed, taskId=task_1668260139179_121442_1_00_000002, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR /&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at java.util.Arrays.copyOf(Arrays.java:3332)&lt;BR /&gt;at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)&lt;BR /&gt;at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)&lt;BR /&gt;at java.lang.StringBuilder.append(StringBuilder.java:136)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumnNames(ColumnProjectionUtils.java:239)&lt;BR /&gt;at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:163)&lt;BR /&gt;at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:964)&lt;BR /&gt;at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:409)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)&lt;BR /&gt;at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)&lt;BR /&gt;at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)&lt;BR /&gt;at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)&lt;BR /&gt;at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)&lt;BR /&gt;at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)&lt;BR /&gt;at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)&lt;BR 
/&gt;at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)&lt;BR /&gt;at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:6, Vertex vertex_1668260139179_121442_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1668260139179_121442_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:2, Vertex vertex_1668260139179_121442_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1 (state=08S01,code=2)&lt;BR /&gt;Closing: 0: jdbc:hive2://ndc3hdpprodmn05.vodafoneidea.com:2181,ndc3hdpprodmn06.vodafoneidea.com:2181,ndc3hdpprodmn07.vodafoneidea.com:2181/default;httpPath=cliservice;password=biuser2;principal=hive/_HOST@INROOT.IN;serviceDiscoveryMode=zooKeeper;transportMode=http;user=biuser2;zooKeeperNamespace=hiveserver2&lt;BR /&gt;[biuser2@ndc3hdpproden01 Sudhakar]$&lt;BR /&gt;You have new mail in /var/spool/mail/biuser2&lt;BR /&gt;[biuser2@ndc3hdpproden01 Sudhakar]$ cat merge_vas_ppu.out&lt;BR /&gt;SLF4J: Class path contains multiple SLF4J bindings.&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/hdp/3.1.5.0-152/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/hdp/3.1.5.0-152/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#multiple_bindings" 
target="_blank"&gt;http://www.slf4j.org/codes.html#multiple_bindings&lt;/A&gt; for an explanation.&lt;BR /&gt;SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]&lt;BR /&gt;Connecting to jdbc:hive2://ndc3hdpprodmn05.vodafoneidea.com:2181,ndc3hdpprodmn06.vodafoneidea.com:2181,ndc3hdpprodmn07.vodafoneidea.com:2181/default;httpPath=cliservice;password=biuser2;principal=hive/_HOST@INROOT.IN;serviceDiscoveryMode=zooKeeper;transportMode=http;user=biuser2;zooKeeperNamespace=hiveserver2&lt;BR /&gt;22/11/21 22:51:33 [main]: INFO jdbc.HiveConnection: Connected to NDC3HDPPRODMN06.vodafoneidea.com:10001&lt;BR /&gt;Connected to: Apache Hive (version 3.1.0.3.1.5.0-152)&lt;BR /&gt;Driver: Hive JDBC (version 3.1.0.3.1.5.0-152)&lt;BR /&gt;Transaction isolation: TRANSACTION_REPEATABLE_READ&lt;BR /&gt;No rows affected (0.064 seconds)&lt;BR /&gt;No rows affected (0.009 seconds)&lt;BR /&gt;No rows affected (0.008 seconds)&lt;BR /&gt;No rows affected (0.006 seconds)&lt;BR /&gt;No rows affected (0.005 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.003 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;No rows affected (0.004 seconds)&lt;BR /&gt;INFO : Compiling command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5): INSERT overwrite TABLE dim_cd_db.dim_vas_ppu_subs_base partition (reported_date,last_usage_date) SELECT * from dim_cd_db.dim_vas_ppu_subs_base where reported_date='2022-09-25'&lt;BR /&gt;INFO : Semantic Analysis Completed (retrial = false)&lt;BR /&gt;INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:dim_vas_ppu_subs_base.subs_msisdn, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.circle_id, type:varchar(4), comment:null), 
FieldSchema(name:dim_vas_ppu_subs_base.subs_key, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.pre_post_ind, type:varchar(2), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.act_mdt_cdr_id_key, type:varchar(70), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.called_calling_number, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.short_code, type:varchar(25), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.discovery_bearer, type:varchar(6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.service_id, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.service_sub_sub_type_id, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.content_partner_code, type:varchar(30), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_price_amt, type:decimal(14,6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.current_price_amt, type:decimal(14,6), comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_start_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.activation_end_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.last_status_updt_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.load_timestamp, type:timestamp, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.reported_date, type:date, comment:null), FieldSchema(name:dim_vas_ppu_subs_base.last_usage_date, type:date, comment:null)], properties:null)&lt;BR /&gt;INFO : Completed compiling command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5); Time taken: 1.636 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5): INSERT overwrite TABLE dim_cd_db.dim_vas_ppu_subs_base partition (reported_date,last_usage_date) SELECT * from dim_cd_db.dim_vas_ppu_subs_base where reported_date='2022-09-25'&lt;BR /&gt;INFO : Query ID = 
hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5&lt;BR /&gt;INFO : Total jobs = 3&lt;BR /&gt;INFO : Launching Job 1 out of 3&lt;BR /&gt;INFO : Starting task [Stage-1:MAPRED] in serial mode&lt;BR /&gt;INFO : Subscribed to counters: [] for queryId: hive_20221121225133_cda2cb8e-97d3-48be-a1df-c0cd86b00ac5&lt;BR /&gt;INFO : Tez session hasn't been created yet. Opening session&lt;BR /&gt;INFO : Dag name: INSERT overwrite TAB...ed_date='2022-09-25' (Stage-1)&lt;BR /&gt;INFO : Status: Running (Executing on YARN cluster with App id application_1668260139179_120620)&lt;/P&gt;</description>
      <pubDate>Thu, 24 Nov 2022 09:51:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358308#M237801</guid>
      <dc:creator>pankshiv1809</dc:creator>
      <dc:date>2022-11-24T09:51:14Z</dc:date>
    </item>
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358357#M237813</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/100776"&gt;@pankshiv1809&lt;/a&gt;&amp;nbsp; Please increase the container size:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;set hive.tez.container.size=10240;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;set tez.runtime.io.sort.mb=4096;&amp;nbsp; ==&amp;gt; 40% of&amp;nbsp;hive.tez.container.size&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Keep increasing the container size until the job succeeds.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Please also collect table and column statistics: &lt;A href="https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_cloud-data-access/content/hive-analyzing-tables.html" target="_blank"&gt;https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_cloud-data-access/content/hive-analyzing-tables.html&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="p1"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Please mark this "Accept as Solution" if it answers your query.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Nov 2022 07:59:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358357#M237813</guid>
      <dc:creator>asish</dc:creator>
      <dc:date>2022-11-25T07:59:48Z</dc:date>
    </item>
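The sizing rule in the reply above (tez.runtime.io.sort.mb at roughly 40% of hive.tez.container.size) can be sketched as a small shell snippet that derives the sort-buffer value from a chosen container size and emits the corresponding `set` statements. The 10240 MB container value is the one suggested in the reply; treat it as an illustrative starting point for your own cluster, not a fixed recommendation.

```shell
#!/bin/sh
# Derive tez.runtime.io.sort.mb as ~40% of hive.tez.container.size,
# then print the Hive "set" statements you would prepend to the query.
CONTAINER_MB=10240                      # hive.tez.container.size (illustrative)
SORT_MB=$((CONTAINER_MB * 40 / 100))    # 40% rule from the reply above

cat <<EOF
set hive.tez.container.size=${CONTAINER_MB};
set tez.runtime.io.sort.mb=${SORT_MB};
EOF
```

For CONTAINER_MB=10240 this prints `set tez.runtime.io.sort.mb=4096;`, matching the values quoted in the reply.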
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358360#M237815</link>
      <description>&lt;P&gt;Hi Asish,&lt;/P&gt;&lt;P&gt;I was trying to add the parameters below as a memory workaround. I will include the new parameters you mentioned in today's run and share an update. Hopefully they provide enough memory to resolve the return code 2 failure.&lt;/P&gt;&lt;P&gt;set tez.am.resource.memory.mb=16384;&lt;BR /&gt;set tez.task.resource.memory.mb=16384;&lt;BR /&gt;set hive.tez.container.size=16384;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for sharing the parameters with values:&lt;/P&gt;&lt;P&gt;set tez.am.resource.memory.mb=10240;&lt;BR /&gt;set tez.task.resource.memory.mb=10240;&lt;BR /&gt;set tez.runtime.io.sort.mb=4096;&lt;BR /&gt;set hive.tez.container.size=10240;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Nov 2022 08:23:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358360#M237815</guid>
      <dc:creator>pankshiv1809</dc:creator>
      <dc:date>2022-11-25T08:23:31Z</dc:date>
    </item>
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358361#M237816</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/100776"&gt;@pankshiv1809&lt;/a&gt;,&amp;nbsp;don't include&amp;nbsp;&lt;SPAN&gt;set tez.task.resource.memory.mb=10240;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Nov 2022 10:03:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358361#M237816</guid>
      <dc:creator>asish</dc:creator>
      <dc:date>2022-11-25T10:03:11Z</dc:date>
    </item>
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358362#M237817</link>
      <description>&lt;P&gt;Okay. Is there any problem if I do include it in the task settings?&lt;/P&gt;</description>
      <pubDate>Fri, 25 Nov 2022 10:05:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358362#M237817</guid>
      <dc:creator>pankshiv1809</dc:creator>
      <dc:date>2022-11-25T10:05:03Z</dc:date>
    </item>
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358373#M237824</link>
      <description>&lt;P&gt;Please find the difference between hive.tez.container.size and tez.task.resource.memory.mb below.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;hive.tez.container.size&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This property specifies the Tez container size. Its value should usually be the same as, or a small multiple (1 or 2 times) of, the YARN minimum container size&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;yarn.scheduler.minimum-allocation-mb&lt;/STRONG&gt;,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and it should not exceed the value of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;yarn.scheduler.maximum-allocation-mb&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;As a general rule, don't set it higher than the memory per processor, since you want one processor per container and you want to spin up multiple containers.&lt;/P&gt;&lt;P&gt;You can find a very detailed answer and a great architecture diagram in the Hortonworks community answer&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://community.hortonworks.com/articles/14309/demystify-tez-tuning-step-by-step.html" rel="nofollow noreferrer" target="_blank"&gt;here&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;tez.task.resource.memory.mb&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The amount of memory used by the task launched inside a Tez container.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;tez.task.resource.memory.mb&lt;/STRONG&gt; should be set &amp;lt;&amp;nbsp;&lt;STRONG&gt;hive.tez.container.size&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;If left unset, it is recalculated automatically, so run the job without setting it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Nov 2022 15:37:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358373#M237824</guid>
      <dc:creator>asish</dc:creator>
      <dc:date>2022-11-25T15:37:29Z</dc:date>
    </item>
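The ordering constraint explained in the reply above (tez.task.resource.memory.mb &lt; hive.tez.container.size, which in turn must not exceed yarn.scheduler.maximum-allocation-mb) can be sketched as a hypothetical pre-flight check. All three values below are illustrative placeholders, not settings from the thread; substitute your cluster's actual configuration.

```shell
#!/bin/sh
# Hypothetical sanity check for the memory ordering described above:
#   tez.task.resource.memory.mb < hive.tez.container.size <= yarn.scheduler.maximum-allocation-mb
TASK_MB=8192            # tez.task.resource.memory.mb (only if you set it at all)
CONTAINER_MB=10240      # hive.tez.container.size
YARN_MAX_MB=16384       # yarn.scheduler.maximum-allocation-mb

if [ "$TASK_MB" -lt "$CONTAINER_MB" ] && [ "$CONTAINER_MB" -le "$YARN_MAX_MB" ]; then
  echo "OK: task < container <= yarn max"
else
  echo "BAD: revisit the memory settings" >&2
  exit 1
fi
```

Setting all three to the same value (as in the 16384/16384/16384 attempt earlier in the thread) fails the strict "task &lt; container" check, which is why the reply recommends leaving tez.task.resource.memory.mb unset and letting it be recalculated.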
    <item>
      <title>Re: Hive error:java.lang.OutOfMemoryError: Java heap space</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358396#M237835</link>
      <description>&lt;P&gt;Good morning Asish, thanks for sharing the difference between the two parameters. I will note that&amp;nbsp;&lt;STRONG&gt;tez.task.resource.memory.mb&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;should be set &amp;lt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;hive.tez.container.size&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I appreciate your support.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Pankaj Shivankar&lt;/P&gt;</description>
      <pubDate>Sat, 26 Nov 2022 05:34:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hive-error-java-lang-OutOfMemoryError-Java-heap-space/m-p/358396#M237835</guid>
      <dc:creator>pankshiv1809</dc:creator>
      <dc:date>2022-11-26T05:34:30Z</dc:date>
    </item>
  </channel>
</rss>

