<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Java heap space issues while running spark jobs - Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/327736#M230132</link>
    <description>Support Questions thread: java.lang.OutOfMemoryError (Java heap space) in Spark executors on YARN, with job configuration and queue capacity details.</description>
    <pubDate>Fri, 15 Oct 2021 12:28:50 GMT</pubDate>
    <dc:creator>Kallem</dc:creator>
    <dc:date>2021-10-15T12:28:50Z</dc:date>
    <item>
      <title>Java heap space issues while running spark jobs</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/327736#M230132</link>
      <description>&lt;P&gt;Hello Team,&lt;/P&gt;&lt;P&gt;Can someone please help me? I am facing out-of-memory errors in my Spark jobs. Please find the log excerpt, job configuration, and YARN queue details below.&lt;/P&gt;&lt;P&gt;21/10/11 17:22:53 INFO executor.Executor: Finished task 194.0 in stage 79.0 (TID 14855). 11767 bytes result sent to driver&lt;BR /&gt;21/10/11 17:23:34 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM&lt;BR /&gt;21/10/11 17:23:34 ERROR executor.Executor: Exception in task 167.0 in stage 79.0 (TID 14825)&lt;BR /&gt;java.lang.OutOfMemoryError: Java heap space&lt;/P&gt;&lt;P&gt;21/10/11 17:23:34 INFO storage.DiskBlockManager: Shutdown hook called&lt;BR /&gt;21/10/11 17:23:34 INFO util.ShutdownHookManager: Shutdown hook called&lt;BR /&gt;21/10/11 17:23:34 INFO executor.Executor: Not reporting error to driver during JVM shutdown.&lt;BR /&gt;21/10/11 17:23:34 ERROR util.SparkUncaughtExceptionHandler: [Container in shutdown] Uncaught exception in thread Thread[Executor task la&lt;BR /&gt;java.lang.OutOfMemoryError: Java heap space&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.UnsafeRow.copy(UnsafeRow.java:502)&lt;BR /&gt;at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.add(ExternalAppendOnlyUnsafeRowArray.scala:108)&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Job configuration details:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;conf = {&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"app_name": "CX360",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.yarn.queue": "CXMT",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.port.maxRetries": 500,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.driver.memoryOverhead": 4096,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.executor.memoryOverhead": "14g",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.driver.memory": "50g",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.driver.maxResultSize": 0,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.executor.memory": "50g",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.executor.instances": 2,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.executor.cores": 5,&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"spark.driver.cores": 5&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;I have tried different values but still face the issue. Please find the YARN queue capacity details:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Queue Name : CXMT&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Queue State : running&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Scheduling Info : Capacity: 8.0, MaximumCapacity: 8.0, CurrentCapacity: 0.8620696&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Please do the needful as soon as possible.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Thanks&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Oct 2021 12:28:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/327736#M230132</guid>
      <dc:creator>Kallem</dc:creator>
      <dc:date>2021-10-15T12:28:50Z</dc:date>
    </item>
    <item>
      <title>Re: Java heap space issues while running spark jobs</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/327777#M230144</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Is it only this Spark job that fails with the OOM error? What were the initial executor and driver memory values that you tried?&lt;/P&gt;&lt;P&gt;Can you also try increasing num-executors and executor-cores, rerun the job, and see if it works?&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Chethan YM&lt;/P&gt;</description>
      <pubDate>Sat, 16 Oct 2021 03:22:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/327777#M230144</guid>
      <dc:creator>ChethanYM</dc:creator>
      <dc:date>2021-10-16T03:22:31Z</dc:date>
    </item>
    <item>
      <title>Re: Java heap space issues while running spark jobs</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/328519#M230269</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/91514"&gt;@Kallem&lt;/a&gt;&amp;nbsp;Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/75213"&gt;@ChethanYM&lt;/a&gt;&amp;nbsp;has requested?&lt;/P&gt;</description>
      <pubDate>Thu, 21 Oct 2021 17:29:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Java-heap-space-issues-while-running-spark-jobs/m-p/328519#M230269</guid>
      <dc:creator>VidyaSargur</dc:creator>
      <dc:date>2021-10-21T17:29:53Z</dc:date>
    </item>
  </channel>
</rss>
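<!--
A note on the numbers in this thread (not from the original posts): YARN sizes each executor container as spark.executor.memory plus spark.executor.memoryOverhead, so the configuration above asks YARN for roughly 64 GB per executor container. A minimal sketch of that arithmetic, assuming the documented Spark default overhead of max(384 MB, 10% of executor memory) when no overhead is set:

```python
def container_size_gb(executor_memory_gb, overhead_gb=None):
    """Approximate the YARN container size for one Spark executor.

    When spark.executor.memoryOverhead is unset, Spark defaults it to
    max(384 MB, 10% of spark.executor.memory).
    """
    if overhead_gb is None:
        overhead_gb = max(0.375, 0.10 * executor_memory_gb)
    return executor_memory_gb + overhead_gb

# The thread's settings: 50 GB heap plus 14 GB overhead per executor
print(container_size_gb(50, 14))  # 64 GB requested from YARN per container
```

If the queue cannot grant two 64 GB containers, YARN can terminate executors (consistent with the RECEIVED SIGNAL TERM line in the log). A common alternative worth testing is more executors with smaller heaps, e.g. several 8 to 16 GB executors instead of two 50 GB ones, which also spreads the shuffle data behind the ExternalAppendOnlyUnsafeRowArray allocation across more JVMs.
-->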

