<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question ExecutorLostFailure Reason: Container killed by YARN for exceeding memory limits in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/ExecutorLostFailure-Reason-Container-killed-by-YARN-for/m-p/41761#M21242</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am using Cloudera 5.7.0 and running a Spark Streaming application that consumes from Kafka and performs some OpenCV operations.&lt;/P&gt;&lt;P&gt;Some of my containers are killed by YARN with the following reason:&lt;BR /&gt;ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 3.1 GB of 3 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead&lt;/P&gt;&lt;P&gt;I am using the following configuration:&lt;BR /&gt;spark-submit --num-executors 20 --executor-memory 2g --executor-cores 2 --conf spark.yarn.executor.memoryOverhead=1000&lt;/P&gt;&lt;P&gt;How can I solve this issue?&lt;/P&gt;&lt;P&gt;Regards,&lt;BR /&gt;Prateek&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 10:23:54 GMT</pubDate>
    <dc:creator>aroraprateek</dc:creator>
    <dc:date>2022-09-16T10:23:54Z</dc:date>
  </channel>
</rss>
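
A minimal sketch of one way to act on the error message's own suggestion, assuming Spark on YARN sizes each container as executor heap plus spark.yarn.executor.memoryOverhead (a value in MB): with --executor-memory 2g and an overhead of 1000 MB the container limit is about 3 GB, and the reported 3.1 GB usage exceeds it. OpenCV does its work through JNI in native, off-heap memory, which is what the overhead setting is meant to cover, so raising it is the natural first step. The 2048 MB figure and the jar name below are illustrative assumptions, not values from the original post.

spark-submit \
  --num-executors 20 \
  --executor-memory 2g \
  --executor-cores 2 \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  streaming-app.jar   # hypothetical application jar; substitute your own

With this change each executor container may use roughly 4 GB, leaving about 2 GB of headroom for OpenCV's native allocations. If containers are still killed, the overhead can be raised further, or --executor-cores lowered so fewer concurrent tasks share one executor's native memory.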