<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Issue on running spark application in Yarn-cluster mode in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Issue-on-running-spark-application-in-Yarn-cluster-mode/m-p/351288#M236211</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/98709"&gt;@shraddha&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you please check whether you have set the master to local while creating the SparkSession in your code?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Use the following sample code to run both locally and on the cluster without changing the master value.&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val appName = "MySparkApp"

// Creating the SparkConf object; setIfMissing applies "local[2]" only when no
// master was supplied externally (e.g. via spark-submit --master yarn)
val sparkConf = new SparkConf().setAppName(appName).setIfMissing("spark.master", "local[2]")

// Creating the SparkSession object
val spark: SparkSession = SparkSession.builder().config(sparkConf).getOrCreate()&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Review the full logs once again to check whether there are any other errors.&lt;/P&gt;</description>
    <pubDate>Thu, 01 Sep 2022 04:29:49 GMT</pubDate>
    <dc:creator>RangaReddy</dc:creator>
    <dc:date>2022-09-01T04:29:49Z</dc:date>
    <item>
      <title>Issue on running spark application in Yarn-cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Issue-on-running-spark-application-in-Yarn-cluster-mode/m-p/346055#M234731</link>
      <description>&lt;P&gt;The code does not have any jar files; I have provided the Python folders as a zip and am using the following command to run the code.&amp;nbsp;&lt;/P&gt;&lt;P&gt;spark2-submit --queue abc &lt;STRONG&gt;--master yarn --deploy-mode cluster&lt;/STRONG&gt; --num-executors 5 --executor-cores 5 --executor-memory 20G --driver-memory 5g --conf spark.yarn.executor.memoryOverhead=4096 --conf spark.sql.shuffle.partitions=400 --conf spark.driver.maxResultSize=0 --conf spark.scheduler.mode=FAIR --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.kryoserializer.buffer.max=512m --conf spark.executor.heartbeatInterval=100 --conf spark.sql.autoBroadcastJoinThreshold=-1 --conf spark.sql.broadcastTimeout=-1 --py-files /abc/python/dependencies.zip,/abc/python/modules.zip /abc/python/main.py&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Following is the error:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Exit code: 13&lt;BR /&gt;Shell output: main : command provided 1&lt;BR /&gt;main : run as user is ***&lt;BR /&gt;main : requested yarn user is***&lt;BR /&gt;Getting exit code file...&lt;BR /&gt;Creating script paths...&lt;BR /&gt;Writing pid file...&lt;BR /&gt;Writing to tmp file /&lt;BR /&gt;Writing to cgroup task files...&lt;BR /&gt;Creating local dirs...&lt;BR /&gt;Launching container...&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;[2022-06-21 07:29:57.254]Container exited with a non-zero exit code 13. Error file: prelaunch.err.&lt;BR /&gt;Last 4096 bytes of prelaunch.err :&lt;BR /&gt;Last 4096 bytes of stderr :&lt;BR /&gt;22/06/21 07:29:53 INFO util.SignalUtils: Registered signal handler for TERM&lt;BR /&gt;22/06/21 07:29:53 INFO util.SignalUtils: Registered signal handler for HUP&lt;BR /&gt;22/06/21 07:29:53 INFO util.SignalUtils: Registered signal handler for INT&lt;BR /&gt;22/06/21 07:29:54 WARN spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. 
Please use the new key 'spark.executor.memoryOverhead' instead.&lt;BR /&gt;22/06/21 07:29:54 INFO spark.SecurityManager: Changing view acls to: ****&lt;BR /&gt;22/06/21 07:29:54 INFO spark.SecurityManager: Changing modify acls to: ***&lt;BR /&gt;22/06/21 07:29:54 INFO spark.SecurityManager: Changing view acls groups to:&lt;BR /&gt;22/06/21 07:29:54 INFO spark.SecurityManager: Changing modify acls groups to:&lt;BR /&gt;22/06/21 07:29:54 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls enabled; users with view permissions: Set(***, *); groups with view permissions: Set(); users with modify permissions: Set(***); groups with modify permissions: Set()&lt;BR /&gt;22/06/21 07:29:54 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1653193227336_217585_000002&lt;BR /&gt;22/06/21 07:29:54 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread&lt;BR /&gt;22/06/21 07:29:54 INFO yarn.ApplicationMaster: Waiting for spark context initialization...&lt;BR /&gt;22/06/21 07:29:54 WARN spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. 
Please use the new key 'spark.executor.memoryOverhead' instead.&lt;BR /&gt;22/06/21 07:29:55 ERROR yarn.ApplicationMaster: User application exited with status 1&lt;BR /&gt;22/06/21 07:29:55 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: User application exited with status 1)&lt;BR /&gt;22/06/21 07:29:55 ERROR yarn.ApplicationMaster: Uncaught exception:&lt;BR /&gt;org.apache.spark.SparkException: Exception thrown in awaitResult:&lt;BR /&gt;at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)&lt;BR /&gt;at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:448)&lt;BR /&gt;at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:276)&lt;BR /&gt;at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:821)&lt;BR /&gt;at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:820)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jun 2022 13:58:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Issue-on-running-spark-application-in-Yarn-cluster-mode/m-p/346055#M234731</guid>
      <dc:creator>shraddha</dc:creator>
      <dc:date>2022-06-21T13:58:33Z</dc:date>
    </item>
    <item>
      <title>Re: Issue on running spark application in Yarn-cluster mode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Issue-on-running-spark-application-in-Yarn-cluster-mode/m-p/351288#M236211</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/98709"&gt;@shraddha&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you please check whether you have set the master to local while creating the SparkSession in your code?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Use the following sample code to run both locally and on the cluster without changing the master value.&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val appName = "MySparkApp"

// Creating the SparkConf object; setIfMissing applies "local[2]" only when no
// master was supplied externally (e.g. via spark-submit --master yarn)
val sparkConf = new SparkConf().setAppName(appName).setIfMissing("spark.master", "local[2]")

// Creating the SparkSession object
val spark: SparkSession = SparkSession.builder().config(sparkConf).getOrCreate()&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Review the full logs once again to check whether there are any other errors.&lt;/P&gt;</description>
      <pubDate>Thu, 01 Sep 2022 04:29:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Issue-on-running-spark-application-in-Yarn-cluster-mode/m-p/351288#M236211</guid>
      <dc:creator>RangaReddy</dc:creator>
      <dc:date>2022-09-01T04:29:49Z</dc:date>
    </item>
  </channel>
</rss>

