<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21794#M3798</link>
    <description>&lt;P&gt;Hi srowen, thanks for the reply. I discovered the problem: it was related to memory limits in the YARN configuration. Now I can run my job, but I still have a doubt:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have four nodes on the cluster, but the job is running on only two of them. Why is that? I'm firing the job from the hadoop1 shell, and the job is running only on nodes 3 and 4. Node 2 became the driver. How can I make better use of my resources?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here are my YARN instances:&lt;/P&gt;&lt;P&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/739i9E1288ABA8C5A823/image-size/original?v=mpbl-1&amp;amp;px=-1" border="0" alt="yarn instances.png" title="yarn instances.png" align="middle" /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And here is my job running:&lt;/P&gt;&lt;P&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/740i19D528E36695DEDD/image-size/original?v=mpbl-1&amp;amp;px=-1" border="0" alt="spark execution.png" title="spark execution.png" align="middle" /&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 19 Nov 2014 10:40:41 GMT</pubDate>
    <dc:creator>Vitor</dc:creator>
    <dc:date>2014-11-19T10:40:41Z</dc:date>
    <item>
      <title>Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21758#M3794</link>
      <description>&lt;P&gt;I'm following this example using Java:&lt;/P&gt;
&lt;P&gt;&lt;A href="http://blog.cloudera.com/blog/2014/04/how-to-run-a-simple-apache-spark-app-in-cdh-5/" target="_blank" rel="noopener"&gt;http://blog.cloudera.com/blog/2014/04/how-to-run-a-simple-apache-spark-app-in-cdh-5/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Everything works fine when using the "--master local" option, but I'm getting an error when trying to run on YARN ("--master yarn").&lt;/P&gt;
&lt;P&gt;I'm running a CDH 5.2.0 cluster with 4 VMs (8GB RAM on the master node and 2GB on each of the others).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Am I running the job the correct way to have it run across all four nodes? What's the difference when using "--master spark://IP:Port", and what is the default port to run it that way?&lt;/P&gt;
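&lt;P&gt;For reference, these are the invocation forms I'm comparing (same jar, class, and arguments as the run below; the standalone form is my guess at the syntax, and 7077 is only the default standalone master port):&lt;/P&gt;
&lt;PRE&gt;# local mode: driver and executors in a single JVM on this machine
spark-submit --class testesVitor.JavaWordCounter --master "local[*]" sparkwordcount-0.0.1-SNAPSHOT.jar /user/vitor/Posts.xml 2

# YARN client mode: driver runs here, executors run in YARN containers
spark-submit --class testesVitor.JavaWordCounter --master yarn sparkwordcount-0.0.1-SNAPSHOT.jar /user/vitor/Posts.xml 2

# standalone mode: requires a running Spark standalone master (not YARN)
spark-submit --class testesVitor.JavaWordCounter --master spark://hadoop1.example.com:7077 sparkwordcount-0.0.1-SNAPSHOT.jar /user/vitor/Posts.xml 2&lt;/PRE&gt;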
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is the full console output when trying to run on YARN:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;root@hadoop1:~# spark-submit --class testesVitor.JavaWordCounter --master yarn sparkwordcount-0.0.1-SNAPSHOT.jar /user/vitor/Posts.xml 2 &amp;gt; output.txt&lt;BR /&gt;SLF4J: Class path contains multiple SLF4J bindings.&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/lib/spark/assembly/lib/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]&lt;BR /&gt;SLF4J: See &lt;A href="http://www.slf4j.org/codes.html#multiple_bindings" target="_blank" rel="noopener"&gt;http://www.slf4j.org/codes.html#multiple_bindings&lt;/A&gt; for an explanation.&lt;BR /&gt;SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]&lt;BR /&gt;14/11/18 16:26:49 INFO SecurityManager: Changing view acls to: root&lt;BR /&gt;14/11/18 16:26:49 INFO SecurityManager: Changing modify acls to: root&lt;BR /&gt;14/11/18 16:26:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)&lt;BR /&gt;14/11/18 16:26:51 INFO Slf4jLogger: Slf4jLogger started&lt;BR /&gt;14/11/18 16:26:51 INFO Remoting: Starting remoting&lt;BR /&gt;14/11/18 16:26:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@hadoop1.example.com:58545]&lt;BR /&gt;14/11/18 16:26:52 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@hadoop1.example.com:58545]&lt;BR /&gt;14/11/18 16:26:52 INFO Utils: Successfully started service 'sparkDriver' on port 58545.&lt;BR /&gt;14/11/18 16:26:52 INFO SparkEnv: Registering MapOutputTracker&lt;BR /&gt;14/11/18 16:26:52 INFO SparkEnv: Registering BlockManagerMaster&lt;BR /&gt;14/11/18 16:26:52 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20141118162652-0ff3&lt;BR /&gt;14/11/18 16:26:52 INFO Utils: Successfully started service 'Connection manager for block manager' on port 46763.&lt;BR /&gt;14/11/18 16:26:52 INFO ConnectionManager: Bound socket to port 46763 with id = ConnectionManagerId(hadoop1.example.com,46763)&lt;BR /&gt;14/11/18 16:26:52 INFO MemoryStore: MemoryStore started with capacity 267.3 MB&lt;BR /&gt;14/11/18 16:26:52 INFO BlockManagerMaster: Trying to register BlockManager&lt;BR /&gt;14/11/18 16:26:52 INFO BlockManagerMasterActor: Registering block manager hadoop1.example.com:46763 with 267.3 MB RAM&lt;BR /&gt;14/11/18 16:26:52 INFO BlockManagerMaster: Registered BlockManager&lt;BR /&gt;14/11/18 16:26:52 INFO HttpFileServer: HTTP File server directory is /tmp/spark-cfde3cf0-024a-47db-b97d-374710b989fc&lt;BR /&gt;14/11/18 16:26:52 INFO HttpServer: Starting HTTP Server&lt;BR /&gt;14/11/18 16:26:52 INFO Utils: Successfully started service 'HTTP file server' on port 40252.&lt;BR /&gt;14/11/18 16:26:54 INFO Utils: Successfully started service 'SparkUI' on port 4040.&lt;BR /&gt;14/11/18 16:26:54 INFO SparkUI: Started SparkUI at &lt;A href="http://hadoop1.example.com:4040" target="_blank" rel="noopener"&gt;http://hadoop1.example.com:4040&lt;/A&gt;&lt;BR /&gt;14/11/18 16:27:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable&lt;BR /&gt;14/11/18 16:27:00 INFO EventLoggingListener: Logging events to hdfs://hadoop1.example.com:8020/user/spark/applicationHistory/spark-count-1416335217999&lt;BR /&gt;14/11/18 16:27:01 INFO SparkContext: Added JAR file:/root/sparkwordcount-0.0.1-SNAPSHOT.jar at &lt;A href="http://192.168.56.101:40252/jars/sparkwordcount-0.0.1-SNAPSHOT.jar" target="_blank" rel="noopener"&gt;http://192.168.56.101:40252/jars/sparkwordcount-0.0.1-SNAPSHOT.jar&lt;/A&gt; with timestamp 1416335221103&lt;BR /&gt;14/11/18 16:27:01 INFO RMProxy: Connecting to ResourceManager at hadoop1.example.com/192.168.56.101:8032&lt;BR /&gt;14/11/18 16:27:02 INFO Client: Got cluster metric info from ResourceManager, number of NodeManagers: 3&lt;BR /&gt;14/11/18 16:27:02 INFO Client: Max mem capabililty of a single resource in this cluster 1029&lt;BR /&gt;14/11/18 16:27:02 INFO Client: Preparing Local resources&lt;BR /&gt;14/11/18 16:27:02 INFO Client: Uploading file:/usr/lib/spark/assembly/lib/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar to hdfs://hadoop1.example.com:8020/user/root/.sparkStaging/application_1415718283355_0004/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar&lt;BR /&gt;14/11/18 16:27:08 INFO Client: Prepared Local resources Map(__spark__.jar -&amp;gt; resource { scheme: "hdfs" host: "hadoop1.example.com" port: 8020 file: "/user/root/.sparkStaging/application_1415718283355_0004/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar" } size: 95567637 timestamp: 1416335228534 type: FILE visibility: PRIVATE)&lt;BR /&gt;14/11/18 16:27:08 INFO Client: Setting up the launch environment&lt;BR /&gt;14/11/18 16:27:08 INFO Client: Setting up container launch context&lt;BR /&gt;14/11/18 16:27:08 INFO Client: Yarn AM launch context:&lt;BR /&gt;14/11/18 16:27:08 INFO Client:&amp;nbsp;&amp;nbsp; class:&amp;nbsp;&amp;nbsp; org.apache.spark.deploy.yarn.ExecutorLauncher&lt;BR /&gt;14/11/18 16:27:08 INFO Client:&amp;nbsp;&amp;nbsp; env:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Map(CLASSPATH -&amp;gt; $PWD:$PWD/__spark__.jar:$HADOOP_CLIENT_CONF_DIR:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_HDFS_HOME/*:$HADOOP_HDFS_HOME/lib/*:$HADOOP_YARN_HOME/*:$HADOOP_YARN_HOME/lib/*:$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*:$MR2_CLASSPATH:$PWD/__app__.jar:$PWD/*, SPARK_YARN_CACHE_FILES_FILE_SIZES -&amp;gt; 95567637, SPARK_YARN_STAGING_DIR -&amp;gt; .sparkStaging/application_1415718283355_0004/, SPARK_YARN_CACHE_FILES_VISIBILITIES -&amp;gt; PRIVATE, SPARK_USER -&amp;gt; root, SPARK_YARN_MODE -&amp;gt; true, SPARK_YARN_CACHE_FILES_TIME_STAMPS -&amp;gt; 1416335228534, SPARK_YARN_CACHE_FILES -&amp;gt; hdfs://hadoop1.example.com:8020/user/root/.sparkStaging/application_1415718283355_0004/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar#__spark__.jar)&lt;BR /&gt;14/11/18 16:27:08 INFO Client:&amp;nbsp;&amp;nbsp; command: $JAVA_HOME/bin/java -server -Xmx512m -Djava.io.tmpdir=$PWD/tmp '-Dspark.tachyonStore.folderName=spark-ea602029-5871-4097-b72f-d2bd46c74054' '-Dspark.yarn.historyServer.address=&lt;A href="http://hadoop1.example.com:18088'" target="_blank" rel="noopener"&gt;http://hadoop1.example.com:18088'&lt;/A&gt; '-Dspark.eventLog.enabled=true' '-Dspark.yarn.secondary.jars=' '-Dspark.driver.host=hadoop1.example.com' '-Dspark.driver.appUIHistoryAddress=&lt;A href="http://hadoop1.example.com:18088/history/spark-count-1416335217999'" target="_blank" rel="noopener"&gt;http://hadoop1.example.com:18088/history/spark-count-1416335217999'&lt;/A&gt; 
'-Dspark.app.name=Spark Count' '-Dspark.driver.appUIAddress=hadoop1.example.com:4040' '-Dspark.jars=file:/root/sparkwordcount-0.0.1-SNAPSHOT.jar' '-Dspark.fileserver.uri=&lt;A href="http://192.168.56.101:40252'" target="_blank" rel="noopener"&gt;http://192.168.56.101:40252'&lt;/A&gt; '-Dspark.eventLog.dir=hdfs://hadoop1.example.com:8020/user/spark/applicationHistory' '-Dspark.master=yarn-client' '-Dspark.driver.port=58545' org.apache.spark.deploy.yarn.ExecutorLauncher --class 'notused' --jar&amp;nbsp; null&amp;nbsp; --arg&amp;nbsp; 'hadoop1.example.com:58545' --executor-memory 1024 --executor-cores 1 --num-executors&amp;nbsp; 2 1&amp;gt; &amp;lt;LOG_DIR&amp;gt;/stdout 2&amp;gt; &amp;lt;LOG_DIR&amp;gt;/stderr&lt;BR /&gt;14/11/18 16:27:08 INFO SecurityManager: Changing view acls to: root&lt;BR /&gt;14/11/18 16:27:08 INFO SecurityManager: Changing modify acls to: root&lt;BR /&gt;14/11/18 16:27:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)&lt;BR /&gt;14/11/18 16:27:08 INFO Client: Submitting application to ResourceManager&lt;BR /&gt;14/11/18 16:27:08 INFO YarnClientImpl: Submitted application application_1415718283355_0004&lt;BR /&gt;14/11/18 16:27:09 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:10 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:11 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:12 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:13 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:14 INFO 
YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:15 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:16 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:17 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:18 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:19 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:20 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:21 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: -1&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: 
ACCEPTED&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:22 INFO YarnClientSchedulerBackend: Application report from ASM:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appMasterRpcPort: 0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; appStartTime: 1416335228936&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; yarnAppState: RUNNING&lt;BR /&gt;&lt;BR /&gt;14/11/18 16:27:31 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)&lt;BR /&gt;14/11/18 16:27:31 INFO MemoryStore: ensureFreeSpace(258371) called with curMem=0, maxMem=280248975&lt;BR /&gt;14/11/18 16:27:31 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 252.3 KB, free 267.0 MB)&lt;BR /&gt;14/11/18 16:27:31 INFO MemoryStore: ensureFreeSpace(20625) called with curMem=258371, maxMem=280248975&lt;BR /&gt;14/11/18 16:27:31 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.1 KB, free 267.0 MB)&lt;BR /&gt;14/11/18 16:27:31 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop1.example.com:46763 (size: 20.1 KB, free: 267.2 MB)&lt;BR /&gt;14/11/18 16:27:31 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0&lt;BR /&gt;14/11/18 16:27:31 INFO FileInputFormat: Total input paths to process : 1&lt;BR /&gt;14/11/18 16:27:31 INFO NetworkTopology: Adding a new node: /default/192.168.56.104:50010&lt;BR /&gt;14/11/18 16:27:31 INFO NetworkTopology: Adding a new node: /default/192.168.56.103:50010&lt;BR /&gt;14/11/18 16:27:31 INFO NetworkTopology: Adding a new node: /default/192.168.56.102:50010&lt;BR /&gt;14/11/18 16:27:32 INFO SparkContext: Starting job: collect at JavaWordCounter.java:84&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Registering RDD 3 (mapToPair at JavaWordCounter.java:30)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Registering RDD 7 (mapToPair at JavaWordCounter.java:68)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Got job 0 (collect at JavaWordCounter.java:84) with 228 output partitions (allowLocal=false)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Final stage: Stage 0(collect at JavaWordCounter.java:84)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Parents of final stage: List(Stage 2)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Missing parents: List(Stage 2)&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Submitting Stage 1 (MappedRDD[3] at mapToPair at JavaWordCounter.java:30), which has no missing parents&lt;BR /&gt;14/11/18 16:27:32 INFO MemoryStore: ensureFreeSpace(4096) called with curMem=278996, maxMem=280248975&lt;BR /&gt;14/11/18 16:27:32 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KB, free 267.0 MB)&lt;BR /&gt;14/11/18 16:27:32 INFO MemoryStore: ensureFreeSpace(2457) called with curMem=283092, maxMem=280248975&lt;BR /&gt;14/11/18 16:27:32 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.4 KB, free 267.0 MB)&lt;BR /&gt;14/11/18 16:27:32 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on hadoop1.example.com:46763 (size: 2.4 KB, free: 267.2 MB)&lt;BR /&gt;14/11/18 16:27:32 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0&lt;BR /&gt;14/11/18 16:27:32 INFO DAGScheduler: Submitting 228 missing tasks from Stage 1 (MappedRDD[3] at mapToPair at JavaWordCounter.java:30)&lt;BR /&gt;14/11/18 16:27:32 INFO 
YarnClientClusterScheduler: Adding task set 1.0 with 228 tasks&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved 192.168.56.104 to /default&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved 192.168.56.103 to /default&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved 192.168.56.102 to /default&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved hadoop2.example.com to /default&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved hadoop3.example.com to /default&lt;BR /&gt;14/11/18 16:27:32 INFO RackResolver: Resolved hadoop4.example.com to /default&lt;BR /&gt;14/11/18 16:27:36 ERROR YarnClientSchedulerBackend: Yarn application already ended: FAILED&lt;BR /&gt;14/11/18 16:27:36 INFO SparkUI: Stopped Spark web UI at &lt;A href="http://hadoop1.example.com:4040" target="_blank" rel="noopener"&gt;http://hadoop1.example.com:4040&lt;/A&gt;&lt;BR /&gt;14/11/18 16:27:36 INFO DAGScheduler: Stopping DAGScheduler&lt;BR /&gt;14/11/18 16:27:36 INFO YarnClientSchedulerBackend: Shutting down all executors&lt;BR /&gt;14/11/18 16:27:36 INFO YarnClientSchedulerBackend: Asking each executor to shut down&lt;BR /&gt;14/11/18 16:27:36 INFO YarnClientSchedulerBackend: Stopped&lt;BR /&gt;14/11/18 16:27:36 INFO DAGScheduler: Failed to run collect at JavaWordCounter.java:84&lt;BR /&gt;Exception in thread "main" org.apache.spark.SparkException: Job cancelled because SparkContext was shut down&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:694)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:693)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:693)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1399)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:201)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.actor.ActorCell.terminate(ActorCell.scala:338)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.dispatch.Mailbox.run(Mailbox.scala:218)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)&lt;BR /&gt;root@hadoop1:~#&lt;/P&gt;</description>
      <pubDate>Wed, 11 Dec 2019 05:04:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21758#M3794</guid>
      <dc:creator>Vitor</dc:creator>
      <dc:date>2019-12-11T05:04:39Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21760#M3795</link>
      <description>&lt;P&gt;This isn't the problem. It's just a symptom of the app failing for another reason:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;14/11/18 16:27:36 ERROR YarnClientSchedulerBackend: Yarn application already ended: FAILED&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You'd have to look at the actual app worker logs to see why it's failing.&lt;/P&gt;</description>
      <pubDate>Tue, 18 Nov 2014 18:54:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21760#M3795</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2014-11-18T18:54:14Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21761#M3796</link>
      <description>&lt;P&gt;Where can I get this log?&lt;/P&gt;</description>
      <pubDate>Tue, 18 Nov 2014 18:58:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21761#M3796</guid>
      <dc:creator>Vitor</dc:creator>
      <dc:date>2014-11-18T18:58:21Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21764#M3797</link>
      <description>&lt;P&gt;You're running on YARN, so you should see it listed as a FAILED application in the ResourceManager UI. Click through to the application and you can find the logs of the individual containers, which should show the underlying failure.&lt;/P&gt;
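&lt;P&gt;As a sketch (assuming log aggregation is enabled on the cluster), you can also pull all of the application's container logs from the command line once it has finished, using the application ID from your submit output:&lt;/P&gt;&lt;PRE&gt;yarn logs -applicationId application_1415718283355_0004&lt;/PRE&gt;</description>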
      <pubDate>Tue, 18 Nov 2014 20:06:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21764#M3797</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2014-11-18T20:06:31Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21794#M3798</link>
      <description>&lt;P&gt;Hi srowen, thanks for the reply. I discovered the problem: it was related to memory limits in the YARN configuration. Now I can run my job, but I still have a doubt:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have four nodes on the cluster, but the job is running on only two of them. Why is that? I'm firing the job from the hadoop1 shell, and the job is running only on nodes 3 and 4. Node 2 became the driver. How can I make better use of my resources?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here are my YARN instances:&lt;/P&gt;&lt;P&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/739i9E1288ABA8C5A823/image-size/original?v=mpbl-1&amp;amp;px=-1" border="0" alt="yarn instances.png" title="yarn instances.png" align="middle" /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And here is my job running:&lt;/P&gt;&lt;P&gt;&lt;IMG src="https://community.cloudera.com/t5/image/serverpage/image-id/740i19D528E36695DEDD/image-size/original?v=mpbl-1&amp;amp;px=-1" border="0" alt="spark execution.png" title="spark execution.png" align="middle" /&gt;&lt;/P&gt;
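&lt;P&gt;One clue I noticed: the launch context in my first post shows "--num-executors 2", which matches only two nodes getting executors. As a sketch (illustrative sizes for my 2GB worker VMs), I'm trying an explicit executor count:&lt;/P&gt;&lt;PRE&gt;spark-submit --class testesVitor.JavaWordCounter --master yarn \
  --num-executors 3 --executor-memory 512m --executor-cores 1 \
  sparkwordcount-0.0.1-SNAPSHOT.jar /user/vitor/Posts.xml 2&lt;/PRE&gt;</description>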
      <pubDate>Wed, 19 Nov 2014 10:40:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21794#M3798</guid>
      <dc:creator>Vitor</dc:creator>
      <dc:date>2014-11-19T10:40:41Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21797#M3799</link>
      <description>&lt;P&gt;I'll post this in a different thread.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Nov 2014 12:09:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/21797#M3799</guid>
      <dc:creator>Vitor</dc:creator>
      <dc:date>2014-11-19T12:09:07Z</dc:date>
    </item>
    <item>
      <title>Re: Getting "Job cancelled because SparkContext was shut down" when running a Job using YARN</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/285308#M3800</link>
      <description>&lt;P&gt;What were the memory limits you changed in the YARN configuration? Please post them; it would help me solve a similar issue in my application.&lt;/P&gt;
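&lt;P&gt;For context, these are the YARN memory settings I would guess were involved (my assumption only; the original poster hasn't confirmed which ones):&lt;/P&gt;&lt;PRE&gt;# yarn-site.xml (or the matching Cloudera Manager fields)
yarn.nodemanager.resource.memory-mb      # total memory a NodeManager offers to containers
yarn.scheduler.maximum-allocation-mb     # largest single container the scheduler will grant
yarn.scheduler.minimum-allocation-mb     # smallest container the scheduler will grant&lt;/PRE&gt;</description>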
      <pubDate>Wed, 11 Dec 2019 04:55:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Getting-quot-Job-cancelled-because-SparkContext-was-shut/m-p/285308#M3800</guid>
      <dc:creator>JavaSpark</dc:creator>
      <dc:date>2019-12-11T04:55:56Z</dc:date>
    </item>
  </channel>
</rss>