<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How to get SparkContext in executor in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-to-get-SparkContext-in-executor/m-p/114171#M76969</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/420/vjain.html" nodeid="420"&gt;@Vedant Jain&lt;/A&gt;, &lt;A rel="user" href="https://community.cloudera.com/users/452/bwalter.html" nodeid="452"&gt;@Bernhard Walter&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Well, maybe my question was misleading; let me elaborate.&lt;/P&gt;&lt;PRE&gt;val textFile = sc.textFile("hdfs://...")
val counts = textFile.flatMap(line =&amp;gt; line.split(" "))
                 .map(word =&amp;gt; (word, 1))
                 .reduceByKey(_ + _) 
counts.saveAsTextFile("hdfs://...")&lt;/PRE&gt;&lt;P&gt;This is a simple word-count program. The code is given to the driver program, which builds the DAG and stages and hands tasks to the respective worker nodes, where the actual computation happens.&lt;/P&gt;&lt;P&gt;Now, let's look at the first line of the program. An RDD is generated from the file (SparkContext implements the textFile() function, which generates an RDD from a file). The file resides on a worker node, and we need to get the RDD out of that worker node.&lt;/P&gt;&lt;P&gt;To achieve that, the worker node (or executor) needs to have the SparkContext, doesn't it?&lt;/P&gt;&lt;P&gt;My question is: how does the executor get the SparkContext?&lt;/P&gt;</description>
    <pubDate>Tue, 22 Nov 2016 11:16:01 GMT</pubDate>
    <dc:creator>gobi_subramani</dc:creator>
    <dc:date>2016-11-22T11:16:01Z</dc:date>
  </channel>
</rss>

