<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How to get SparkContext in executor in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-to-get-SparkContext-in-executor/m-p/114173#M76971</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12833/gobisubramani.html" nodeid="12833"&gt;@Gobi Subramani&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You are looking at this the wrong way. The SparkContext is the main entry point into Spark: it represents the connection to a Spark cluster and is used to create RDDs, accumulators, and other distributed resources on that cluster. Spark can run in cluster mode or in local mode, and you select the mode through the configuration you pass when creating the SparkContext. The workers do not receive the SparkContext per se; rather, if you package your program into a jar, the cluster manager copies that jar to the workers before it allocates tasks to them.
&lt;/P&gt;</description>
    <pubDate>Tue, 22 Nov 2016 23:15:45 GMT</pubDate>
    <dc:creator>vjain</dc:creator>
    <dc:date>2016-11-22T23:15:45Z</dc:date>
  </channel>
</rss>

