<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Tutorial Exercise 3 - not found: value sc in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Tutorial-Exercise-3-not-found-value-sc/m-p/78744#M3473</link>
    <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/28686"&gt;@rupertlssmith&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;How you initialize sc depends on how you are executing your code. If you are using the spark-shell command line, you don't need to initialize sc, as it is initialized by default when the shell starts. But if you are developing and executing your code in a third-party tool, you have to initialize it yourself as follows.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Add the lines below before you call rddFromParquetHdfsFile:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;import org.apache.spark.SparkConf&lt;BR /&gt;import org.apache.spark.SparkContext&lt;/P&gt;&lt;P&gt;val conf = new SparkConf().setAppName("your app name").setMaster("yarn-client")&lt;BR /&gt;val sc = new SparkContext(conf)&lt;/P&gt;</description>
    <pubDate>Tue, 21 Aug 2018 11:32:28 GMT</pubDate>
    <dc:creator>saranvisa</dc:creator>
    <dc:date>2018-08-21T11:32:28Z</dc:date>
  </channel>
</rss>