<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How do I identify Spark 2.3.1 installed on HDP 3.0 is working properly? in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218114#M180015</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/18672/girish-khole.html" nodeid="18672"&gt;@Girish Khole&lt;/A&gt;&lt;/P&gt;&lt;P&gt;How did you install the Spark client on the node that is not part of the cluster? There are a few considerations when the node is not managed by Ambari, such as:&lt;/P&gt;&lt;P&gt;1. The Spark client version should be the same as the one in the cluster.&lt;/P&gt;&lt;P&gt;2. You need to make sure all the configuration files for HDFS/YARN/Hive are copied from the cluster.&lt;/P&gt;&lt;P&gt;3. When you launch a client with a spark:// master URL, the job does not run on the cluster; it runs in standalone mode. To test the cluster you need to use --master yarn (which can be used with either the client or cluster deploy mode).&lt;/P&gt;&lt;P&gt;HTH&lt;/P&gt;&lt;P&gt;*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.&lt;/P&gt;</description>
    <pubDate>Fri, 10 Aug 2018 19:58:06 GMT</pubDate>
    <dc:creator>falbani</dc:creator>
    <dc:date>2018-08-10T19:58:06Z</dc:date>
    <item>
      <title>How do I identify Spark 2.3.1 installed on HDP 3.0 is working properly?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218113#M180014</link>
      <description>&lt;P&gt;I have installed an HDP 3.0 cluster on 5 nodes and installed Spark 2.3.1 as an Ambari service on one of the nodes. The node where Spark is installed is &lt;STRONG&gt;ser5.dev.local&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;I am trying to access this Spark installation from another system that is not part of the cluster, say &lt;STRONG&gt;cpu686.dev.local&lt;/STRONG&gt;, using pyspark in a Jupyter notebook. Please find the code below for reference:&lt;/P&gt;&lt;PRE&gt;import csv
import pyspark
from pyspark.sql import SQLContext
conf = pyspark.SparkConf().setMaster("spark://ser5.dev.local:7077").setAppName("SparkServer1").setAll([('spark.executor.memory', '16g'), ('spark.executor.cores', '8'), ('spark.cores.max', '8'), ('spark.driver.memory','16g')])
sc = pyspark.SparkContext(conf=conf)
rddFile = sc.textFile("Filterd_data.csv")
rddFile = rddFile.mapPartitions(lambda x: csv.reader(x))
rddFile.collect()
&lt;/PRE&gt;&lt;P&gt;Now, the connection itself is fine: the Spark context is created using the spark://ser5.dev.local:7077 URL, and the RDD rddFile is also created successfully. But when I run rddFile.collect(), it just keeps running: no output and no error. Even when we tried a CSV file with fewer than 10 records, it still kept running.&lt;/P&gt;&lt;P&gt;Is there any way I can configure Spark, or a master URL where I can check the running applications in Spark? When I click on the Spark UI in Ambari it opens the Spark History Server.&lt;/P&gt;&lt;P&gt;We also tried loading the CSV file from HDFS using the following code:&lt;/P&gt;&lt;PRE&gt;conf = pyspark.SparkConf().setMaster("spark://ser5.dev.local:7077").setAppName("SparkServer1").setAll([('spark.executor.memory', '16g'), ('spark.executor.cores', '8'), ('spark.cores.max', '8'), ('spark.driver.memory','16g')])
sc = pyspark.SparkContext(conf=conf)
sqlC = SQLContext(sc)
df = sqlC.read.csv("hdfs://ser2.dev.local:8020/UnusualTime/Filterd_data.csv")&lt;/PRE&gt;&lt;P&gt;The issue remains the same.&lt;/P&gt;&lt;P&gt;Note: I installed Spark using the following documentation:&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/installing-spark/content/installing_spark_using_ambari.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/installing-spark/content/installing_spark_using_ambari.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Aug 2018 13:48:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218113#M180014</guid>
      <dc:creator>girish_khole</dc:creator>
      <dc:date>2018-08-10T13:48:18Z</dc:date>
    </item>
    <item>
      <title>Re: How do I identify Spark 2.3.1 installed on HDP 3.0 is working properly?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218114#M180015</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/18672/girish-khole.html" nodeid="18672"&gt;@Girish Khole&lt;/A&gt;&lt;/P&gt;&lt;P&gt;How did you install the Spark client on the node that is not part of the cluster? There are a few considerations when the node is not managed by Ambari, such as:&lt;/P&gt;&lt;P&gt;1. The Spark client version should be the same as the one in the cluster.&lt;/P&gt;&lt;P&gt;2. You need to make sure all the configuration files for HDFS/YARN/Hive are copied from the cluster.&lt;/P&gt;&lt;P&gt;3. When you launch a client with a spark:// master URL, the job does not run on the cluster; it runs in standalone mode. To test the cluster you need to use --master yarn (which can be used with either the client or cluster deploy mode), as sketched below.&lt;/P&gt;
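&lt;P&gt;For example, from cpu686.dev.local you could try something along these lines in the notebook. This is only a rough sketch: /etc/hadoop/conf is a placeholder, point HADOOP_CONF_DIR at whatever directory holds the core-site.xml, hdfs-site.xml and yarn-site.xml you copied from the cluster.&lt;/P&gt;&lt;PRE&gt;import os
from pyspark.sql import SparkSession

# Placeholder path: the directory holding the client configs copied from the cluster
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"

# Use yarn as the master (instead of spark://...:7077) so the job runs on the cluster
spark = SparkSession.builder \
    .master("yarn") \
    .appName("SparkServer1") \
    .config("spark.executor.memory", "16g") \
    .config("spark.executor.cores", "8") \
    .getOrCreate()

# The same HDFS read you tried, now going through YARN
df = spark.read.csv("hdfs://ser2.dev.local:8020/UnusualTime/Filterd_data.csv")
df.show(5)&lt;/PRE&gt;&lt;P&gt;You can then confirm the application is actually on the cluster with "yarn application -list" or in the YARN ResourceManager UI.&lt;/P&gt;&lt;P&gt;HTH&lt;/P&gt;&lt;P&gt;*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.&lt;/P&gt;</description>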
      <pubDate>Fri, 10 Aug 2018 19:58:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218114#M180015</guid>
      <dc:creator>falbani</dc:creator>
      <dc:date>2018-08-10T19:58:06Z</dc:date>
    </item>
    <item>
      <title>Re: How do I identify Spark 2.3.1 installed on HDP 3.0 is working properly?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218115#M180016</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/18672/girish-khole.html" nodeid="18672"&gt;@Girish Khole&lt;/A&gt;, did the above help?&lt;/P&gt;</description>
      <pubDate>Tue, 14 Aug 2018 01:44:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218115#M180016</guid>
      <dc:creator>falbani</dc:creator>
      <dc:date>2018-08-14T01:44:49Z</dc:date>
    </item>
    <item>
      <title>Re: How do I identify Spark 2.3.1 installed on HDP 3.0 is working properly?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218116#M180017</link>
      <description>&lt;P&gt;Thank you very much &lt;A rel="user" href="https://community.cloudera.com/users/11048/falbani.html" nodeid="11048"&gt;@Felix Albani&lt;/A&gt;. I copied yarn-site.xml, core-site.xml, and hdfs-site.xml to the standalone Spark instance, started Spark on HDP, and the connection was established successfully. The issue is resolved. Thanks.&lt;/P&gt;
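&lt;P&gt;For anyone hitting the same problem, a quick sanity check along these lines (a rough sketch; the HDFS path is from my earlier post, adjust it for your data) confirms that jobs really run on the cluster:&lt;/P&gt;&lt;PRE&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.master("yarn").appName("SmokeTest").getOrCreate()
sc = spark.sparkContext

# Should print "yarn" plus a YARN application id (also visible via "yarn application -list")
print(sc.master, sc.applicationId)

# A tiny job; if this returns a count, executors were allocated on the cluster
print(sc.parallelize(range(100)).count())

# Reading from HDFS confirms core-site.xml / hdfs-site.xml are being picked up
df = spark.read.csv("hdfs://ser2.dev.local:8020/UnusualTime/Filterd_data.csv")
print(df.count())&lt;/PRE&gt;</description>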
      <pubDate>Thu, 16 Aug 2018 12:42:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-do-I-identify-Spark-2-3-1-installed-on-HDP-3-0-is/m-p/218116#M180017</guid>
      <dc:creator>girish_khole</dc:creator>
      <dc:date>2018-08-16T12:42:17Z</dc:date>
    </item>
  </channel>
</rss>

