<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Phoenix driver not found in Spark job in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Phoenix-driver-not-found-in-Spark-job/m-p/147511#M110050</link>
    <description>&lt;P&gt;One last point: the driver code upstream of the snippet above connects to Phoenix successfully during app startup. The code above is where we query Phoenix with the SQL shown to pull rows and kick off an RDD per row returned. It seems we enter a different context in the call to sparkContext.newAPIHadoopRDD() and the foreach(rdd -&amp;gt; ...), and the stack trace gives me the impression that the failure is (duh) somewhere between the driver and the executors, which are the ones trying to instantiate the Phoenix JDBC driver.&lt;/P&gt;&lt;P&gt;In other parts of the code I had to add a Class.forName("org.apache.phoenix.jdbc.PhoenixDriver") call to get rid of this exception, so I added that call before creating the Java Spark Context, before the call to newAPIHadoopRDD(), and at the start of the foreach(rdd -&amp;gt; ...), but to no avail.&lt;/P&gt;</description>
    <pubDate>Tue, 04 Jul 2017 07:41:58 GMT</pubDate>
    <dc:creator>jeff_watson</dc:creator>
    <dc:date>2017-07-04T07:41:58Z</dc:date>
  </channel>
</rss>