<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155295#M53031</link>
    <description>&lt;P&gt;No, this is absolutely not the problem. It does not guarantee in any manner that the Spark job will take the file into account. See the answer to @anatva for a proper solution. &lt;/P&gt;&lt;P&gt;Furthermore, my post indicates that the --files option is already used with the correct files passed.&lt;/P&gt;</description>
    <pubDate>Tue, 07 Feb 2017 19:56:06 GMT</pubDate>
    <dc:creator>samuel_sayag</dc:creator>
    <dc:date>2017-02-07T19:56:06Z</dc:date>
    <item>
      <title>Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155283#M53019</link>
      <description>&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11903-spark-shell.txt"&gt;spark-shell.txt&lt;/A&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am trying to execute a basic code using the &lt;A href="https://github.com/hortonworks-spark/shc"&gt;shc&lt;/A&gt; connector. It is a connector apparently provide by Hortonworks (in their github at least) that conveniently allows to insert/request data on HBase. So the code rework from the example of the project is building a Dataframe of fake data and try to insert it via the connector.&lt;/P&gt;&lt;P&gt;This is the hbase configuration: &lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11904-screenshot-from-2017-01-31-13-47-21.png"&gt;screenshot-from-2017-01-31-13-47-21.png&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The code is run under a spark shell which launching command line is the following:&lt;/P&gt;&lt;PRE&gt;spark-shell --master yarn \
            --deploy-mode client \
            --name "hive2hbase" \
            --repositories "http://repo.hortonworks.com/content/groups/public/" \
            --packages "com.hortonworks:shc-core:1.0.1-1.6-s_2.10" \
            --files "/usr/hdp/current/hbase-client/conf/hbase-site.xml,/usr/hdp/current/hive-client/conf/hive-site.xml" \
            --jars /usr/hdp/current/phoenix-client/phoenix-server.jar \
            --driver-memory 1G \
            --executor-memory 1500m \
            --num-executors 8&lt;/PRE&gt;&lt;P&gt;The Spark shell log tells me that it correctly loads the hbase-site.xml and hive-site.xml files. I also checked that the ZooKeeper quorum is correctly set in the HBase configuration. However, the ZooKeeper clients fail to connect because they try localhost:2181 (the default) instead of the addresses of the three ZooKeeper nodes.&lt;/P&gt;&lt;P&gt;As a consequence, it also fails to give me the HBase connection that is needed.&lt;/P&gt;&lt;P&gt;Note: I already tried deleting the HBase-related znode (/hbase-unsecure) from the ZooKeeper command line and restarting ZooKeeper so as to let it rebuild the znode, but this also failed.&lt;/P&gt;&lt;P&gt;Thanks for any help that may be provided.&lt;/P&gt;</description>
      <pubDate>Tue, 31 Jan 2017 20:08:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155283#M53019</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-01-31T20:08:08Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155284#M53020</link>
      <description>&lt;P&gt;Samuel,&lt;/P&gt;&lt;P&gt;Just for the sake of narrowing down the issue, can you add hbase-site.xml and hive-site.xml to SPARK_CLASSPATH and retry?&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 04:04:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155284#M53020</guid>
      <dc:creator>anatva</dc:creator>
      <dc:date>2017-02-01T04:04:10Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155285#M53021</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/12986/samuelsayag.html" nodeid="12986"&gt;@samuel sayag&lt;/A&gt;&lt;P&gt;The error you are getting is this&lt;/P&gt;&lt;P&gt;Unable to set watcher on znode (/hbase/hbaseid)&lt;/P&gt;&lt;P&gt;Is your zookeeper running? If yes, please share your hbase-site.xml.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 04:20:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155285#M53021</guid>
      <dc:creator>mqureshi</dc:creator>
      <dc:date>2017-02-01T04:20:01Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155286#M53022</link>
      <description>&lt;P&gt;
	I thought that's what I did &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt; What is the purpose of adding the files to spark-shell via the --files option, if not to add them to the Spark classpath?&lt;/P&gt;&lt;P&gt;
	&lt;CITE&gt;You said: "can you add the hbase-site.xml, hive-site.xml to SPARK_CLASSPATH and retry ?"&lt;/CITE&gt;&lt;/P&gt;&lt;P&gt;
	How do you do this?&lt;/P&gt;&lt;P&gt;Note: please see the next post for hive-site.xml and hbase-site.xml.&lt;/P&gt;&lt;P&gt;
	Many thanks for your answer.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 19:16:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155286#M53022</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-01T19:16:21Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155287#M53023</link>
      <description>&lt;P&gt;My ZooKeeper shows green in Ambari, and I am able to run hbase shell from the node where I launch spark-shell.&lt;/P&gt;&lt;P&gt;No problem. Here they are: &lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11950-hive-site.xml"&gt;hbase-site.xml&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/11950-hive-site.xml"&gt;hive-site.xml&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 19:22:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155287#M53023</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-01T19:22:59Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155288#M53024</link>
      <description>&lt;P&gt;--files adds them to the working directory of the YARN app master and containers, which means those files (and not jars) end up on the classpath of the app master and containers. But in client-mode jobs the main driver code runs on the client machine, so these --files are not available to the driver. SPARK_CLASSPATH adds these files to the driver classpath. It's an environment variable, so one could say the following. Note that Spark will warn that it is deprecated and cannot be used concurrently with the --driver-class-path option. More information can be found here.&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/hortonworks-spark/shc" target="_blank"&gt;https://github.com/hortonworks-spark/shc&lt;/A&gt;&lt;/P&gt;&lt;PRE&gt;export SPARK_CLASSPATH=/a/b/c/hbase-site.xml:/d/e/f/hive-site.xml&lt;/PRE&gt;</description>
      <pubDate>Thu, 02 Feb 2017 06:01:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155288#M53024</guid>
      <dc:creator>bikas</dc:creator>
      <dc:date>2017-02-02T06:01:11Z</dc:date>
    </item>
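The advice above can be sketched concretely. A minimal shell sketch, assuming the standard HDP client-config paths quoted earlier in the thread; note that Hadoop's Configuration mechanism loads hbase-site.xml as a classpath resource, so in practice the containing directories are what belong on the classpath:

```shell
# A minimal sketch, assuming the HDP default conf paths from the thread.
# Hadoop's Configuration loads hbase-site.xml as a classpath resource,
# so the containing directories (not the xml files) go on the path.
HBASE_CONF=/usr/hdp/current/hbase-client/conf
HIVE_CONF=/usr/hdp/current/hive-client/conf

# Classpath entries are colon-separated on Linux; a ';' would end the command.
export SPARK_CLASSPATH="$HBASE_CONF:$HIVE_CONF"
echo "$SPARK_CLASSPATH"
```

The non-deprecated equivalent is to pass the same directories to spark-shell via the --driver-class-path option instead; Spark refuses to honor both at once.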
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155289#M53025</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12986/samuelsayag.html" nodeid="12986"&gt;@samuel sayag&lt;/A&gt; &lt;/P&gt;&lt;P&gt;what is this script element in your hbase-site.xml and hive-site.xml. Can you please remove that and try it again?&lt;/P&gt;</description>
      <pubDate>Thu, 02 Feb 2017 07:10:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155289#M53025</guid>
      <dc:creator>mqureshi</dc:creator>
      <dc:date>2017-02-02T07:10:02Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155290#M53026</link>
      <description>&lt;P&gt;I don't see any script element in these files. What do you mean?&lt;/P&gt;</description>
      <pubDate>Thu, 02 Feb 2017 14:01:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155290#M53026</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-02T14:01:07Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155291#M53027</link>
      <description>&lt;P&gt;Samuel,&lt;/P&gt;&lt;P&gt;Have you tried explicitly exporting the hbase-site.xml to SPARK_CLASSPATH?&lt;/P&gt;&lt;P&gt;Your logs show that the HBase base znode is /hbase, whereas the hbase-site.xml shows that the base znode is /hbase-unsecure. This indicates that the Spark HBase connector is not looking at the correct hbase-site.xml.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Feb 2017 04:06:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155291#M53027</guid>
      <dc:creator>anatva</dc:creator>
      <dc:date>2017-02-03T04:06:02Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155292#M53028</link>
      <description>&lt;P&gt;I see the following in your hbase-site.xml when I open it.&lt;/P&gt;&lt;PRE&gt;&amp;lt;script data-x-lastpass=""&amp;gt;
(function(){var c=0;if("undefined"!==typeof CustomEvent&amp;amp;&amp;amp;"function"===typeof window.dispatchEvent){var a=function(a){try{if("object"===typeof a&amp;amp;&amp;amp;(a=JSON.stringify(a)),"string"===typeof a)return window.dispatchEvent(new CustomEvent("lprequeststart",{detail:{data:a,requestID:++c}})),c}catch(f){}},b=function(a){try{window.dispatchEvent(new CustomEvent("lprequestend",{detail:a}))}catch(f){}};"undefined"!==typeof XMLHttpRequest&amp;amp;&amp;amp;XMLHttpRequest.prototype&amp;amp;&amp;amp;XMLHttpRequest.prototype.send&amp;amp;&amp;amp;(XMLHttpRequest.prototype.send= function(c){return function(f){var d=this,e=a(f);e&amp;amp;&amp;amp;d.addEventListener("loadend",function(){b({requestID:e,statusCode:d.status})});return c.apply(d,arguments)}}(XMLHttpRequest.prototype.send));"function"===typeof fetch&amp;amp;&amp;amp;(fetch=function(c){return function(f,d){var e=a(d),g=c.apply(this,arguments);if(e){var h=function(a){b({requestID:e,statusCode:a&amp;amp;&amp;amp;a.status})};g.then(h)["catch"](h)}return g}}(fetch))}})(); (function(){if("undefined"!==typeof CustomEvent){var c=function(a){if(a.lpsubmit)return a;var b=function(){try{this.dispatchEvent(new CustomEvent("lpsubmit"))}catch(k){}return a.apply(this,arguments)};b.lpsubmit=!0;return b};window.addEventListener("DOMContentLoaded",function(){if(document&amp;amp;&amp;amp;document.forms&amp;amp;&amp;amp;0&amp;lt;document.forms.length)for(var a=0;a&amp;lt;document.forms.length;++a)document.forms[a].submit=c(document.forms[a].submit)},!0);document.createElement=function(a){return function(){var b=a.apply(this, arguments);b&amp;amp;&amp;amp;"FORM"===b.nodeName&amp;amp;&amp;amp;b.submit&amp;amp;&amp;amp;(b.submit=c(b.submit));return b}}(document.createElement)}})();
&amp;lt;/script&amp;gt;

&lt;/PRE&gt;</description>
      <pubDate>Fri, 03 Feb 2017 04:09:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155292#M53028</guid>
      <dc:creator>mqureshi</dc:creator>
      <dc:date>2017-02-03T04:09:43Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155293#M53029</link>
      <description>&lt;P&gt;I am going to try it. A production problem had kept me away from this issue until now. It is not closed, and I will try your advice. Thanks.&lt;/P&gt;</description>
      <pubDate>Sun, 05 Feb 2017 21:46:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155293#M53029</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-05T21:46:41Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155294#M53030</link>
      <description>&lt;P&gt;Hi Samuel,&lt;/P&gt;&lt;P&gt;You probably need to copy hbase-site.xml to the /etc/spark/conf folder.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2017 10:32:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155294#M53030</guid>
      <dc:creator>vjiang</dc:creator>
      <dc:date>2017-02-06T10:32:49Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155295#M53031</link>
      <description>&lt;P&gt;No, this is absolutely not the problem. It does not guarantee in any manner that the Spark job will take the file into account. See the answer to @anatva for a proper solution. &lt;/P&gt;&lt;P&gt;Furthermore, my post indicates that the --files option is already used with the correct files passed.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Feb 2017 19:56:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155295#M53031</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-07T19:56:06Z</dc:date>
    </item>
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155296#M53032</link>
      <description>&lt;P&gt;Thanks for your advice; it seems this was the problem. &lt;/P&gt;&lt;P&gt;As a test I ran the example of the shc connector &lt;A href="https://github.com/hortonworks-spark/shc/tree/master/examples/src/main/scala/org/apache/spark/sql/execution/datasources/hbase"&gt;here&lt;/A&gt; with --master yarn-cluster and with --master yarn-client, and this confirmed it: the quorum is found in the first case and not in the second. So Spark does not have the file on its classpath when working in client mode.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Feb 2017 20:00:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155296#M53032</guid>
      <dc:creator>samuel_sayag</dc:creator>
      <dc:date>2017-02-07T20:00:38Z</dc:date>
    </item>
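The cluster-mode/client-mode contrast described above can be sketched as two invocations. A hedged sketch: the conf path is the HDP default from earlier in the thread, my-shc-job.jar is a placeholder, and the commands are printed rather than executed so the difference is easy to compare.

```shell
# Sketch of the yarn-cluster vs yarn-client contrast; the jar name is a
# placeholder. Commands are echoed, not run.
CONF_DIR=/usr/hdp/current/hbase-client/conf

# Cluster mode: --files ships hbase-site.xml into the AM/executor working
# directories, which are on their classpath, so the real quorum is found.
echo "spark-submit --master yarn --deploy-mode cluster --files $CONF_DIR/hbase-site.xml my-shc-job.jar"

# Client mode: the driver runs on the local machine and never sees --files;
# the conf directory must additionally be on the driver classpath.
echo "spark-submit --master yarn --deploy-mode client --driver-class-path $CONF_DIR --files $CONF_DIR/hbase-site.xml my-shc-job.jar"
```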
    <item>
      <title>Re: Spark HBase Connector (SHC) job fails to connect to Zookeeper, causing connection failure to HBase</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155298#M53034</link>
      <description>&lt;P&gt;&lt;STRONG&gt;This is the code&lt;/STRONG&gt; &lt;/P&gt;&lt;P&gt;ubuntu@ip-10-0-2-24:~$ spark-shell --packages com.hortonworks:shc-core:1.1.0-2.1-s_2.11 --repositories &lt;A href="http://repo.hortonworks.com/content/groups/public/" target="_blank"&gt;http://repo.hortonworks.com/content/groups/public/&lt;/A&gt;
scala&amp;gt; import org.apache.spark.sql.{SQLContext, _}
import org.apache.spark.sql.{SQLContext, _} &lt;/P&gt;&lt;P&gt;scala&amp;gt;  import org.apache.spark.sql.execution.datasources.hbase._ &lt;/P&gt;&lt;P&gt;import org.apache.spark.sql.execution.datasources.hbase._ &lt;/P&gt;&lt;P&gt;scala&amp;gt;  import org.apache.spark.{SparkConf, SparkContext}&lt;/P&gt;&lt;P&gt;
import org.apache.spark.{SparkConf, SparkContext}&lt;/P&gt;&lt;P&gt;
scala&amp;gt;  import spark.sqlContext.implicits._ &lt;/P&gt;&lt;P&gt;import spark.sqlContext.implicits._ &lt;/P&gt;&lt;P&gt;scala&amp;gt; def catalog = s"""{
     |      |"table":{"namespace":"default", "name":"Contacts"},
     |      |"rowkey":"key",
     |      |"columns":{
     |      |"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
     |      |"officeAddress":{"cf":"Office", "col":"Address", "type":"string"},
     |      |"officePhone":{"cf":"Office", "col":"Phone", "type":"string"},
     |      |"personalName":{"cf":"Personal", "col":"Name", "type":"string"},
     |      |"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"}
     |      |}
     |  |}""".stripMargin
catalog: String &lt;/P&gt;&lt;P&gt;scala&amp;gt; def withCatalog(cat: String): DataFrame = {
     |          spark.sqlContext
     |          .read
     |          .options(Map(HBaseTableCatalog.tableCatalog-&amp;gt;cat))
     |          .format("org.apache.spark.sql.execution.datasources.hbase")
     |          .load()
     |      } &lt;/P&gt;&lt;P&gt;withCatalog: (cat: String)org.apache.spark.sql.DataFrame &lt;/P&gt;&lt;P&gt;scala&amp;gt; val df = withCatalog(catalog)
df: org.apache.spark.sql.DataFrame = [rowkey: string, officeAddress: string ... 3 more fields]&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;this is the error  &lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;scala&amp;gt; df.show &lt;/P&gt;&lt;P&gt;java.lang.RuntimeException: java.lang.NullPointerException
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
  at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
  at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
  at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
  at org.apache.hadoop.hbase.client.ClientScanner.&amp;lt;init&amp;gt;(ClientScanner.java:155)
  at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
  at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
  at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
  at org.apache.hadoop.hbase.client.MetaScanner.listTableRegionLocations(MetaScanner.java:343)
  at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:142)
  at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:118)
  at org.apache.spark.sql.execution.datasources.hbase.RegionResource$anonfun$1.apply(HBaseResources.scala:109)
  at org.apache.spark.sql.execution.datasources.hbase.RegionResource$anonfun$1.apply(HBaseResources.scala:108)
  at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.releaseOnException(HBaseResources.scala:77)
  at org.apache.spark.sql.execution.datasources.hbase.RegionResource.releaseOnException(HBaseResources.scala:88)
  at org.apache.spark.sql.execution.datasources.hbase.RegionResource.&amp;lt;init&amp;gt;(HBaseResources.scala:108)
  at org.apache.spark.sql.execution.datasources.hbase.HBaseTableScanRDD.getPartitions(HBaseTableScan.scala:61)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:252)
  at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:250)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:314)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$collectFromPlan(Dataset.scala:2861)
  at org.apache.spark.sql.Dataset$anonfun$head$1.apply(Dataset.scala:2150)
  at org.apache.spark.sql.Dataset$anonfun$head$1.apply(Dataset.scala:2150)
  at org.apache.spark.sql.Dataset$anonfun$55.apply(Dataset.scala:2842)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2841)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2150)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2363)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:241)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:637)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:596)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:605)
  ... 54 elided
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:395)
  at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:553)
  at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
  at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1185)
  at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1152)
  at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
  at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:151)
  at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
  at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
  ... 103 more&lt;/P&gt;</description>
      <pubDate>Fri, 21 Sep 2018 21:53:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Spark-HBase-Connector-SHC-job-fails-to-connect-to-Zookeeper/m-p/155298#M53034</guid>
      <dc:creator>RakeshPatra</dc:creator>
      <dc:date>2018-09-21T21:53:36Z</dc:date>
    </item>
  </channel>
</rss>

