<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>The node /hbase is not in ZooKeeper. It should have been written by the master - Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162333#M24751</link>
    <description>Archived support question: writing to Apache Phoenix from Spark on HDP 2.3.4 (HBase 1.1.2, Phoenix 4.4.0) fails with "The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'."</description>
    <pubDate>Fri, 08 Apr 2016 19:54:18 GMT</pubDate>
    <dc:creator>biswajit_kundu</dc:creator>
    <dc:date>2016-04-08T19:54:18Z</dc:date>
    <item>
      <title>The node /hbase is not in ZooKeeper. It should have been written by the master</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162333#M24751</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm trying to write to Phoenix from Spark, but I'm getting an exception saying:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;"ERROR org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master."&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Environment Info:&lt;/P&gt;&lt;P&gt;Hadoop Environment: HDP 2.3.4&lt;/P&gt;&lt;P&gt;HBase Version: 1.1.2&lt;/P&gt;&lt;P&gt;Phoenix Version: 4.4.0&lt;/P&gt;&lt;P&gt;After installing Phoenix I executed "&lt;STRONG&gt;./psql.py zookeeperhost1,zookeeperhost2,zookeeperhost3:2181 ~/phoenix/us_population.sql ~/phoenix/us_population.csv ~/phoenix/us_population_queries.sql&lt;/STRONG&gt;" and it worked fine.&lt;/P&gt;&lt;P&gt;Here is my Spark code:&lt;/P&gt;&lt;PRE&gt;import org.apache.spark._
import org.apache.spark.SparkContext
import org.apache.phoenix.spark._
object PhoenixSample {
   def main(args: Array[String]) {
    val conf = new SparkConf().setMaster(args(0)).setAppName(args(1))
    //conf.set("zookeeper.znode.parent", "/hbase-unsecure") //Tried this as well, same error.
    val sc = new SparkContext(conf)
    
    val dataSet = List(("MI", "Holland", 100), ("MI", "Detroit", 200), ("MI", "Cleveland", 300))
    sc.parallelize(dataSet)
      .saveToPhoenix(
        args(2),
        Seq("STATE", "CITY", "POPULATION"),
        zkUrl = Some(args(3))
      )
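    // Note (added suggestion, not part of the original post): if the cluster's
    // znode parent is not the default /hbase (for example /hbase-unsecure on an
    // unsecured HDP install), phoenix-spark accepts it appended to the zkUrl,
    // so the fourth program argument could be passed as:
    //   zookeeperhost1,zookeeperhost2,zookeeperhost3:2181:/hbase-unsecure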
   }
}
&lt;/PRE&gt;</description>
      <pubDate>Fri, 08 Apr 2016 19:54:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162333#M24751</guid>
      <dc:creator>biswajit_kundu</dc:creator>
      <dc:date>2016-04-08T19:54:18Z</dc:date>
    </item>
    <item>
      <title>Re: The node /hbase is not in ZooKeeper. It should have been written by the master</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162334#M24752</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I missed adding the spark-submit command I used. Here it is:&lt;/P&gt;&lt;PRE&gt;cur_dir=`pwd`
spark_master_url="local[2]"
app_name=PhoenixInsert
phoenix_table_name=US_POPULATION
zookeeper_url="zookeeperhost1,zookeeperhost2,zookeeperhost3:2181"
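# Note (added suggestion, not part of the original post): if the cluster's znode
# parent is not the default /hbase (for example /hbase-unsecure on an unsecured
# HDP install), it can be appended to the ZooKeeper URL, e.g.:
#   zookeeper_url="zookeeperhost1,zookeeperhost2,zookeeperhost3:2181:/hbase-unsecure"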

supporting_jars=/usr/hdp/2.3.4.0-3485/phoenix/lib/antlr-3.5.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/calcite-avatica-1.2.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/calcite-avatica-server-1.2.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-codec-1.7.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-configuration-1.6.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-csv-1.0.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-io-2.4.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-lang-2.6.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-logging-1.2.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/guava-12.0.1.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-annotations.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-auth.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-hdfs.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-mapreduce-client-core.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-yarn-api.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-yarn-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-client.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-it.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-protocol.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-testing-util.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/jackson-core-asl-1.8.8.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/jackson-mapper-asl-1.8.8.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/log4j-1.2.17.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/netty-3.6.2.Final.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.\
4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-runnable.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/protobuf-java-2.5.0.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/slf4j-api-1.6.4.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/slf4j-log4j12-1.7.10.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/zookeeper.jar

spark-submit --jars "$supporting_jars" --class "PhoenixSample" "$cur_dir/SparkApps-1.0.jar" "$spark_master_url" "$app_name" "$phoenix_table_name" "$zookeeper_url"


&lt;/PRE&gt;</description>
      <pubDate>Fri, 08 Apr 2016 20:00:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162334#M24752</guid>
      <dc:creator>biswajit_kundu</dc:creator>
      <dc:date>2016-04-08T20:00:07Z</dc:date>
    </item>
    <item>
      <title>Re: The node /hbase is not in ZooKeeper. It should have been written by the master</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162335#M24753</link>
      <description>&lt;P&gt;In a non-Kerberized HDP cluster the HBase znode is&lt;/P&gt;&lt;P&gt;/hbase-unsecure&lt;/P&gt;&lt;P&gt;and on a Kerberized cluster it changes to&lt;/P&gt;&lt;P&gt;/hbase-secure&lt;/P&gt;&lt;P&gt;In this question the poster hit the same problem and fixed it by adding the znode to the URL:&lt;/P&gt;&lt;P&gt;"zkUrl","sandbox:2181:/hbase-unsecure",&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.hortonworks.com/questions/18228/phoenix-hbase-problem-with-hdp-234-and-java.html" target="_blank"&gt;https://community.hortonworks.com/questions/18228/phoenix-hbase-problem-with-hdp-234-and-java.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I doubt adding it to the Spark config helps anything (for example, only parameters prefixed with "spark." get serialized).&lt;/P&gt;&lt;P&gt;sqlline needed the /hbase-unsecure suffix before, but in the newest version it seems to take the znode from hbase-site.xml if not otherwise configured.&lt;/P&gt;&lt;P&gt;You can check your hbase-site.xml to see which znode is needed.&lt;/P&gt;</description>
      <pubDate>Fri, 08 Apr 2016 20:39:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162335#M24753</guid>
      <dc:creator>bleonhardi</dc:creator>
      <dc:date>2016-04-08T20:39:02Z</dc:date>
    </item>
    <item>
      <title>Re: The node /hbase is not in ZooKeeper. It should have been written by the master</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162336#M24754</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/4488/biswajitkundu.html" nodeid="4488"&gt;@BIswajit Kundu&lt;/A&gt; : Were you able to resolve this issue?&lt;/P&gt;</description>
      <pubDate>Mon, 03 Apr 2017 22:53:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/The-node-hbase-is-not-in-ZooKeeper-It-should-have-been/m-p/162336#M24754</guid>
      <dc:creator>dhavalmodi24</dc:creator>
      <dc:date>2017-04-03T22:53:23Z</dc:date>
    </item>
  </channel>
</rss>

