
The node /hbase is not in ZooKeeper. It should have been written by the master

New Contributor

Hi,

I'm trying to write to Phoenix from Spark, but I'm getting an exception saying:

"ERROR org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master."

Environment Info:

Hadoop Environment: HDP 2.3.4

HBase Version: 1.1.2

Phoenix Version: 4.4.0

After installing Phoenix I executed "./psql.py zookeeperhost1,zookeeperhost2,zookeeperhost3:2181 ~/phoenix/us_population.sql ~/phoenix/us_population.csv ~/phoenix/us_population_queries.sql" and it worked fine.

Here is my Spark Code:

import org.apache.spark._
import org.apache.phoenix.spark._

object PhoenixSample {
  def main(args: Array[String]) {
    // args: 0 = Spark master URL, 1 = app name, 2 = Phoenix table, 3 = ZooKeeper URL
    val conf = new SparkConf().setMaster(args(0)).setAppName(args(1))
    //conf.set("zookeeper.znode.parent", "/hbase-unsecure") //Tried this as well, same error.
    val sc = new SparkContext(conf)

    val dataSet = List(("MI", "Holland", 100), ("MI", "Detroit", 200), ("MI", "Cleave Land", 300))
    sc.parallelize(dataSet)
      .saveToPhoenix(
        args(2),
        Seq("STATE", "CITY", "POPULATION"),
        zkUrl = Some(args(3))
      )
  }
}
1 ACCEPTED SOLUTION

Master Guru

In an unkerberized HDP cluster the HBase znode is

/hbase-unsecure

and once the cluster is kerberized it changes to

/hbase-secure
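
A quick way to see which of these znodes actually exists is to list the ZooKeeper root from one of your ZooKeeper hosts; a sketch, assuming the stock HDP client location:

# List the ZooKeeper root and look for /hbase, /hbase-unsecure or /hbase-secure
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zookeeperhost1:2181 ls /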

In this question the poster did the same thing and fixed it by appending the znode to the zkUrl:

"zkUrl","sandbox:2181:/hbase-unsecure",

https://community.hortonworks.com/questions/18228/phoenix-hbase-problem-with-hdp-234-and-java.html

I doubt adding it to the Spark config helps anything (only parameters prefixed with "spark." get serialized, for example).

sqlline needed the /hbase-unsecure suffix before, but in the newest version it seems to take the znode from hbase-site.xml if not otherwise configured.

You can check in your hbase-site.xml which znode is needed.
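
For example, a minimal check, assuming the usual HDP config location (the path may differ on your cluster):

# Print the configured parent znode (the <value> line follows the match)
grep -A 1 zookeeper.znode.parent /etc/hbase/conf/hbase-site.xml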


3 REPLIES

New Contributor

Hi,

I missed adding the spark-submit command I used. Here it is:

cur_dir=`pwd`
spark_master_url="local[2]"
app_name=PhoenixInsert
phoenix_table_name=US_POPULATION
zookeeper_url="zookeeperhost1,zookeeperhost2,zookeeperhost3:2181"

supporting_jars=/usr/hdp/2.3.4.0-3485/phoenix/lib/antlr-3.5.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/calcite-avatica-1.2.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/calcite-avatica-server-1.2.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-codec-1.7.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-configuration-1.6.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-csv-1.0.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-io-2.4.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-lang-2.6.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/commons-logging-1.2.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/guava-12.0.1.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-annotations.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-auth.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-hdfs.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-mapreduce-client-core.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-yarn-api.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hadoop-yarn-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-client.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-common.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-it.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-protocol.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/hbase-testing-util.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/jackson-core-asl-1.8.8.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/jackson-mapper-asl-1.8.8.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/log4j-1.2.17.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/netty-3.6.2.Final.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-core-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-flume-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-pig-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-runnable.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-server-client-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-tests.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/phoenix-spark-4.4.0.2.3.4.0-3485-test-sources.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/protobuf-java-2.5.0.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/slf4j-api-1.6.4.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/slf4j-log4j12-1.7.10.jar,/usr/hdp/2.3.4.0-3485/phoenix/lib/zookeeper.jar

spark-submit --jars $supporting_jars --class "PhoenixSample" $cur_dir/SparkApps-1.0.jar $spark_master_url $app_name $phoenix_table_name $zookeeper_url
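
Per the accepted solution, the likely fix is to append the parent znode to the ZooKeeper URL passed as args(3); a sketch, assuming an unkerberized cluster whose hbase-site.xml sets zookeeper.znode.parent to /hbase-unsecure:

# Append the znode parent so the Phoenix client looks under /hbase-unsecure
zookeeper_url="zookeeperhost1,zookeeperhost2,zookeeperhost3:2181:/hbase-unsecure"

The rest of the spark-submit invocation stays the same.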


Contributor

@BIswajit Kundu: Were you able to resolve this issue?
