Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

Spark and HBase

Rising Star

I am trying to write to HBase from a Spark Streaming job.

I get the following exception when calling:

table.put(put);

java.lang.NullPointerException

at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)

I can see that the configuration is not set up correctly, but I don't know exactly which values to set.

I am running on Azure with the Hortonworks sandbox, submitting the job via spark-submit with --jars supplying all the jars needed to run (Spark version 1.6).

The streaming job runs perfectly except that I can't write to HBase.

Here is my code to put data:

class Adapter extends Serializable {
  var _conf = new HBaseConfiguration()
  val _admin = new HBaseAdmin(_conf)

  def PutData(tableName: String, columnFamily: String, key: String, data: String) {
    // _conf = HBaseConfiguration.create().asInstanceOf[HBaseConfiguration]
    _conf.addResource("//etc//hbase//conf//hbase-site.xml")
    _conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
    _conf.setInt("hbase.zookeeper.clientport", 2181);
    println(_conf.get("hbase.zookeeper.quorum"));
    println(_conf.get("hbase.zookeeper.clientport"));
    val table = new HTable(_conf, tableName)
    val put = new Put(Bytes.toBytes(key));
    val obj = JSON.parseFull(data);
    obj match {
      case Some(m: Map[String, Any]) =>
        m.map((r) => {
          val v = r._2.asInstanceOf[String];
          put.add(Bytes.toBytes(columnFamily), Bytes.toBytes(r._1), Bytes.toBytes(v))
          // println(r._1 + ":" + v)
        })
    }
    println("writing to HBase");
    table.put(put);
  }
}
1 ACCEPTED SOLUTION

Super Guru
@Avraha Zilberman

Try setting the zookeeper.znode.parent property to match your cluster configuration; it should help:

conf.set("zookeeper.znode.parent", "VALUE")

Thanks
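Concretely, the suggested fix might look like the sketch below. The values are assumptions to verify against your cluster's hbase-site.xml: "/hbase-unsecure" is the usual znode on an unsecured HDP sandbox, while many other clusters use "/hbase". Note also (if I recall the property name correctly) that the client port key is "hbase.zookeeper.property.clientPort"; an unknown key like "hbase.zookeeper.clientport" is silently ignored.

```scala
import org.apache.hadoop.hbase.HBaseConfiguration

// Sketch only: all property values below are assumptions for an HDP sandbox.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com")
conf.set("hbase.zookeeper.property.clientPort", "2181") // full property name
conf.set("zookeeper.znode.parent", "/hbase-unsecure")   // "/hbase" on secured/default clusters
```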


REPLIES


Super Guru

@Avraha Zilberman

Why do your confs have double slashes ("//") in the paths everywhere?
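For reference, the conventional single-slash form might look like the sketch below. One caveat worth verifying for your Hadoop version (this is an assumption, not something from the thread): Configuration.addResource(String) resolves the name on the classpath, so a filesystem location is usually passed as a Path instead.

```scala
import org.apache.hadoop.fs.Path

// Hypothetical single-slash form; wrapping in Path makes it an explicit filesystem location.
_conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"))
```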

Rising Star

Hi, thank you for the answer.

I tried it, but it didn't help; I get the same exception.

I added the following line:

_conf.set("zookeeper.znode.parent", "//hbase-unsecure") // copied the value from the configuration file

Super Guru

Can you please remove one "/" from "//hbase-unsecure" and try again? @Avraha Zilberman

Rising Star

Solved!

Many thanks. I will also try removing the double slashes in addResource to see if that fixes it as well.


New Member

I believe it should be _conf.set("zookeeper.znode.parent", "/hbase-unsecure") — with a single slash.