
Spark and HBase


Contributor

I am trying to write to HBase from a Spark Streaming job.

I get the following exception when calling:

table.put(put);

java.lang.NullPointerException

at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)

I can see that the configuration values are not correct. I tried to set them, but I don't know exactly which values to set.

I am running on Azure with the Hortonworks sandbox, launching the job via spark-submit with --jars supplying all the jars needed to run (Spark version 1.6).

The streaming is running perfectly, except that I can't write to HBase.

Here is my code to put data:

class Adapter extends Serializable {
  var _conf = new HBaseConfiguration()
  val _admin = new HBaseAdmin(_conf)

  def PutData(tableName: String, columnFamily: String, key: String, data: String) {
    // _conf = HBaseConfiguration.create().asInstanceOf[HBaseConfiguration]
    _conf.addResource("//etc//hbase//conf//hbase-site.xml")
    _conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
    _conf.setInt("hbase.zookeeper.clientport", 2181);
    println(_conf.get("hbase.zookeeper.quorum"));
    println(_conf.get("hbase.zookeeper.clientport"));
    val table = new HTable(_conf, tableName)
    val put = new Put(Bytes.toBytes(key));
    val obj = JSON.parseFull(data);
    obj match {
      case Some(m: Map[String, Any]) =>
        m.map((r) => {
          val v = r._2.asInstanceOf[String];
          put.add(Bytes.toBytes(columnFamily), Bytes.toBytes(r._1), Bytes.toBytes(v))
          // println(r._1 + ":" + v)
        })
    }
    println("writing to HBase");
    table.put(put);
  }
}
1 ACCEPTED SOLUTION

Accepted Solutions

Re: Spark and HBase

@Avraha Zilberman

Try setting the ZooKeeper znode parent property to match your cluster configuration; it should help.

conf.set("zookeeper.znode.parent", "VALUE")

Thanks
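For context, a minimal sketch of where that property fits in the HBase client configuration. The property names are the standard HBase client ones; the host and znode values here are illustrative (they assume an unsecured HDP sandbox) and must match your cluster's hbase-site.xml:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration

// Sketch, assuming Hortonworks sandbox defaults; adjust values to your cluster.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com")
conf.setInt("hbase.zookeeper.property.clientPort", 2181)
// The znode under which HBase keeps its state in ZooKeeper. A wrong or
// missing value is a common cause of NullPointerExceptions coming out of
// ZooKeeperWatcher. Note the single leading slash.
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
```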

7 REPLIES


Re: Spark and HBase

@Avraha Zilberman

Why do your configuration paths have double slashes ("//") everywhere?

Re: Spark and HBase

Contributor

Hi, thank you for the answer.

I tried it, but it didn't help; I still get the same exception.

I put the following code:

_conf.set("zookeeper.znode.parent", "//hbase-unsecure") // copied the value from the configuration file

Re: Spark and HBase

Can you please remove one "/" from "//hbase-unsecure" and try again? @Avraha Zilberman

Re: Spark and HBase

Contributor

Solved, many thanks!

I will also try removing the extra slashes in addResource to see whether that fixes things as well.


Re: Spark and HBase

New Contributor

I believe it should be _conf.set("zookeeper.znode.parent", "/hbase-unsecure") — a single leading slash.
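Putting the thread's fixes together, here is a sketch of the corrected configuration and put logic. This is illustrative only: the hostname, file path, and znode assume a Hortonworks sandbox, and the table/column names ("my_table", "cf", "col") are placeholders; adjust everything to your cluster:

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HTable, Put}
import org.apache.hadoop.hbase.util.Bytes

// Sketch combining the fixes suggested in this thread (HBase 1.x / Spark 1.6 era API).
val conf = HBaseConfiguration.create()
// Single-slash filesystem path, loaded as a Path (addResource with a bare
// String is treated as a classpath resource, not a file path).
conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"))
conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com")
conf.setInt("hbase.zookeeper.property.clientPort", 2181) // standard property name
conf.set("zookeeper.znode.parent", "/hbase-unsecure")    // single leading slash

val table = new HTable(conf, "my_table") // placeholder table name
val put = new Put(Bytes.toBytes("row1"))
put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"))
table.put(put)
table.close()
```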