Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant

TableNotFoundException in Storm-HBASE

New Member

Hello,

I am trying to create an HBase bolt.

Here is the code:

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.hbase.bolt.HBaseBolt;
import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

Config config = new Config();
config.setDebug(true);
config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);

// HBase client settings, passed to the bolt under the "HBCONFIG" key
Map<String, Object> HBConf = new HashMap<String, Object>();
HBConf.put("hbase.rootdir", "hdfs://localhost:8020/apps/hbase/data");
HBConf.put("hbase.zookeeper.property.clientPort", "2181");
HBConf.put("hbase.master", "localhost:60000");

config.put("HBCONFIG", HBConf);

// Map tuple fields to the HBase row key and column
SimpleHBaseMapper mapper = new SimpleHBaseMapper()
        .withRowKeyField("nome")
        .withColumnFields(new Fields("cognome"))
        .withColumnFamily("cf");

HBaseBolt hbase = new HBaseBolt("WordCount", mapper).withConfigKey("HBCONFIG");

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("word-spout", new WordGenerator());
builder.setBolt("pre-hive", new PrepareTuple()).shuffleGrouping("word-spout");
builder.setBolt("hbase-bolt", hbase).shuffleGrouping("pre-hive");

LocalCluster cluster = new LocalCluster();
cluster.submitTopology("HelloStorm", config, builder.createTopology());

The HBase bolt should write to the existing table WordCount; I can see the table in the output of the hbase shell list command.

When I run my topology, I get the following error:

8019 [Thread-14-hbase-bolt-executor[2 2]] ERROR o.a.h.h.c.AsyncProcess - Cannot get replica 0 location for {"totalColumns":1,"row":"Storm apache","families":{"cf":[{"qualifier":"cognome","vlen":7,"tag":[],"timestamp":9223372036854775807}]}}
org.apache.hadoop.hbase.TableNotFoundException: WordCount
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1264) ~[hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162) ~[hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:958) [hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866) [hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$100(AsyncProcess.java:584) [hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:566) [hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:906) [hbase-client-1.1.2.2.5.0.0-1245.jar:1.1.2.2.5.0.0-1245]
        at org.apache.storm.hbase.common.HBaseClient.batchMutate(HBaseClient.java:101) [storm-hbase-1.0.2.jar:1.0.2]
        at org.apache.storm.hbase.bolt.HBaseBolt.execute(HBaseBolt.java:96) [storm-hbase-1.0.2.jar:1.0.2]
        at org.apache.storm.daemon.executor$fn__6571$tuple_action_fn__6573.invoke(executor.clj:734) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.daemon.executor$mk_task_receiver$fn__6492.invoke(executor.clj:469) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.disruptor$clojure_handler$reify__6005.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.daemon.executor$fn__6571$fn__6584$fn__6637.invoke(executor.clj:853) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]


Where could the error be?

Thanks

1 ACCEPTED SOLUTION

Super Guru

@Giuseppe Mento

Could you please try adding this property to HBConf and see if it helps? Update <hbase_zk_node> according to your configuration; by default it should be /hbase if the cluster is not secure.

HBConf.put("zookeeper.znode.parent", "<hbase_zk_node>");
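For reference, a minimal sketch of what the HBConf map from the question looks like with this property added. The class name HBaseConfExample and the /hbase value are only illustrative; substitute the znode parent actually configured in your cluster's hbase-site.xml.

```java
import java.util.HashMap;
import java.util.Map;

public class HBaseConfExample {

    // Build the client configuration map handed to the HBaseBolt
    // via config.put("HBCONFIG", ...), as in the original topology.
    public static Map<String, Object> buildHbConf() {
        Map<String, Object> HBConf = new HashMap<String, Object>();
        HBConf.put("hbase.rootdir", "hdfs://localhost:8020/apps/hbase/data");
        HBConf.put("hbase.zookeeper.property.clientPort", "2181");
        HBConf.put("hbase.master", "localhost:60000");
        // The znode under which HBase registers itself in ZooKeeper.
        // "/hbase" is the usual default on a non-secure cluster; if your
        // cluster uses a different parent znode, the client will not find
        // the table and raises TableNotFoundException.
        HBConf.put("zookeeper.znode.parent", "/hbase");
        return HBConf;
    }

    public static void main(String[] args) {
        System.out.println(buildHbConf().get("zookeeper.znode.parent"));
    }
}
```

If the znode parent the client assumes does not match the one in hbase-site.xml, the client looks up table locations under the wrong ZooKeeper path, which explains a TableNotFoundException even though the table is visible in the hbase shell.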
