
Spark job returns empty rows from HBase

Hi Community,

I'm running a basic Spark job which reads from an HBase table.

I can see the job completes without any errors, but the output contains only empty rows.

Any help is appreciated.

Below is my code

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

object objectName {
  def catalog = s"""{
         |"table":{"namespace":"namespaceName", "name":"tableName"},
         |"rowkey":"rowKeyAttribute",
         |"columns":{
           |"Key":{"cf":"rowkey", "col":"rowKeyAttribute", "type":"string"},
           |"col1":{"cf":"cfName", "col":"col1", "type":"bigint"},
           |"col2":{"cf":"cfName", "col":"col2", "type":"string"}
         |}
       |}""".stripMargin

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()

    val sc = spark.sparkContext
    val sqlContext = spark.sqlContext
    import sqlContext.implicits._

    def withCatalog(cat: String): DataFrame = {
      sqlContext
        .read
        .options(Map(HBaseTableCatalog.tableCatalog -> cat))
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .load()
    }

    val df = withCatalog(catalog)
    df.show()
  }
}


@vivek jain

I don't see any code making use of the withCatalog function. If this function is not being used, what is the expected output?

As an example, perhaps you could try adding something like this to show some of the content of the HBase table:

val df = withCatalog(catalog)
df.show()


*** If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.

Hi @Felix Albani, thanks for your response. Please accept my sincere apologies; I somehow missed including that part of the code. I have updated it now.

This is the output I see (please note that I have changed the number of columns in the code above, hence the difference):

+----+----+----+----+----+----+----+----+----+
|col4|col7|col1|col3|col6|col0|col8|col2|col5|
+----+----+----+----+----+----+----+----+----+
+----+----+----+----+----+----+----+----+----+

18/07/03 16:16:27 INFO CodeGenerator: Code generated in 10.60842 ms
18/07/03 16:16:27 INFO CodeGenerator: Code generated in 8.990531 ms

+----+
|col0|
+----+
+----+

@vivek jain

Please run the following from HBase shell:

hbase> scan 'tableName', {LIMIT => 5}

Also check what the describe table prints:

hbase> describe 'tableName'

Make sure you are using the case-sensitive table name when referencing it from your Spark code.
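To illustrate the point above: HBase resolves table names with an exact, case-sensitive match, so a name that differs only in case refers to a different (and possibly nonexistent) table. A minimal sketch in plain Scala, with hypothetical names:

```scala
object CaseCheck {
  // HBase treats "tableName" and "TableName" as two different tables,
  // so the catalog must match the HBase name exactly.
  def sameTable(a: String, b: String): Boolean = a == b // exact, case-sensitive comparison

  def main(args: Array[String]): Unit = {
    assert(sameTable("tableName", "tableName"))   // identical names match
    assert(!sameTable("tableName", "TableName"))  // differs only in case -> not the same table
  }
}
```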


Hi @Felix Albani I have checked these things already.

@vivek jain Could you try running through the full example steps, including table creation, and see if that works?

@Felix Albani I really wanted to try this too, but these libraries are not deployed on the cluster; instead, I build a dependencies jar and pass it to spark-submit.
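For reference, a hedged sketch of the kind of submit command being described here; the jar paths and class name are hypothetical placeholders, not taken from the thread:

```shell
# The connector classes travel in a separate "dependencies" jar,
# handed to spark-submit via --jars alongside the application jar.
spark-submit \
  --class objectName \
  --jars /path/to/dependencies.jar \
  /path/to/app.jar
```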

@Felix Albani You know what, I tried with a table in the default namespace and I'm able to view the data. It seems to work for tables without a namespace.

@Felix Albani

I just found that if I specify the table as "table":{"name":"namespace:tablename"} in the catalog, then it works. Thanks for your time.
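For anyone landing on this thread later, a minimal sketch of a catalog with the namespace folded into the table name as described above. The placeholder names (namespaceName, tableName, cfName, rowKeyAttribute) are hypothetical, carried over from this thread; only the "namespace:table" qualification in "name" is the actual fix reported:

```scala
object CatalogFix {
  // The namespace is qualified directly in "name" instead of a separate
  // "namespace" key, which is what made the read work in this thread.
  def catalog: String = s"""{
       |"table":{"name":"namespaceName:tableName"},
       |"rowkey":"rowKeyAttribute",
       |"columns":{
         |"Key":{"cf":"rowkey", "col":"rowKeyAttribute", "type":"string"},
         |"col1":{"cf":"cfName", "col":"col1", "type":"bigint"},
         |"col2":{"cf":"cfName", "col":"col2", "type":"string"}
       |}
     |}""".stripMargin

  def main(args: Array[String]): Unit = {
    // The namespace-qualified name should appear verbatim in the catalog JSON.
    assert(catalog.contains("\"name\":\"namespaceName:tableName\""))
    println(catalog)
  }
}
```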

@vivek jain Good to hear that. If you think the answer and follow-ups have helped, please take a moment to log in and mark the answer as "Accepted".
