
What's the best practice to get data from HBase and form a DataFrame for Python/R?

Contributor

What's the best practice to get data from HBase and form a DataFrame for Python/R? If we want to use our pandas/R libraries, how do we get data from HBase and form a DataFrame automatically?

1 ACCEPTED SOLUTION


We have an experimental Spark HBase connector: https://github.com/zhzhan/shc

It has the following features:

  • First-class support for the DataFrame API
  • JSON-based catalog with rich data type support
  • Performant, scalable, enterprise-ready
  • Partition Pruning
  • Predicate Pushdown
  • Scan optimizations
  • Data Locality
  • Composite Rowkey
  • Leverage existing work in the HBase community

Please take a look at the README of the above project.

Also see the example at https://github.com/zhzhan/shc/blob/master/src/main/scala/org/apache/spark/sql/execution/datasources/...
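
For PySpark users, a minimal sketch of reading an HBase table through this connector could look like the following. This is an illustration only: it assumes the shc jar is on the Spark classpath (e.g. via --jars), and the table name "table1" and its columns are hypothetical, not from this thread.

    # Minimal PySpark sketch using the shc connector (assumptions noted above).
    import json

    # Illustrative catalog: maps HBase column families/qualifiers to
    # DataFrame columns; table and column names are hypothetical.
    catalog = json.dumps({
        "table": {"namespace": "default", "name": "table1"},
        "rowkey": "key",
        "columns": {
            "col0": {"cf": "rowkey", "col": "key", "type": "string"},
            "col1": {"cf": "cf1", "col": "col1", "type": "string"}
        }
    })

    # 'sqlContext' is predefined in the pyspark shell
    df = (sqlContext.read
          .options(catalog=catalog)
          .format("org.apache.spark.sql.execution.datasources.hbase")
          .load())

    df.show()
    pandas_df = df.toPandas()  # hand off to pandas once the result is small enough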


11 REPLIES


@Cui Lin I am not an R guy, but these links should give you a good starting point, depending on whether you want to use Revolution R, R, or Python.

rhbase tutorials -->

https://github.com/RevolutionAnalytics/RHadoop/wik...

http://www.odbms.org/2015/06/intro-to-hbase-via-r-...

http://radar.oreilly.com/2014/08/scaling-up-data-f...

pandas-hbase -->

https://github.com/livingstonese/pandas-hbase
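
If you only need rows in a pandas DataFrame without Spark, a small happybase sketch along these lines may be enough. It assumes the HBase Thrift server is running; the host, table, and column names are hypothetical.

    # Sketch: scan an HBase table via the Thrift gateway (happybase)
    # and build a pandas DataFrame from the rows.
    import happybase
    import pandas as pd

    connection = happybase.Connection('hbase-thrift-host')  # hypothetical host
    table = connection.table('mytable')                     # hypothetical table

    rows = []
    for row_key, data in table.scan(limit=1000):
        record = {'row_key': row_key.decode()}
        # happybase yields {b'cf:qualifier': b'value'}; decode for readability
        record.update({k.decode(): v.decode() for k, v in data.items()})
        rows.append(record)

    df = pd.DataFrame(rows)
    print(df.head())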

Contributor

It seems that the above can't satisfy all my needs. What's the best way to get data out of HBase and save it into files instead?

Master Mentor

@Cui Lin, you have a lot of different options: you can write a MapReduce program to dump data to files, you can use Pig, or you can use Python with happybase.
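
For the happybase option, dumping a table straight to a file can be as simple as the sketch below (same caveats as above: the Thrift server must be running, and the host and table names are hypothetical).

    # Sketch: dump an HBase table to a CSV file with happybase,
    # one line per cell (row key, column, value).
    import csv
    import happybase

    connection = happybase.Connection('hbase-thrift-host')
    table = connection.table('mytable')

    with open('dump.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        for row_key, data in table.scan():
            for column, value in data.items():
                writer.writerow([row_key.decode(), column.decode(), value.decode()])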

Contributor

I need to first run a query to select records based on time, and then dump the data into files or a DataFrame. happybase can't support queries, and its index has to be an integer. Could you point me to a MapReduce or Pig example?

Master Mentor

@Cui Lin I updated my response above with links to MapReduce examples. You will need to set up a scanner based on your criteria and then run MapReduce to write the data out to files. For Pig, here's an example of reading data from an HBase table; after that you just call STORE data INTO 'location' USING the storage of your choice.
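
If the time criterion is encoded in the row keys (a common HBase design), a happybase scan range gives you the time-based selection without MapReduce. This sketch assumes row keys begin with an event timestamp like 'YYYYMMDDHHMMSS'; that key layout is an assumption, not something from this thread.

    # Sketch: time-bounded scan, assuming timestamp-prefixed row keys.
    import happybase

    connection = happybase.Connection('hbase-thrift-host')  # hypothetical
    table = connection.table('mytable')                     # hypothetical

    start, stop = b'20151001000000', b'20151002000000'
    for row_key, data in table.scan(row_start=start, row_stop=stop):
        print(row_key, data)

    # If the time of interest lives in the HBase cell timestamps instead,
    # scan(timestamp=...) bounds the cell versions, and server-side filter
    # strings cover more arbitrary criteria.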

Contributor

Is there any example of getting data from HBase using Spark on Hortonworks? MapR and Cloudera have some packages like this; I'm not sure whether they would work on Hortonworks.

Master Mentor

There's work in progress on the Hortonworks side to make Spark and HBase work efficiently together; we're not publishing anything until we can support it.
