This project shows how to analyze an HBase Snapshot using Spark.
The main motivation for this code is to reduce the load on HBase Region Servers while analyzing HBase records. By taking a snapshot of the HBase table, we can run Spark jobs against the snapshot instead of the live table, eliminating the impact on region servers and reducing the risk to operational systems.
At a high-level, here's what the code is doing:
Reads an HBase snapshot into a Spark RDD
Parses the HBase KeyValues into a Spark DataFrame
Applies arbitrary data processing (the example filters on timestamp and rowkey)
Saves the results back to HDFS in HBase (HFile / KeyValue) format, using HFileOutputFormat
The output format maintains the original rowkey, timestamp, column family, qualifier, and value structure.
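The parsing and filtering steps above operate on HBase cells, each carrying a rowkey, column family, qualifier, timestamp, and value. As a minimal, dependency-free sketch of that filtering logic (the project itself runs this inside Spark against real KeyValues; the `Cell` type, field names, and filter bounds below are illustrative assumptions, not the project's API):

```python
from typing import List, NamedTuple

class Cell(NamedTuple):
    """One HBase cell, mirroring the KeyValue structure."""
    rowkey: bytes
    family: bytes
    qualifier: bytes
    timestamp: int  # HBase timestamps are epoch milliseconds
    value: bytes

def filter_cells(cells: List[Cell], min_ts: int, max_ts: int,
                 rowkey_prefix: bytes) -> List[Cell]:
    """Keep cells whose timestamp falls in [min_ts, max_ts)
    and whose rowkey starts with the given prefix."""
    return [c for c in cells
            if min_ts <= c.timestamp < max_ts
            and c.rowkey.startswith(rowkey_prefix)]

cells = [
    Cell(b"user1", b"cf", b"name", 1000, b"alice"),
    Cell(b"user2", b"cf", b"name", 2000, b"bob"),
    Cell(b"admin1", b"cf", b"name", 1500, b"carol"),
]
kept = filter_cells(cells, 900, 1600, b"user")
# kept now holds only the user1 cell: user2 fails the timestamp
# bound, admin1 fails the rowkey prefix
```

In the Spark job the same predicate would be applied per cell (e.g. inside a `filter` over the parsed DataFrame), with the timestamp window and rowkey prefix supplied as job parameters.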
From here, you can bulk-load the HFiles from HDFS into HBase.
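Once the HFiles are in HDFS, HBase's standard bulk-load tool can move them into a live table. A sketch of the invocation, where the output path and table name are placeholders for your own values (on recent HBase versions the same tool is also exposed as `hbase completebulkload`):

```shell
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
    /output/hfiles my_table
```

Bulk loading moves the completed HFiles directly into the table's region directories, so it avoids the write path (WAL and memstore) entirely.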