I'm running the LoadIncrementalHFiles command iteratively to import many HFile directories into an HBase table:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles -Dcreate.table=no <pathToReadFrom> my_table
# with <pathToReadFrom> = /user/testuser/data/hfiles_iteration_[0-299]
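For completeness, the iteration looks roughly like this (a sketch; the bash loop and the seq range are just my shorthand for the 300 runs):

    # Bulk-load each HFile directory into the existing table, one run per directory
    for i in $(seq 0 299); do
      hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
        -Dcreate.table=no \
        /user/testuser/data/hfiles_iteration_${i} \
        my_table
    done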
This seems to fill up the cached memory on the worker nodes of my Hadoop cluster; see this screenshot (the increase begins and stops with the start and end of my application).
Does HBase just need some time to process the cached data, or am I doing something wrong here? Can I somehow "manually" empty this cache?
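The only manual option I'm aware of would be dropping the Linux page cache on the worker nodes directly, assuming the "cached" memory in the screenshot really is the OS page cache (filled by all the HDFS reads/writes) and not something HBase-internal:

    # Flush dirty pages to disk, then drop page cache, dentries and inodes
    # (run as root on a worker node)
    sync
    echo 3 > /proc/sys/vm/drop_caches

But I'd rather understand whether that is necessary at all, since as far as I know the kernel reclaims cached pages on its own under memory pressure.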
I'm running an HDP 2.6 cluster with HBase 1.1.2 on it.