Does SHC (Spark HBase Connector), provided by Hortonworks, support bulk load? From the source code I could see it uses saveAsNewAPIHadoopDataset to save the HFile directly. But when I run a test example I can see that it writes to the memstore and WAL, and that compactions do happen.
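For context, this is roughly the contrast I mean — the SHC-style DataFrame write versus an explicit bulk load. This is only a sketch; names like `myCatalog`, `myTable`, and the `/tmp/hfiles` output path are placeholders I made up, and the RDD is assumed to already be sorted by row key:

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{HFileOutputFormat2, LoadIncrementalHFiles}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// SHC-style write: despite the saveAsNewAPIHadoopDataset call in the source,
// this appears to go through the normal HBase client write path
// (Puts -> memstore + WAL), which would explain the compactions I observed.
df.write
  .options(Map(HBaseTableCatalog.tableCatalog -> myCatalog))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()

// True bulk load: write HFiles with HFileOutputFormat2, then hand them
// to the region servers with LoadIncrementalHFiles, bypassing memstore/WAL.
val job = Job.getInstance(hbaseConf)
HFileOutputFormat2.configureIncrementalLoad(job, myTable, regionLocator)
sortedKeyValueRdd // RDD[(ImmutableBytesWritable, KeyValue)], sorted by row key
  .saveAsNewAPIHadoopFile("/tmp/hfiles",
    classOf[ImmutableBytesWritable], classOf[KeyValue],
    classOf[HFileOutputFormat2], job.getConfiguration)
new LoadIncrementalHFiles(hbaseConf)
  .doBulkLoad(new Path("/tmp/hfiles"), admin, myTable, regionLocator)
```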
The other question is about write locality, assuming it does support bulk load. Does the connector ensure that the Spark executor gets launched on the region server where the HBase data is supposed to be written, so that the HFile write would be local?