I want to run Spark as the execution engine for Hive instead of MapReduce. I am using the CDH 5.4 QuickStart VM.
Can I just do something like: SET hive.execution.engine=spark; ?
You should first enable the hive.enable.spark.execution.engine property in Cloudera Manager.
Please read the document below.
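Once that property is enabled and the Spark service is running, you can switch the engine per session from the Hive shell or Beeline. A minimal sketch (property names as in CDH 5.4 / Apache Hive; verify them against your version's documentation):

```sql
-- Switch the current session to the Spark engine
SET hive.execution.engine=spark;

-- Run your query as usual; it should launch Spark jobs instead of MapReduce
-- SELECT count(*) FROM some_table;

-- Switch back to MapReduce for this session if needed
SET hive.execution.engine=mr;
```

A SET statement only affects the current session; to make it the default for all sessions, set hive.execution.engine in hive-site.xml (or via the corresponding Cloudera Manager configuration) instead.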
I tried Hive on Spark on my QuickStart VM yesterday. It worked. :)
I am trying to follow the procedure posted on the Cloudera website, but in the Configuration tab I cannot find any parameter containing the word "Spark". I updated CDH to version 5.4.2 and still nothing.