04-27-2015 01:51 PM - edited 04-27-2015 01:54 PM
I want to run Spark as the execution engine instead of MapReduce. I am using the CDH 5.4 VM.
Can I do something like: set hive.execution.engine=spark; ?
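For reference, the execution engine can be switched per session from the Hive shell, or made permanent in hive-site.xml. A sketch, assuming Hive on Spark is already enabled in your CDH deployment:

```xml
<!-- hive-site.xml: make Spark the default execution engine -->
<!-- (session-level alternative: run "set hive.execution.engine=spark;" in the Hive shell) -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
  <description>Use Spark instead of MapReduce for Hive queries.</description>
</property>
```

Note this only selects the engine; the Spark service itself must be installed and configured as a dependency of Hive first.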
04-27-2015 04:32 PM
You should enable the hive.enable.spark.execution.engine property.
Please read the document below.
I tried hive on Spark on my Quickstart VM yesterday. It worked. :)
06-03-2015 12:40 AM
I am trying to follow the procedure posted on the Cloudera website, but in the Configuration tab I cannot find any parameter containing the word "Spark". I updated CDH to version 5.4.2 and still nothing.