I tried to enable Hive on Spark according to this document. Everything works well, but only if I run "set hive.execution.engine=spark;" in the CLI or beeline for each session. If I restart the Hive client, it defaults back to MapReduce as the execution engine.
To persist the change, I tried adding the property to hive-site.xml by editing the advanced configuration snippet (hive-site.xml), but Hive's behavior did not change.
I also tried setting HIVE_HOME in the Environment Advanced Configuration Snippet, but that did not change anything either.
Is there a way to persist hive.execution.engine across sessions?
Whatever I tried, I could not get it to work from Cloudera Manager. Setting the parameter in the advanced configuration snippet had no effect, and neither did setting SPARK_HOME in the environment configuration snippet (of course I restarted the services each time).
I ended up editing the hive-site.xml file manually, as one would in plain Apache Hadoop, and that did the trick.
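For reference, this is the property I added to hive-site.xml by hand; hive.execution.engine=spark is the standard setting for Hive on Spark (the exact file path depends on your installation, so that part is an assumption on my side):

```xml
<!-- Added manually inside the <configuration> element of hive-site.xml -->
<!-- Sets Spark as the default execution engine for all Hive sessions -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
```

After restarting the Hive services, new sessions picked up Spark without needing the per-session "set" command.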
However, this does seem like a problem in Cloudera Manager...