Explorer
Posts: 21
Registered: 03-15-2016

Hive on spark problem (cannot persist hive.execution.engine)

Hello

 

I tried to activate Hive on Spark according to this document. Everything works well, but only if I run "set hive.execution.engine=spark;" in the CLI or Beeline for each session. If I restart the Hive client, it defaults back to MapReduce as the execution engine.

I tried to persist the change by adding it to hive-site.xml through the advanced configuration snippet (hive-site.xml), but Hive's behavior did not change.
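
For reference, this is the property block I added in the snippet (standard hive-site.xml syntax; the property name and value are taken straight from the document I followed):

    <property>
      <name>hive.execution.engine</name>
      <value>spark</value>
    </property>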

I also tried setting HIVE_HOME in the Environment Advanced Configuration Snippet, but it did not change anything.

 

Is there a way to persist hive.execution.engine across sessions?

 

Thank you

 

Guy

Posts: 1,903
Kudos: 435
Solutions: 305
Registered: 07-31-2013

Re: Hive on spark problem (cannot persist hive.execution.engine)

Did you make sure to redeploy all client configurations after making the change in the Hive service configuration? See https://www.youtube.com/watch?v=4S9H3wftM_0
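
One quick way to check whether the change actually reached the clients is to inspect the deployed client configuration on a gateway host, for example (assuming the usual CDH client configuration path /etc/hive/conf):

    grep -A 1 hive.execution.engine /etc/hive/conf/hive-site.xml

If the property is missing there, the snippet was either added in the wrong place or the client configuration was never redeployed.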

Also, which specific hive-site.xml Advanced Configuration Snippet did you edit?
Explorer
Posts: 21
Registered: 03-15-2016

Re: Hive on spark problem (cannot persist hive.execution.engine)

Hi

 

Whatever I tried, I could not get it to work from Cloudera Manager. Setting the parameter in the advanced configuration snippet had no effect, and neither did setting SPARK_HOME in the environment configuration snippet (of course I restarted the services).

 

I ended up editing the hive-site.xml file manually, as one would on plain Apache Hadoop, and that did the trick.
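
To confirm the setting now survives a restart, I open a fresh Beeline session and print the current value, which returns spark instead of mr:

    set hive.execution.engine;

(I expect Cloudera Manager will overwrite a manually edited /etc/hive/conf/hive-site.xml the next time client configurations are redeployed, so this is a workaround rather than a fix.)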

However, it seems like a problem in Cloudera Manager...

 

Thanks

 

Guy