
Hive on Spark problem (cannot persist hive.execution.engine)

Explorer

Hello

 

I tried to activate Hive on Spark according to this document. Everything works well, but only if I run "set hive.execution.engine=spark;" in the CLI or beeline for each session. If I restart the Hive client, it defaults back to MapReduce as the execution engine.
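For context, the per-session workaround looks like this inside beeline or the Hive CLI (a `set` with no value just echoes the current setting):

```sql
-- Takes effect only for the current session; lost when the client restarts
set hive.execution.engine=spark;

-- Verify which engine is currently active
set hive.execution.engine;
```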

To persist the change, I added it to hive-site.xml by editing the Advanced Configuration Snippet (hive-site.xml), but Hive's behavior did not change.

I also tried to set HIVE_HOME in the Environment Advanced Configuration Snippet, but it did not change anything.

 

Is there a way to persist hive.execution.engine across sessions?

 

Thank you

 

Guy

2 REPLIES

Re: Hive on Spark problem (cannot persist hive.execution.engine)

Master Guru
Did you redeploy all client configurations after making the change in the Hive service configuration? See https://www.youtube.com/watch?v=4S9H3wftM_0

Also, which specific hive-site.xml Advanced Configuration Snippet did you edit?

Re: Hive on Spark problem (cannot persist hive.execution.engine)

Explorer

Hi

 

Whatever I tried, I could not get it to work from Cloudera Manager. Setting the parameter in the Advanced Configuration Snippet had no effect, and neither did setting SPARK_HOME in the Environment Configuration Snippet (of course, I restarted the services).

 

I ended up editing the hive-site.xml file manually, as in plain Apache Hadoop, and that did the trick.

However, this looks like a problem in Cloudera Manager...
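For anyone landing here with the same issue: the manual fix amounts to a property entry like the one below in hive-site.xml. The exact file path depends on the install; /etc/hive/conf/hive-site.xml is typical, but that path is an assumption on my part.

```xml
<!-- hive-site.xml: make Spark the default execution engine -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
  <!-- default is "mr" (MapReduce) -->
</property>
```

Restart the Hive services after editing so the new default is picked up.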

 

Thanks

 

Guy