Zeppelin, Livy, Hive, Kerberos & Spark 1.6

New Contributor

When using Zeppelin with Livy on a kerberized CDH 5.10 cluster and trying to access Hive, I ran into this:

https://issues.apache.org/jira/browse/SPARK-13478

Since Hive on Spark is not supported on Spark 2.0 in CDH 5.10, only on Spark 1.6, when will the fix be backported?

The fix is available in Spark 1.6.4, so one option would be to upgrade Hive on Spark to support Spark 1.6.4. What is your timeline for doing this?

Also, when will you provide packaging for Livy?

Thank you.

 

4 REPLIES


Expert Contributor

Hi,

Something to be careful about: when you do "Deploy Client Configuration" on your Spark2 service, it will remove the link to (or your copy of) hive-site.xml if you have placed one there.

I have also noticed that all of these configs are in $SPARK_CONF_DIR/yarn-conf/, so I wish Livy could load them from there when it starts up Spark.
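As a rough sketch of the manual workaround (the /etc/spark2/conf and /etc/hive/conf paths are assumptions based on default CDH client-config locations; adjust to your cluster), keeping in mind it has to be redone after every "Deploy Client Configuration":

# Hive/YARN client configs that Cloudera Manager deploys for Spark2 (path is an assumption)
ls /etc/spark2/conf/yarn-conf/

# Re-create the hive-site.xml link that "Deploy Client Configuration" removes
ln -sf /etc/hive/conf/hive-site.xml /etc/spark2/conf/hive-site.xml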

 


 

Expert Contributor
OK, I have tried it, and it seems best to copy hive-site.xml into livy/conf/; Livy will then load it in every session.
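For example (a rough sketch; the /opt/livy install path is an assumption, adjust to wherever Livy is unpacked):

# copy the Hive client config into Livy's conf dir so every new session picks it up
cp /etc/hive/conf/hive-site.xml /opt/livy/conf/
# then restart the Livy server so new sessions load it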

Best,

New Contributor
Hi,

Set this in livy-env.sh instead to get it working in a more maintainable way:
export HADOOP_CONF_DIR=/etc/hive/conf
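For example (a hedged sketch; the /opt/livy paths and the livy-server start/stop commands assume a tarball install of Apache Livy, adjust to your layout):

# make Livy pick up the Hive client config for every session
echo 'export HADOOP_CONF_DIR=/etc/hive/conf' >> /opt/livy/conf/livy-env.sh

# restart the Livy server so the setting takes effect
/opt/livy/bin/livy-server stop
/opt/livy/bin/livy-server start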