Member since: 05-26-2016
Posts: 3
Kudos Received: 2
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3052 | 11-14-2016 11:24 AM |
| | 1949 | 05-26-2016 06:18 PM |
11-14-2016 11:24 AM | 1 Kudo
Hi @azeltov, please use the following snippet, which worked for me:

%livy
sc.version
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

%livy.sql
show tables
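
To make the context sharing concrete, here is a minimal sketch along the same lines (the DataFrame and the table name "people" are my own illustrative examples, not from the question, and it assumes %livy.sql evaluates SQL against the sqlContext defined in the same session, as in the snippet above):

```
%livy
// Build a small example DataFrame on the shared SparkContext and expose it to SQL.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
import sqlContext.implicits._
val people = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")
people.registerTempTable("people")
```

```
%livy.sql
-- Same Livy session, so the temp table registered above should be visible here.
select name, age from people where age > 26
```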
06-02-2016 01:37 PM | 1 Kudo
@Artem Ervits, context sharing in Spark just got better with the latest tech preview of Zeppelin, which is integrated with Livy: https://hortonworks.com/hadoop-tutorial/apache-zeppelin-hdp-2-4-2/. Livy acts as a job server and also enables multi-user scenarios, allowing users to latch on to an existing session.
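
To illustrate the "job server" part, here is a rough Scala sketch of talking to Livy's REST API directly (my own example, not from the tutorial; it assumes Livy is listening on its default port 8998 on localhost):

```
import java.net.{HttpURLConnection, URL}
import scala.io.Source

object LivyRestSketch {
  private val base = "http://localhost:8998"

  // POST a JSON body to the Livy server and return the raw response.
  private def post(path: String, json: String): String = {
    val conn = new URL(base + path).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)
    conn.getOutputStream.write(json.getBytes("UTF-8"))
    Source.fromInputStream(conn.getInputStream).mkString
  }

  def main(args: Array[String]): Unit = {
    // Create an interactive Spark session; the response JSON contains the new session id.
    println(post("/sessions", """{"kind": "spark"}"""))

    // Any client that knows an existing session id can submit statements to it and
    // reuse the same SparkContext, e.g. (session id 0 is just an example):
    // println(post("/sessions/0/statements", """{"code": "sc.version"}"""))
  }
}
```

This is how multiple users can latch on to one session: they all talk to the same session id, and Livy keeps the underlying SparkContext alive between requests.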
05-26-2016 06:18 PM
@Bigdata Lover Please take a look here: http://hortonworks.com/apache/zeppelin/#section_3. When you use the %spark interpreter, the job currently runs as the zeppelin user. At this point, impersonation works only with the Livy interpreter, %lspark; in that case, the jobs run as the logged-in user.
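
As a quick way to see which user a paragraph actually runs as, here is a small sketch (my own, not from the linked doc): run it under the Livy interpreter and it should print the logged-in Zeppelin user, whereas under %spark it would print the zeppelin service user.

```
%lspark
// Print the user the Spark job runs as on the cluster side.
import org.apache.hadoop.security.UserGroupInformation
println(UserGroupInformation.getCurrentUser().getUserName())
```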