
Can I use SparkSQL on a cluster using Hive on Spark?

Solved

Explorer

I am using a CDH 6.1.1 cluster.

The cluster is configured to use Spark as the execution engine for Hive (Hive on Spark).

Is there anything wrong with using SparkSQL on this cluster?

Is it OK to create Hive tables and change data using SparkSQL?

Since SparkSQL uses the Hive Metastore, I suspect there may be a conflict between SparkSQL and Hive on Spark.
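
For concreteness, here is the kind of operation I have in mind (a minimal sketch; the database, table, and column names are made up):

%spark
// Hypothetical example: create a Hive table and change data from SparkSQL.
spark.sql("CREATE DATABASE IF NOT EXISTS demo_db")
spark.sql("CREATE TABLE IF NOT EXISTS demo_db.events (id INT, name STRING) STORED AS PARQUET")
spark.sql("INSERT INTO demo_db.events VALUES (1, 'first'), (2, 'second')")
spark.sql("SELECT * FROM demo_db.events").show()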

 

In addition, I would appreciate a pointer to documentation on how to integrate Cloudera CDH Hive with Apache Zeppelin's Spark interpreter.

 

Thank you.

 

1 ACCEPTED SOLUTION


Re: Can I use SparkSQL on a cluster using Hive on Spark?

Cloudera Employee

Hey @avengers,

 

I thought this could add some more value to this question.

 

Spark SQL uses a Hive Metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions) in a relational database (for fast access) [1].

 

Also, I don't think the metastore would crash if you use SparkSQL along with Hive on Spark.

 

[1] https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-hive-metastore.html
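
For reference, a minimal sketch of how a Spark application attaches to that shared metastore (assuming hive-site.xml from the cluster is on the classpath, which Cloudera Manager normally handles for Spark gateway hosts):

import org.apache.spark.sql.SparkSession

// Build a session backed by the cluster's Hive Metastore.
// hive-site.xml supplies the metastore URI; enableHiveSupport()
// makes SparkSQL share the same catalog as Hive on Spark.
val spark = SparkSession.builder()
  .appName("sparksql-with-hive-metastore")
  .enableHiveSupport()
  .getOrCreate()

// Metadata calls go through the shared metastore service; query
// execution stays inside this Spark application, independent of
// Hive's own execution engine.
spark.sql("SHOW DATABASES").show()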

5 REPLIES

Re: Can I use SparkSQL on a cluster using Hive on Spark?

Contributor

Hi @avengers,

 

Here are the links for the Hive and Spark interpreter docs:

https://zeppelin.apache.org/docs/0.8.2/interpreter/hive.html

https://zeppelin.apache.org/docs/0.8.2/interpreter/spark.html

 

Best,

Helmi KHALIFA

 

Re: Can I use SparkSQL on a cluster using Hive on Spark?

Explorer

 

Thanks, but I have already read them.

I'm already connecting to Hive from Zeppelin using JDBC.

 

I want to query Hive tables with SparkSQL.

And I'm wondering whether the metastore might run into conflicts if I use it on a cluster running Hive on Spark.

 

For example:

%spark
// Read a CSV file and register it as a temporary view.
val df = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/somefile.csv")

df.createOrReplaceTempView("csvTable")

%spark.sql
select *
from csvTable lt
join hiveTable rt
  on lt.col = rt.col

 

 


Re: Can I use SparkSQL on a cluster using Hive on Spark?

Contributor

Hi @avengers,

You would need to share variables between two Zeppelin interpreters, and I don't think we can do that between %spark and %spark.sql.

I found an easier way, using a sqlContext inside the same %spark interpreter:

 

%spark
// Load the CSV and register it as a temporary view.
val df = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/somefile.csv")

df.createOrReplaceTempView("csvTable")

// HiveContext gives SQL access to both the temp view and Hive tables.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

val resultat = sqlContext.sql(
  "select * from csvTable lt join hiveTable rt on lt.col = rt.col")

resultat.show()

 

I tried it and it works!
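
One side note: CDH 6 ships Spark 2.x, where HiveContext is deprecated, so the HiveContext line above works but is legacy API. Assuming the Zeppelin Spark interpreter's built-in spark session was created with Hive support (this depends on the interpreter configuration), the same query can go through it directly:

%spark
// Spark 2.x sketch: the SparkSession already exposes sql(), so no
// separate HiveContext is needed when Hive support is enabled.
val resultat = spark.sql(
  "select * from csvTable lt join hiveTable rt on lt.col = rt.col")
resultat.show()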

 

Best,

Helmi KHALIFA


Re: Can I use SparkSQL on a cluster using Hive on Spark?

Contributor

Hi @avengers,

 

If it works for you, would you be kind enough to accept the answer, please?

 

Best,

Helmi KHALIFA
