I am using CDH 6.1.1 Cluster.
Cluster is configured to use Spark as the execution engine for Hive.
Is there anything wrong with using SparkSQL on this Cluster?
Is it ok to create Hive Tables and change data using SparkSQL?
Since SparkSQL uses the Hive Metastore, I suspect that there may be a conflict between SparkSQL and Hive on Spark.
In addition, please refer to the documentation on how to integrate Cloudera CDH Hive with Apache Zeppelin's Spark interpreter.
I thought this could add some more value to the question here.
Spark SQL uses the Hive Metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions) in a relational database (for fast access).
Also, I don't think the Metastore would break if we use Spark SQL alongside Hive on Spark.
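To make the shared-Metastore point concrete, here is a minimal sketch (the app name is an assumption; the session is usually pre-built for you in Zeppelin's %spark interpreter) of how a Spark session is wired to the same Hive Metastore that Hive on Spark uses:

```scala
import org.apache.spark.sql.SparkSession

// enableHiveSupport() makes spark.sql() resolve databases and tables
// through the Hive Metastore instead of Spark's in-memory catalog.
val spark = SparkSession.builder()
  .appName("sparksql-on-hive-metastore") // hypothetical app name
  .enableHiveSupport()
  .getOrCreate()

// This query goes through the same catalog that Hive itself uses.
spark.sql("show tables in default").show()
```

Because both engines only read and write catalog entries through the Metastore service, concurrent use is the normal, supported setup rather than a conflict.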
Hi @av,
Here are the links for the Hive and Spark interpreter docs:
Thanks. However, I have already read them.
I'm already connecting to Hive from Zeppelin using JDBC.
I want to query Hive Table with SparkSQL.
And I'm wondering whether the Metastore will run into conflicts if I use it in a cluster that runs Hive on Spark.
val df = spark.read.format("csv").option("header", "true").load("/somefile.csv")
// register the CSV as a temp view so it can be joined with the Hive table
df.createOrReplaceTempView("csvTable")
val result = spark.sql("select * from csvTable lt join hiveTable rt on lt.col = rt.col")
Hi @avengers,
You would need to share variables between two Zeppelin interpreters, and I don't think that can be done between %spark and %sql.
I found an easier way: use sqlContext inside the same %spark interpreter:
val df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("/somefile.csv")
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val resultat = sqlContext.sql("select * from csvTable lt join hiveTable rt on lt.col = rt.col")
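As a side note, on CDH 6 (Spark 2.x) HiveContext is deprecated; the same join can be written against the session directly, since Zeppelin's %spark interpreter already provides a Hive-enabled `spark`. A sketch, reusing the file path and table names from this thread (which are themselves placeholders):

```scala
// In Spark 2.x, SparkSession replaces HiveContext: spark.sql()
// resolves both the registered temp view and the Hive table.
val df = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/somefile.csv")

// Expose the CSV DataFrame to SQL under the name used in the join.
df.createOrReplaceTempView("csvTable")

val result = spark.sql(
  "select * from csvTable lt join hiveTable rt on lt.col = rt.col")
```

This avoids constructing a second context over the same SparkContext, which is the main reason HiveContext was deprecated.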
I tried it and it works!
If it works for you, would you be kind enough to accept the answer, please?