Support Questions

HiveWarehouseSession vs SQLContext spark execution


Can someone explain what is different about the two Spark execution paths shown below?


Environment: CDP private cluster

Spark version 2


We have a full-ACID Hive managed table that we need to access from a Spark ETL job. We followed the provided documentation to connect via the Hive Warehouse Connector (HWC).


Beyond using the Hive Warehouse Connector to access ACID tables, what execution differences are there between the two submissions? We don't see any DAG in the Spark history server, and the query takes about 3x longer than a similar query run through SQLContext against a non-ACID managed table.
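For context, the two jobs are also submitted differently: the HWC job needs the connector jar, the Python bindings, and the HS2/HMS endpoints passed at submit time. A minimal sketch (the jar/zip paths, host names, and database are placeholders for your cluster's values):

```shell
# Hedged sketch of an HWC submission on CDP; adjust paths and endpoints
# to match your cluster. The plain-SQLContext job needs none of this.
spark-submit \
  --jars /opt/cloudera/parcels/CDH/lib/hive_warehouse_connector/hive-warehouse-connector-assembly.jar \
  --py-files /opt/cloudera/parcels/CDH/lib/hive_warehouse_connector/pyspark_hwc.zip \
  --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://hs2-host:10000/default" \
  --conf spark.datasource.hive.warehouse.metastoreUri="thrift://hms-host:9083" \
  etl_job.py
```

The exact config keys can vary between HWC releases, so check the documentation matching your CDP version.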


from pyspark_llap import HiveWarehouseSession
hive = HiveWarehouseSession.session(spark).build()

df = hive.sql("select * from incidents LIMIT 100")


#additional spark transformation code..

# NO DAG in spark history server, slower, takes higher memory


The same pattern using SQLContext

from pyspark.sql import SQLContext

sqlSparkContext = SQLContext(spark.sparkContext)

df = sqlSparkContext.sql("select * from incidents LIMIT 100")


#additional spark transformation code..

# SHOWS DAG in spark history server, faster


Apart from Hive table access, can someone please explain the differences between the two: where the HiveWarehouseSession code actually executes, which engines are in play, optimization, memory usage, etc., versus Spark code using SQLContext?



In the case of HWC, the user query is processed through the HWC API, which connects to HiveServer2 (HS2); HS2 then executes the query either within HS2 itself or on Tez/LLAP daemons. Because the heavy lifting happens on the Hive side, the Spark history server shows no DAG for the query itself.

In the case of the plain Spark API, Spark's own framework executes the query, fetching the necessary metadata about the table from the Hive Metastore (HMS).


Please refer to the articles below to learn more about HWC.