Test Spark SQL performance


I want to execute the TPC-H queries with Spark to test Spark's performance. I have already read a lot about this subject, but I still have some doubts.

The main doubt is this: I have already generated the files for each TPC-H table, but where should these tables be stored? Where am I supposed to create the database schema, so that the database can be accessed with Spark SQL?

More details:

From what I have learned so far, Spark SQL enables Spark to access a database and execute SQL queries without needing Hive, right? So, if I want to use Spark SQL to run the TPC-H queries, already having the files for each TPC-H table, where do I create the database schema from those table files? Does it have to be created in Hive? Can't it be done in Spark SQL?

I ask because I have already seen several studies where people store the TPC-H tables in Hive and then run the TPC-H queries with Spark SQL against those Hive tables. But if we create the database schema in Hive and then access those tables from Spark SQL, aren't we in fact using Hive rather than Spark SQL? In terms of performance, wouldn't we be testing HiveQL instead of Spark SQL?

These questions may be a bit basic, but even after reading a lot about the subject, these doubts remain.



New Contributor

@Jan J

You don't need to store your data in a database per se; you can store columnar files in S3 or HDFS and load them with Spark's APIs. That lets you use Spark SQL's own features rather than the HiveQL shim. A sample application would look as follows:

from datetime import datetime

from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext()
spark = SparkSession(sc)


# Import the Parquet files from HDFS and register a temp view for each
# table the query references (the query below also needs lineitem,
# supplier, nation and region, not just customer and orders)
for table in ["customer", "orders", "lineitem", "supplier", "nation", "region"]:
    df = spark.read.parquet("hdfs:///%s.parquet" % table)
    df.createOrReplaceTempView(table)
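# Optional, in case you do want a persistent schema: Spark SQL can create
# and manage one itself, with no separate Hive installation. The database
# name "tpch" below is just an example:
#   spark.sql("CREATE DATABASE IF NOT EXISTS tpch")
#   spark.read.parquet("hdfs:///customer.parquet").write.saveAsTable("tpch.customer")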

# Define a helper that runs a query, shows the result, and records its wall-clock runtime

# init vars 
runtimes = [] 

def runBenchmarkQuery(query, message):
    print("Starting: " + message)
    # start time
    queryStartTime = datetime.now()
    # run the query and show the result
    spark.sql(query).show()
    # end time
    queryStopTime = datetime.now()
    runTime = (queryStopTime - queryStartTime).total_seconds()
    print("Runtime: %.1f seconds" % runTime)
    runtimes.append(runTime)
    print("Finishing: " + message)
    return

# TPC-H Query 5 (local supplier volume; note this is Q5 of the suite, not Q1)
runBenchmarkQuery("""
    SELECT
    n_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue
FROM
    customer,
    orders,
    lineitem,
    supplier,
    nation,
    region
WHERE
    c_custkey = o_custkey
    AND l_orderkey = o_orderkey
    AND l_suppkey = s_suppkey
    AND c_nationkey = s_nationkey
    AND s_nationkey = n_nationkey
    AND n_regionkey = r_regionkey
    AND r_name = 'AFRICA'
    AND o_orderdate >= cast('1993-01-01' as date)
    AND o_orderdate < add_months(cast('1993-01-01' as date), 12)
GROUP BY
    n_name
ORDER BY
    revenue desc
""", "Run 1")