
Test Spark SQL performance


I want to execute the TPC-H queries with Spark to test Spark's performance. I have already read a lot about this subject, but I still have some doubts.

The main doubt is this: I have already generated the files for each TPC-H table, but where do I store these tables now? Where am I supposed to create the database schema, so that I can access that database with Spark SQL?

More details:

From what I have learned, Spark SQL enables Spark to access a database and execute SQL queries without needing Hive, right? So, if I want to use Spark SQL to execute the TPC-H queries, and I already have the files for each TPC-H table, where do I create the database schema from those table files? Is it necessary to create it in Hive? Can't it be done in Spark SQL?

I have seen many studies where people store the TPC-H tables in Hive and then execute the TPC-H queries with Spark SQL against those Hive tables. But if we create the database schema in Hive and then access those tables with Spark SQL, aren't we in fact using Hive rather than Spark SQL? In terms of performance, aren't we really testing HiveQL rather than Spark SQL?

These questions may be a little basic, but even after reading a lot about this subject I still have these doubts.

1 ACCEPTED SOLUTION

Master Mentor
@Jan J

I would store the files in HDFS and leverage Hive. See this: http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables

When you use Spark to read Hive tables, you are using Spark's features to read the data from Hive.
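
For example, here is a minimal sketch of that setup (not from the original post: the HDFS path and the table used are illustrative, and on Spark 1.6 a HiveContext plays the role the SparkSession plays here):

from pyspark.sql import SparkSession

# build a session with access to the Hive metastore
spark = SparkSession.builder \
    .appName("tpch-hive-tables") \
    .enableHiveSupport() \
    .getOrCreate()

# declare an external Hive table on top of the generated TPC-H files in HDFS;
# dbgen output is pipe-delimited text, and only the metadata goes into Hive
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS region (
        r_regionkey INT,
        r_name      STRING,
        r_comment   STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    LOCATION 'hdfs:///tpch/region'
""")

# the query below is planned and executed by Spark SQL (Catalyst);
# Hive only supplies the table metadata
spark.sql("SELECT r_name FROM region").show()

So the data sits in HDFS, the schema sits in the Hive metastore, and the query itself runs on Spark's engine.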

"In terms of performance, aren't we really testing HiveQL rather than Spark SQL?"

You are testing the performance of Spark SQL working with Hive tables.

All the answers are in the overview:

http://spark.apache.org/docs/latest/sql-programming-guide.html#overview

SQL

One use of Spark SQL is to execute SQL queries written using either a basic SQL syntax or HiveQL. Spark SQL can also be used to read data from an existing Hive installation. For more on how to configure this feature, please refer to the Hive Tables section. When running SQL from within another programming language the results will be returned as a DataFrame. You can also interact with the SQL interface using the command-line or over JDBC/ODBC.

DataFrames

A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.

The DataFrame API is available in Scala, Java, Python, and R.
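
As a quick PySpark illustration of those sources (the file path and the sample rows below are made up for the example):

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("dataframe-sources").getOrCreate()

# from a structured data file
df_from_file = spark.read.parquet("hdfs:///tpch/nation.parquet")

# from a Hive table (requires a session built with enableHiveSupport())
# df_from_hive = spark.table("nation")

# from an existing RDD of Row objects
rdd = spark.sparkContext.parallelize([Row(n_name="FRANCE"), Row(n_name="BRAZIL")])
df_from_rdd = spark.createDataFrame(rdd)

# fields of a Row can be accessed by name
print(df_from_rdd.first().n_name)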

Datasets

A Dataset is a new experimental interface added in Spark 1.6 that tries to provide the benefits of RDDs (strong typing, ability to use powerful lambda functions) with the benefits of Spark SQL’s optimized execution engine. A Dataset can be constructed from JVM objects and then manipulated using functional transformations (map, flatMap, filter, etc.).

The unified Dataset API can be used both in Scala and Java. Python does not yet have support for the Dataset API, but due to its dynamic nature many of the benefits are already available (i.e. you can access the field of a row by name naturally row.columnName). Full python support will be added in a future release.


New Contributor

@Jan J

You don't need to store your data in a database per se; you can store columnar files in S3 or HDFS and load them through Spark's API. This lets you use the features of Spark SQL rather than the HiveQL shim. A sample application would look as follows:

from datetime import datetime

from pyspark import SparkContext
from pyspark.sql import SparkSession

# create the SparkContext and wrap it in a SparkSession
sc = SparkContext()
spark = SparkSession(sc)


# import the Parquet files from HDFS
df_customer = spark.read.parquet("hdfs://customer.parquet")
df_orders = spark.read.parquet("hdfs://orders.parquet")

# create temp tables so the now-structured data can be queried with SQL
# (the benchmark query below also joins lineitem, supplier, nation and region,
#  which need to be loaded and registered in the same way)
df_customer.registerTempTable("customer")
df_orders.registerTempTable("orders")

# define a function that runs a query, times it, and collects the runtimes,
# so the differences between the filter predicates and joins stand out

# init vars
runtimes = []

def runBenchmarkQuery(query, message):
    print("Starting: " + message)
    # start time
    queryStartTime = datetime.now()
    # run the query and show the result
    spark.sql(query).show()
    # end time
    queryStopTime = datetime.now()
    runTime = (queryStopTime - queryStartTime).seconds
    print("Runtime: %s seconds" % (runTime))
    runtimes.append(runTime)
    print("Finishing: " + message)
    return

# TPC-H Query 5 (local supplier volume)
runBenchmarkQuery("""
    SELECT
    n_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue
FROM
    customer,
    orders,
    lineitem,
    supplier,
    nation,
    region
WHERE
    c_custkey = o_custkey
    AND l_orderkey = o_orderkey
    AND l_suppkey = s_suppkey
    AND c_nationkey = s_nationkey
    AND s_nationkey = n_nationkey
    AND n_regionkey = r_regionkey
    AND r_name = 'AFRICA'
    AND o_orderdate >= cast('1993-01-01' as date)
    AND o_orderdate < add_months(cast('1993-01-01' as date), 12)
GROUP BY
    n_name
ORDER BY
    revenue desc
""", "Run 1")