
How Spark works to analyze huge databases


Hi,

I'm studying Spark because I have read some studies about it, and it seems great for processing large volumes of data. So I was thinking of experimenting with this by generating 100 GB of data with a benchmark like TPC and executing the queries with Spark on 2 nodes, but I have some doubts about how to do this.

Do I need to install two Hadoop nodes to store the TPC tables, and then execute the queries with Spark against HDFS? How can we create the TPC schema and store the tables in HDFS? Is that possible? Or is it not necessary to install Hadoop, and we should use Hive instead? I have been reading some articles about this, but I'm getting a bit confused. Thanks for your attention!

1 ACCEPTED SOLUTION

Master Mentor
@Jan J

I wouldn't start with a 2-node cluster. Use a minimum of 3 to 5 nodes for a lab environment, for example 2 masters and 3 DataNodes.

You need to deploy a cluster; use Ambari to deploy HDP.

https://github.com/cartershanklin/hive-testbench - you can generate Hive data with the testbench and then test Spark SQL.

Also see https://github.com/databricks/spark-perf

Yes, you should start with Hadoop and take advantage of the distributed computing framework.
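
To give an idea of what the Spark SQL test looks like once the testbench data is in place, here is a minimal spark-shell sketch. It assumes Spark 2.x with Hive support (on the Spark 1.6 bundled with older HDP, use sqlContext instead of spark), and the database name tpch_flat_orc_100 is only a guess at what your testbench run produces - adjust it to whatever name your run actually created:

```scala
// Minimal smoke test from the spark-shell against the hive-testbench output.
// Assumption: the testbench run created a database named tpch_flat_orc_100.
spark.sql("USE tpch_flat_orc_100")

// A TPC-H Q1-style aggregation over the lineitem table.
spark.sql("""
  SELECT l_returnflag,
         l_linestatus,
         SUM(l_quantity)      AS sum_qty,
         SUM(l_extendedprice) AS sum_base_price,
         COUNT(*)             AS count_order
  FROM   lineitem
  WHERE  l_shipdate <= '1998-09-02'
  GROUP BY l_returnflag, l_linestatus
  ORDER BY l_returnflag, l_linestatus
""").show()
```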


11 REPLIES



Thanks for your help. So first I need to install a Hadoop cluster and upload the tables (.tbl) into HDFS? And then also create the schema and store the tables in Hive?

Master Mentor

@Jan J

Yes:

1) Set up the cluster

2) Load the data - see the link shared above (a sketch of this step follows below)

3) You can also use the Spark repo: https://github.com/databricks/spark-perf

The main step is to build a cluster; after that the options are unlimited.
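
For step 2, if you prefer to load the raw dbgen .tbl files yourself rather than using the testbench scripts, a rough sketch looks like this. It assumes the files were already copied into HDFS (for example with hdfs dfs -put); the HDFS path and database name are just placeholders:

```scala
// Sketch: define a Hive external table over raw, pipe-delimited .tbl files
// already sitting in HDFS. Path and database name are placeholders.
// Run from a spark-shell with Hive support on the cluster.
spark.sql("CREATE DATABASE IF NOT EXISTS tpch")

spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS tpch.region (
    r_regionkey INT,
    r_name      STRING,
    r_comment   STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE
  LOCATION 'hdfs:///user/jan/tpch/region'
""")

// Sanity check that the rows are readable (dbgen's trailing '|' is harmless
// here because extra fields without a matching column are simply dropped).
spark.sql("SELECT COUNT(*) FROM tpch.region").show()
```

The other TPC-H tables (lineitem, orders, and so on) are created the same way, just with their own column lists.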


Thanks again for your help. So OK, the first step is to set up a Hadoop cluster. But the link you shared, https://github.com/databricks/spark-perf, has a step titled "Running on existing Spark cluster". So if we want to execute some queries with Spark, isn't it possible to create a Spark cluster with 4 nodes and store the tables there, instead of creating a Hadoop cluster?


Also, is Hive really necessary? Can't we have just the Hadoop cluster with the table data and execute queries with Spark against Hadoop, without Hive?

Master Mentor

@Jan J See this: http://spark.apache.org/sql/ You have various options to access structured data.
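
For example, Spark can read the pipe-delimited files straight from HDFS and query them without any Hive involvement at all. A minimal sketch, assuming Spark 2.x and the same hypothetical HDFS path as before:

```scala
// Sketch: query raw TPC-H data in HDFS with Spark SQL only - no Hive.
// Assumes Spark 2.x; the HDFS path is a placeholder.
import org.apache.spark.sql.types._

val regionSchema = StructType(Seq(
  StructField("r_regionkey", IntegerType),
  StructField("r_name",      StringType),
  StructField("r_comment",   StringType)
))

val region = spark.read
  .option("sep", "|")          // dbgen files are pipe-delimited
  .schema(regionSchema)
  .csv("hdfs:///user/jan/tpch/region")

// Register the DataFrame as a temporary view and query it with plain SQL.
region.createOrReplaceTempView("region")
spark.sql("SELECT r_name FROM region ORDER BY r_name").show()
```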


Thanks again. I read the link, but I still have doubts. Is it really necessary to install Hadoop, then Hive, then create the database schema in Hive, load the data into Hive, and only then use Spark to query the Hive database? Isn't it possible to install Hadoop, load the TPC-H schema and data into HDFS, and query that data directly with Spark? I'm reading a lot of documentation, but I still don't understand the best solution for this.


Because I want to test the TPC-H queries with Spark, not with Hive. Is it really necessary to use Hive as an intermediary to execute queries with Spark?

Master Mentor

@Jan J You have several options to access the data. Since Hive/HQL is the industry standard for interacting with Hadoop, most users leverage Spark SQL together with Hive.

Please read the overview http://spark.apache.org/docs/latest/sql-programming-guide.html#overview
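
To make the division of labour concrete: even when the tables are registered in Hive, the metastore only supplies the table definitions, and the queries themselves run in Spark. A minimal standalone application might look like the sketch below, where the tpch.orders table is assumed to have been created the same way as the region example earlier:

```scala
// Sketch: a standalone Spark application running a TPC-H style query against
// a table registered in the Hive metastore. Hive only provides the catalog;
// the query itself executes in Spark. Database/table names are assumptions.
import org.apache.spark.sql.SparkSession

object TpchQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tpch-spark-sql")
      .enableHiveSupport()   // pick up table definitions from the Hive metastore
      .getOrCreate()

    spark.sql(
      """SELECT o_orderpriority, COUNT(*) AS order_count
        |FROM tpch.orders
        |WHERE o_orderdate >= '1993-07-01' AND o_orderdate < '1993-10-01'
        |GROUP BY o_orderpriority
        |ORDER BY o_orderpriority""".stripMargin).show()

    spark.stop()
  }
}
```

Packaged as a jar and launched with spark-submit, this runs on the YARN cluster without executing anything through Hive itself.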