How spark works to analyze huge databases
- Labels: Apache Hadoop, Apache Spark
Created ‎02-28-2016 01:19 PM
Hi,
I'm studying Spark because I read some studies about it, and it seems amazing for processing large volumes of data. So I was thinking of experimenting with this: generating 100 GB of data with a benchmark like TPC and executing the queries with Spark on 2 nodes, but I have some doubts about how to do this.
Do I need to install two Hadoop nodes to store the TPC tables, and then execute the queries with Spark against HDFS? And how can we create the TPC schema and store the tables in HDFS? Is that possible? Or is it not necessary to install Hadoop, and should we use Hive instead? I've been reading some articles about this, but I'm getting a bit confused. Thanks for your attention!
Created ‎02-28-2016 01:24 PM
I wouldn't start with a 2-node cluster. Use a minimum of 3 to 5 nodes for a lab environment: 2 masters and 3 DataNodes.
You need to deploy a cluster - use Ambari to deploy HDP.
https://github.com/cartershanklin/hive-testbench - you can generate Hive data using the testbench, then you can test Spark SQL.
And https://github.com/databricks/spark-perf
Yes, you should start with Hadoop and take advantage of the distributed computing framework.
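For reference, once the testbench has populated Hive, a first Spark SQL smoke test from spark-shell could look roughly like this. This is a minimal sketch for Spark 1.x; the database name depends on the file format and scale factor the setup script used, so treat `tpch_flat_orc_2` and the query as placeholders:

```scala
// Minimal sketch (Spark 1.x spark-shell, which already provides `sc`):
// query a TPC-H table loaded by hive-testbench through Spark SQL's Hive support.
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc)
hiveCtx.sql("USE tpch_flat_orc_2")   // placeholder database name -- check what the setup script actually created

// A simple aggregation over the lineitem table as a smoke test
val flags = hiveCtx.sql(
  "SELECT l_returnflag, SUM(l_quantity) AS total_qty FROM lineitem GROUP BY l_returnflag")
flags.show()
```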
Created ‎02-28-2016 01:30 PM
Thanks for your help. So first I need to install a Hadoop cluster and upload the tables (.tbl) into Hadoop? And then also create the schema and store the tables in Hive?
Created ‎02-28-2016 01:50 PM
Yes:
1) Set up the cluster
2) Load the data - see the link shared above
3) You can also use the Spark perf repo: https://github.com/databricks/spark-perf
The main step is to build a cluster; after that the options are unlimited.
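If you prefer to load the raw TPC-H .tbl files yourself rather than relying on the testbench scripts, step 2 could be sketched as below. The HDFS path and the `region` table are illustrative assumptions; the .tbl files are pipe-delimited text, so an external Hive table over the HDFS directory is enough:

```scala
// Minimal sketch: define an external Hive table over .tbl files already
// copied to a (hypothetical) HDFS directory, then query it from Spark.
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc)   // `sc` is provided by spark-shell

hiveCtx.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS region (
    r_regionkey INT,
    r_name      STRING,
    r_comment   STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE
  LOCATION '/data/tpch/region'
""")

hiveCtx.sql("SELECT COUNT(*) FROM region").show()
```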
Created ‎02-28-2016 09:35 PM
Thanks again for your help. So OK, the first step is to set up a Hadoop cluster. But the link you shared, https://github.com/databricks/spark-perf, has a step titled "Running on existing Spark cluster". So if we want to execute some queries with Spark, isn't it possible to create a Spark cluster with 4 nodes and store the tables there, instead of creating a Hadoop cluster?
Created ‎02-28-2016 11:18 PM
And also, is Hive really necessary? Can't we have only the Hadoop cluster with the table data and execute queries with Spark against Hadoop, without Hive?
Created ‎02-29-2016 12:49 AM
@Jan J See this: http://spark.apache.org/sql/ - you have various options for accessing structured data.
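For example, one of those options skips Hive entirely: read the raw .tbl files from HDFS and register them as a temporary table. This is a minimal Spark 1.x sketch; the path and the case class are assumptions, and TPC-H .tbl files are pipe-delimited:

```scala
// Minimal sketch: query TPC-H data without Hive by building a DataFrame
// directly from a pipe-delimited .tbl file in HDFS (path is hypothetical).
case class Region(r_regionkey: Int, r_name: String, r_comment: String)

val sqlCtx = new org.apache.spark.sql.SQLContext(sc)   // spark-shell also provides `sqlContext`
import sqlCtx.implicits._

val regions = sc.textFile("hdfs:///data/tpch/region.tbl")
  .map(_.split('|'))
  .map(f => Region(f(0).toInt, f(1), f(2)))
  .toDF()

regions.registerTempTable("region")
sqlCtx.sql("SELECT r_name FROM region ORDER BY r_name").show()
```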
Created ‎03-05-2016 05:19 PM
Thanks again. I read the link, but I still have doubts. Is it really necessary to install Hadoop, then Hive, then create the database schema in Hive, load the data into Hive, and then use Spark to query the Hive database? Isn't it possible to install Hadoop, load the TPC-H schema and data into Hadoop, and query the Hadoop data with Spark? I'm reading a lot of documentation, but I really don't understand the best solution for this.
Created ‎03-05-2016 07:39 PM
Because I want to test the TPC-H queries with Spark, not with Hive. Is it necessary to use Hive as an intermediary to execute queries with Spark?
Created ‎03-06-2016 01:30 AM
@Jan J You have options for accessing the data. Since Hive/HQL is the industry standard for interacting with Hadoop, users are leveraging Spark SQL + Hive.
Please read the overview: http://spark.apache.org/docs/latest/sql-programming-guide.html#overview
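To make that concrete, here is a rough sketch of the Spark SQL + Hive combination on Spark 1.x. The `orders` table and its columns are assumed to come from the TPC-H schema loaded earlier; Hive only provides the metastore and storage layout, and the same table can be queried with HQL or with the DataFrame API:

```scala
// Minimal sketch: the same Hive table queried two ways from spark-shell.
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc)

// HQL, exactly as you would write it in Hive
hiveCtx.sql("SELECT o_orderpriority, COUNT(*) AS cnt FROM orders GROUP BY o_orderpriority").show()

// The equivalent query through the DataFrame API
hiveCtx.table("orders").groupBy("o_orderpriority").count().show()
```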
