Created 02-28-2016 01:19 PM
Hi,
I'm studying Spark, because I've read some studies about it and it seems great for processing large volumes of data. So I was thinking of experimenting with it by generating 100 GB of data with a benchmark such as TPC and executing the queries with Spark on 2 nodes, but I have some doubts about how to do this.
Do I need to install two Hadoop nodes to store the TPC tables, and then execute the queries with Spark against HDFS? If so, how do I create the TPC schema and store the tables in HDFS? Is that even possible? Or is it not necessary to install Hadoop, and should I use Hive instead? I've been reading some articles about this but I'm getting a bit confused. Thanks for your attention!
Created 02-28-2016 01:24 PM
I wouldn't start with a 2-node cluster. Use a minimum of 3 to 5 nodes --> this is just a lab env:
2 masters, 3 DataNodes (DN)
You need to deploy a cluster first - use Ambari to deploy HDP.
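Roughly, the Ambari server setup looks like this (a minimal sketch, assuming CentOS/RHEL hosts with the Ambari repo already added; the install wizard then lays HDP out across the nodes):

yum install ambari-server
ambari-server setup        # configures the JDK and database for Ambari
ambari-server start
# then open http://<ambari-server-host>:8080 and run the cluster install wizard
# to deploy HDFS, YARN, Hive and Spark on the 5 nodes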
https://github.com/cartershanklin/hive-testbench - you can generate the Hive test data using the testbench (see the sketch at the end of this post),
then you can test Spark SQL
and
https://github.com/databricks/spark-perf
Yes, you should start with Hadoop and take advantage of the distributed computing framework.
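For reference, a typical hive-testbench run looks roughly like this (a sketch based on the repo's README; scale factor 100 corresponds to roughly 100 GB of raw TPC-DS data):

git clone https://github.com/cartershanklin/hive-testbench
cd hive-testbench
./tpcds-build.sh        # builds the TPC-DS data generator
./tpcds-setup.sh 100    # generates the data in HDFS and creates the Hive tables at scale factor 100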
Created 03-06-2016 03:03 AM
@Jan J Please help me close the thread if this was useful.
Created 03-12-2016 04:33 PM
Thanks for your help. I just don't understand why this link you shared is needed: https://github.com/databricks/spark-perf. Can you explain? In the first step I install Hadoop, then I install Hive and create the schema. Then I can use Spark SQL to execute queries against the Hive schema, right? So why is that link necessary? Thanks again!
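Just to check my understanding, the query step I have in mind is something like this (a sketch, assuming Spark is built with Hive support, hive-site.xml is in Spark's conf/ so it can see the Hive metastore, and the testbench created a database named something like tpcds_bin_partitioned_orc_100 - that name is my assumption):

# run a TPC-DS style query over the Hive tables through the Spark SQL CLI
spark-sql -e "SELECT count(*) FROM tpcds_bin_partitioned_orc_100.store_sales"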