I'm studying Spark because I've read some studies about it and it seems great for processing large volumes of data. So I was thinking of experimenting with this: generating 100 GB of data with a benchmark like TPC and executing the queries with Spark on 2 nodes, but I have some doubts about how to do this.
Do I need to install Hadoop on two nodes to store the TPC tables, and then execute the queries with Spark against HDFS? But how can we create the TPC schema and store the tables in HDFS? Is that possible? Or is it not necessary to install Hadoop, and we should use Hive instead? I've been reading some articles about this but I'm getting a bit confused (I've put a small sketch of what I imagine below). Thanks for your attention!
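To show what I have in mind: is something like this a valid way to create the schema and query it with Spark? This is only a rough sketch of what I imagine, assuming the .tbl files produced by dbgen have already been copied into HDFS (the paths, host name, and app name below are placeholders, and I haven't tested it):

```python
from pyspark.sql import SparkSession

# Sketch only: a plain SparkSession, no Hive metastore involved
spark = SparkSession.builder.appName("tpch-sketch").getOrCreate()

# dbgen writes pipe-delimited .tbl files (each line ends with a trailing "|");
# I assume they were copied into HDFS with `hdfs dfs -put`. Paths are placeholders.
raw = spark.read.option("sep", "|").csv("hdfs://namenode:8020/tpch/lineitem.tbl")

# Keep only the 16 real lineitem columns (the extra one comes from the trailing "|")
cols = ["l_orderkey", "l_partkey", "l_suppkey", "l_linenumber",
        "l_quantity", "l_extendedprice", "l_discount", "l_tax",
        "l_returnflag", "l_linestatus", "l_shipdate", "l_commitdate",
        "l_receiptdate", "l_shipinstruct", "l_shipmode", "l_comment"]
lineitem = raw.select(raw.columns[:len(cols)]).toDF(*cols)

# Register the DataFrame as a temporary view and run SQL against it with Spark
lineitem.createOrReplaceTempView("lineitem")
spark.sql("SELECT l_returnflag, COUNT(*) FROM lineitem GROUP BY l_returnflag").show()
```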
Thanks again for your help. So OK, the first step is to set up a Hadoop cluster. But the link you shared, "https://github.com/databricks/spark-perf", has a step titled "Running on existing spark cluster". So if we want to execute some queries with Spark, isn't it possible to create a Spark cluster with 4 nodes and store the tables there instead of creating a Hadoop cluster?
Thanks again. I read the link but I'm still in doubt. Is it really necessary to install Hadoop, then Hive, then create the database schema in Hive, load the data into Hive, and then use Spark to query the Hive database? Isn't it possible to install Hadoop, load the TPC-H schema and data into HDFS, and query that data directly with Spark? I'm reading a lot of documentation but I really haven't figured out the best solution for this.
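To make my doubt concrete, this is the Hive-less flow I have in mind: load the dbgen files from HDFS once, store them back in HDFS in a columnar format, and then run the queries with Spark against those files. Again, just a sketch of my idea; the HDFS paths are placeholders and I haven't tested it:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tpch-no-hive").getOrCreate()

# One-time load: convert the raw pipe-delimited file from HDFS into Parquet,
# so the "table" is just ordinary files in HDFS, with no Hive metastore at all
(spark.read.option("sep", "|")
      .csv("hdfs://namenode:8020/tpch/raw/lineitem.tbl")
      .write.mode("overwrite")
      .parquet("hdfs://namenode:8020/tpch/tables/lineitem"))

# Query time: point Spark at those files and run the TPC-H queries with Spark SQL
lineitem = spark.read.parquet("hdfs://namenode:8020/tpch/tables/lineitem")
lineitem.createOrReplaceTempView("lineitem")
spark.sql("SELECT COUNT(*) FROM lineitem").show()
```

Is this a reasonable way to do it, or is Hive really required for this kind of benchmark?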