Created 05-25-2016 07:12 PM
Is there a use case that shows how Hadoop and Spark work together? I have already read the theory, but I want to see something practical to get a better understanding. Thanks!!!
Created 05-25-2016 08:55 PM
Spark and Hadoop go together like peanut butter and jelly.
Check out my slides
I worked at a few places that used Spark and Spark Streaming to ingest data into HDFS and HBase. Then Spark with MLlib and H2O to run machine learning on the data. Then Hive and Spark SQL for queries. And reporting through the Hive Thrift server to Tableau.
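To make that pipeline concrete, here is a minimal sketch of the ingest stage using the Spark 1.6 DStream API. The socket source, host, port, and HDFS path are all placeholders for whatever your environment actually uses (Kafka, Flume, etc.):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical ingest job: read a text stream and land each batch in HDFS.
object IngestToHdfs {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("IngestToHdfs")
    val ssc = new StreamingContext(conf, Seconds(10)) // 10-second micro-batches

    // Placeholder source; in practice this would be Kafka, Flume, etc.
    val lines = ssc.socketTextStream("some-host", 9999)

    // Each batch is written as a set of text files under this HDFS prefix.
    lines.saveAsTextFiles("hdfs:///data/ingest/events")

    ssc.start()
    ssc.awaitTermination()
  }
}
```

From there, the machine learning, SQL, and reporting stages all read the same data out of HDFS, which is exactly the point of pairing Spark with Hadoop.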
Spark without Hadoop really misses out on a lot.
And with Spark 1.6 on HDP you get all the benefits of running as a YARN application: common security and locality of data access.
I wouldn't run Spark without Hadoop unless you are running Spark standalone for development.
Even there, Zeppelin + Spark 1.6 on HDP is an awesome development environment.
Created 05-25-2016 08:24 PM
There is a good blog post over at MapR regarding this. I personally think the Network Security use case is especially compelling.
https://www.mapr.com/blog/game-changing-real-time-use-cases-apache-spark-on-hadoop
Created 05-25-2016 08:36 PM
If you take Hadoop to mean HDFS and YARN, Spark can take advantage of both: HDFS (storage that can be expanded horizontally by adding more nodes) by reading its input data from HDFS and writing the final processed data back to HDFS, and YARN (compute that can be expanded horizontally by adding more nodes) by running as a YARN application.
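A minimal sketch of that round trip, assuming hypothetical HDFS paths and submission with `spark-submit --master yarn` so the job runs on YARN next to the data:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Read raw data from HDFS, process it, and write the results back to HDFS.
object HdfsRoundTrip {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HdfsRoundTrip"))

    // Input path is a placeholder for your real dataset.
    val raw = sc.textFile("hdfs:///data/raw/events.txt")

    // Trivial stand-in for real processing: drop empty lines, count words.
    val counts = raw
      .filter(_.trim.nonEmpty)
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///data/processed/word-counts")
    sc.stop()
  }
}
```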
If you are looking for use cases, look at the MLlib algorithms, which cover a lot of use cases and run on top of Spark.
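For example, here is a rough sketch of clustering with MLlib's k-means; the input path and the choice of 3 clusters and 20 iterations are arbitrary placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical job: cluster comma-separated feature vectors stored in HDFS.
object ClusterFeatures {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ClusterFeatures"))

    // Parse each line into a dense feature vector.
    val vectors = sc.textFile("hdfs:///data/features.csv")
      .map(line => Vectors.dense(line.split(',').map(_.toDouble)))
      .cache()

    // Train k-means with 3 clusters and up to 20 iterations (both arbitrary).
    val model = KMeans.train(vectors, 3, 20)

    model.clusterCenters.foreach(println)
    sc.stop()
  }
}
```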
Created 05-28-2016 11:28 PM
Yes, when I think about Hadoop I mean storing the data in HDFS. I don't know what kind of advantage I can get from Spark. Data cleansing?