- Spark is easy to program and doesn't require much hand coding, whereas MapReduce is harder to program and requires a lot of hand coding.
- Spark has an interactive mode (e.g. spark-shell, PySpark), whereas MapReduce has no built-in interactive mode; MapReduce was designed for batch processing.
- Spark is a general-purpose cluster computing engine: for data processing it supports streaming, machine learning, and batch workloads, whereas Hadoop MapReduce offers only a batch engine.
- Spark executes batch processing jobs about 10 to 100 times faster than Hadoop MapReduce.
- Spark uses an abstraction called the RDD (Resilient Distributed Dataset), which makes Spark feature-rich, whereas MapReduce has no such abstraction.
- Spark achieves lower latency by caching partial/complete results in memory across distributed nodes, whereas MapReduce is completely disk-based.
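
The lazy-evaluation and caching behavior described above can be illustrated with a toy sketch. This is a minimal pure-Python model of the idea (the `ToyRDD` class is entirely hypothetical, not the real Spark API): transformations like `map` only build up a computation plan, an action like `collect` forces evaluation, and `cache()` memoizes the result so repeated actions skip recomputation instead of re-reading from disk.

```python
class ToyRDD:
    """Toy sketch (NOT real Spark) of a lazy, cacheable dataset."""

    def __init__(self, compute):
        self._compute = compute   # zero-arg function producing a list
        self._cache = None        # memoized result, if cache() was requested
        self._do_cache = False

    def map(self, f):
        # Lazy transformation: returns a new ToyRDD, computes nothing yet.
        return ToyRDD(lambda: [f(x) for x in self.collect()])

    def filter(self, pred):
        return ToyRDD(lambda: [x for x in self.collect() if pred(x)])

    def cache(self):
        # Mark this dataset to be kept in memory after first evaluation.
        self._do_cache = True
        return self

    def collect(self):
        # Action: triggers evaluation, reusing the cache when available.
        if self._cache is not None:
            return self._cache
        result = self._compute()
        if self._do_cache:
            self._cache = result
        return result


# Track how many times the "source" is actually read.
reads = []
base = ToyRDD(lambda: (reads.append(1) or list(range(5))))
squares = base.map(lambda x: x * x).cache()

first = squares.collect()   # forces one read of the source
second = squares.collect()  # served from the cache, no second read
```

In real Spark, `rdd.cache()` (or `rdd.persist()`) plays the same role: an iterative job that reuses `squares` repeatedly pays the upstream computation once, while a MapReduce pipeline would write and re-read intermediate results on disk between stages.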
For a detailed comparison between Spark and Hadoop MapReduce, please refer to:
Spark vs Hadoop MapReduce