Support Questions

What is the difference between the Partitioner, Combiner, and Shuffle and Sort phases in MapReduce, and what is their order of execution?

Rising Star

What is the difference between the Partitioner, Combiner, and Shuffle and Sort phases in MapReduce, and what is the order of execution of these phases? My understanding of the process flow is as follows:

1) The output of each map task is partitioned and sorted in memory, and the Combiner function runs on it. This output is written to local disk and is called intermediate data.

2) All the intermediate data from all the DataNodes goes through a phase called Shuffle and Sort, which is taken care of by the Hadoop framework.

3) The sorted output is given as input to the reducers.

Please verify whether this process flow is correct and provide your inputs.
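For reference, the standard word-count example shows where the Combiner and Partitioner are plugged into a job. The sketch below is only illustrative (HashPartitioner is already the default partitioner, and the class names are the usual example names, not something taken from this thread):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class WordCount {

  // Emits (word, 1) for every token in the input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Sums counts per word; reused as the combiner because summation is
  // associative and commutative.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);      // combiner: runs on sorted map output
    job.setReducerClass(IntSumReducer.class);
    job.setPartitionerClass(HashPartitioner.class); // decides which reducer gets each key
    job.setNumReduceTasks(2);                       // => 2 partitions per map task
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

With setNumReduceTasks(2), each map task's output is split into 2 partitions, which matches the scenario discussed later in this thread.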


Master Mentor
@Avinash C

Here is the explanation: Link

Rising Star

Thanks, Artem.

Rising Star

As per the Hadoop Definitive Guide, 3rd edition, Chapter 6, The Map Side: "Before it writes to disk, the thread first divides the data into partitions corresponding to the reducers that they will ultimately be sent to. Within each partition, the background thread performs an in-memory sort by key, and if there is a combiner function, it is run on the output of the sort." However, the Yahoo developer tutorial says the Combiner runs prior to the Partitioner. That is why I am confused. Can you please look into it and let me know?

Rising Star

Thank you very much. The pictures you posted resolved my query; a visual representation makes things much easier to understand.

Master Guru

I learned some things through the tutorial as well. If you want to verify your MapReduce knowledge, the HDP Java developer certification is actually a good thing to do.


Rising Star

Let's take an example: an MR job runs on a cluster of 10 DataNodes. Imagine this job needs 10 mappers and 2 reducers.

1) Let's say 2 map tasks run concurrently on each of 5 DataNodes, so in total we get 10 mappers executing simultaneously.

2) The output from each map task (the Combiner result, if a Combiner is used) is stored on the local filesystem of each DataNode.

3) This intermediate data needs to be exchanged between all nodes (the shuffle phase), sorted, and given to the 2 reduce tasks.

So we had 5 DataNodes running map tasks. On which node does partitioning happen, and how many partitions will be created?

Master Guru

OK, that was actually interesting, so I had a look into the code. For open source projects the code is always the definitive source:

You can find most of it in the MapTask class.

-> The map phase runs; its output goes into the sorting collector or the DirectCollector (the latter if there are no reduce tasks).

-> The write already uses the Partitioner, i.e. data is partitioned as it goes into the sorting collector.

-> Sorting Collector is a class called MapOutputBuffer

-> In here we have a combiner class and a Sorting class. Data is sorted in a memory buffer and then spilled.

-> First the data is sorted using the sort buffer, then written to disk either directly OR through the CombinerRunner. The Combiner is also used in the merge phase, when spilled files are merged together into one output file.

So the correct answer is: they do not happen "after" each other, they happen together. While the output data is written, it is partitioned, sorted, spilled and combined, then merged and combined again. Hope that helps. If you want more information, just have a look into the code yourself. It's quite readable.
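To illustrate that ordering, here is a small self-contained Java toy (not the real MapTask/MapOutputBuffer code, just a simulation of the flow described above): records are assigned a partition as they are collected, each partition is then sorted by key, and a combiner-style local aggregation runs on the sorted data before the "spill" would be written.

```java
import java.util.*;

// Toy simulation of the map-side write path: partition at collect time,
// sort each partition by key, then run a combiner-style aggregation on
// the sorted output before spilling. Illustration only, not Hadoop code.
public class MapSideFlowDemo {
  public static void main(String[] args) {
    String[] mapOutput = {"apple", "banana", "apple", "cherry", "banana", "apple"};
    int numReduceTasks = 2;

    // 1) Partition as each record is collected (same idea as the default HashPartitioner).
    Map<Integer, List<String>> buffer = new HashMap<>();
    for (String key : mapOutput) {
      int partition = (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      buffer.computeIfAbsent(partition, p -> new ArrayList<>()).add(key);
    }

    // 2) Sort each partition by key, 3) run the "combiner" (local count per key),
    // 4) this combined, sorted output is what would be spilled to local disk.
    for (Map.Entry<Integer, List<String>> e : buffer.entrySet()) {
      List<String> keys = e.getValue();
      Collections.sort(keys);
      Map<String, Integer> combined = new LinkedHashMap<>();
      for (String key : keys) {
        combined.merge(key, 1, Integer::sum);
      }
      System.out.println("spill segment for reducer " + e.getKey() + ": " + combined);
    }
  }
}
```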

Ben

PS: "So we had 5 DataNodes running map tasks. On which node does partitioning happen, and how many partitions will be created?"

There is one partitioned output file for each map task. So you have 10 map tasks and 2 reducers. This means there will be 2 output files for each map task, one for each reducer (*). Number of partitions = number of reducers (for each map task). Partitioning itself happens on the node running the map task, as part of writing the map output.
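As an illustration of "number of partitions = number of reducers", a custom Partitioner might look like the hypothetical sketch below (FirstLetterPartitioner is an invented name, not part of Hadoop). Hadoop calls getPartition on the node running the map task, once per record, with numPartitions equal to the job's reducer count (2 in this example):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical custom partitioner for the 10-mapper / 2-reducer scenario above.
// getPartition() runs on the node executing each map task, once per record, and
// numPartitions is the job's reducer count (here 2), so every map task produces
// exactly 2 partitions.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Keys starting with a-m go to reducer 0, everything else to reducer 1
    // (modulo kept so the class still works if the reducer count changes).
    String s = key.toString();
    char first = s.isEmpty() ? 'a' : Character.toLowerCase(s.charAt(0));
    return (first <= 'm' ? 0 : 1) % numPartitions;
  }
}
```

It would be registered with job.setPartitionerClass(FirstLetterPartitioner.class); without it, the default HashPartitioner is used.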

When a reducer spins up, it starts downloading the output file for its partition from every map task as they finish, and merge-sorts them into one input set. In your example, each of the 2 reducers will pull 10 datasets (one from each map task) and merge-sort them into a single valid input set.

(*) Actually, each map task only writes one file with offsets, to avoid creating too many small files, if I am not mistaken, but that doesn't change the basic functionality.
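And here is a toy simulation of the reduce-side fetch described above (again plain Java, not Hadoop code): with 10 map tasks and 2 reducers, each reducer pulls its own partition segment from every map output and merge-sorts the 10 segments into one sorted input set.

```java
import java.util.*;

// Toy illustration of the reduce side: each reducer fetches its own partition
// segment from every finished map task (10 segments here) and merge-sorts them
// into a single sorted input set. Bookkeeping only, not real Hadoop code.
public class ReduceSideFetchDemo {
  public static void main(String[] args) {
    int numMapTasks = 10;
    int numReducers = 2;
    Random rnd = new Random(42);
    String[] words = {"apple", "banana", "cherry", "mango", "orange", "peach"};

    // mapOutputs.get(m).get(r) = sorted segment of map task m destined for reducer r
    List<List<List<String>>> mapOutputs = new ArrayList<>();
    for (int m = 0; m < numMapTasks; m++) {
      List<List<String>> partitions = new ArrayList<>();
      for (int r = 0; r < numReducers; r++) partitions.add(new ArrayList<>());
      for (int i = 0; i < 5; i++) {
        String key = words[rnd.nextInt(words.length)];
        int partition = (key.hashCode() & Integer.MAX_VALUE) % numReducers;
        partitions.get(partition).add(key);
      }
      partitions.forEach(Collections::sort);     // each segment is already sorted by key
      mapOutputs.add(partitions);
    }

    // Each reducer pulls one segment per map task and merge-sorts them.
    for (int r = 0; r < numReducers; r++) {
      List<String> merged = new ArrayList<>();
      for (int m = 0; m < numMapTasks; m++) {
        merged.addAll(mapOutputs.get(m).get(r)); // fetch segment m for this reducer
      }
      Collections.sort(merged);                  // stand-in for the real merge sort
      System.out.println("reducer " + r + " input (" + numMapTasks + " segments merged): " + merged);
    }
  }
}
```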