Created 01-22-2017 10:30 AM
Dear folks,
I'm aware that in Hadoop 2.0 the JobTracker architecture was restructured (its responsibilities moved to YARN's ResourceManager and ApplicationMaster). Still, I'd appreciate your help in understanding the classic MapReduce concept.
Process:
The JobTracker receives a job from the client, communicates with the NameNode for the required block-location information, and distributes the input splits to TaskTrackers to get the tasks done.
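To make that flow concrete, here is a minimal sketch of submitting a classic (pre-YARN) job through the old org.apache.hadoop.mapred API. It is a simple pass-through job using IdentityMapper/IdentityReducer; the input and output paths are command-line arguments and are assumptions for illustration, not something from your setup:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class PassThroughDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PassThroughDriver.class);
        conf.setJobName("pass-through");

        // TextInputFormat computes one logical InputSplit per HDFS block
        // by default; the JobTracker schedules map tasks against these
        // splits and never touches the raw bytes itself.
        conf.setInputFormat(TextInputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Identity mapper/reducer just pass records through unchanged.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        // The client submits the job (configuration plus the computed
        // splits) to the JobTracker and blocks until it completes.
        JobClient.runJob(conf);
    }
}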
My Question:
1. When the input data (a 64 MB or 128 MB block) sits in a data block, does the JobTracker further split it into small input records before assigning it to the map/reduce function, or does it just forward the data block, assuming the entire block is required for processing?
2. Does the JobTracker choose a TaskTracker where the input (data block) is located, or is the choice completely random?
Thanks in advance for your help.
Srujan
Created 02-28-2017 09:01 AM
Hi Srujan,
1. The JobTracker does not read or split the data itself. The client-side InputFormat computes logical InputSplits (by default, one split per HDFS block), and the JobTracker simply schedules one map task per split. Inside the map task, the RecordReader reads the split and breaks it into individual input records (for example, lines when TextInputFormat is used); a record that straddles a block boundary is handled by the RecordReader reading a little past the end of its split.
2. It is not random. The JobTracker gets the block locations from the NameNode and prefers a TaskTracker on a node that holds a replica of the split's block (data-local). If no free slot is available there, it falls back to a node in the same rack (rack-local), and only after that to an arbitrary node.
Hope that answers your questions!
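If you want to see those locality hints yourself, here is a minimal sketch (again using the old org.apache.hadoop.mapred API; the input path is taken from the command line as an assumption) that prints the DataNode hostnames backing each input split:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class SplitLocality {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SplitLocality.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));

        TextInputFormat inputFormat = new TextInputFormat();
        inputFormat.configure(conf);

        // One logical split per HDFS block by default; the "1" is only
        // a hint for the desired number of splits, not a hard limit.
        InputSplit[] splits = inputFormat.getSplits(conf, 1);

        for (InputSplit split : splits) {
            // getLocations() returns the hostnames of the DataNodes that
            // hold replicas of this split's block: the locality hints the
            // JobTracker consults when choosing a TaskTracker.
            System.out.println(split + " on hosts: "
                    + String.join(", ", split.getLocations()));
        }
    }
}

Running it against a multi-block file on a real cluster shows several splits, each listing the (typically three) nodes holding a replica.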