Hello all, in Hadoop MapReduce the number of mappers created depends, by default, on the number of input splits. For example, if your input file is 192 MB and the block size is 64 MB, then there will be 3 input splits, and therefore 3 mappers.
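
For reference, here is a minimal sketch of that split arithmetic in Scala (the file and block sizes are just the example values above, not read from HDFS):

    // Each full or partial HDFS block becomes one input split,
    // and by default one mapper runs per input split.
    val fileSizeMB = 192.0   // example input file size
    val blockSizeMB = 64.0   // example HDFS block size
    val numSplits = math.ceil(fileSizeMB / blockSizeMB).toInt
    println(s"input splits (and mappers): $numSplits")  // prints 3
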
In the same way, I would like to know: in Spark, if I submit an application to a standalone cluster (a sort of pseudo-distributed setup) to process 750 MB of input data, how many executors will be created in Spark?
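
In case it helps frame the question, this is roughly how I would inspect the input partitioning myself; the master URL and file path below are placeholders, not my real setup:

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch: count the partitions Spark creates for the input.
    val conf = new SparkConf()
      .setAppName("PartitionCheck")
      .setMaster("spark://localhost:7077")  // hypothetical standalone master
    val sc = new SparkContext(conf)

    // textFile partitions the input along Hadoop input splits, so for a
    // 750 MB file with 64 MB blocks (as in the Hadoop example above)
    // this should report 12 partitions.
    val rdd = sc.textFile("hdfs://localhost:9000/data/input-750mb.txt")
    println(s"partitions: ${rdd.partitions.length}")
    sc.stop()

My understanding is that this gives the number of partitions (and hence tasks), which may not be the same thing as the number of executors, so I would appreciate any clarification.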