
Hi, can someone tell me the maximum number of mappers we can use in an HDP 2.3 cluster?

Explorer
 
1 ACCEPTED SOLUTION


Guru

It depends on the memory you have on your cluster.

Each node has an amount of RAM allocated to YARN (yarn.nodemanager.resource.memory-mb), and each mapper requests a container of a given size (in MapReduce, mapreduce.map.memory.mb). Dividing the first by the second gives you an estimate per node. Keep in mind that memory is also consumed by ApplicationMasters, reducers, etc.
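The per-node estimate described above can be sketched as follows; the memory values here are illustrative assumptions, not HDP defaults:

```python
# Rough estimate of concurrent map containers on one node.
# The values below are illustrative, not cluster defaults.
yarn_node_mem_mb = 96 * 1024   # yarn.nodemanager.resource.memory-mb
map_container_mb = 2048        # mapreduce.map.memory.mb

# Upper bound if every container on the node were a mapper:
max_mappers_per_node = yarn_node_mem_mb // map_container_mb
print(max_mappers_per_node)    # 48

# In practice, ApplicationMasters and reducers also occupy containers,
# so the real number of concurrent mappers on the node is lower.
```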


4 REPLIES

@Ram

If I'm not wrong, it's not that simple to answer. The number of mappers depends on your compute power (CPU and memory) and on the number of containers (when using YARN).

Usually, one JVM corresponds to one mapper.

Depending on your compute capacity, you need to configure the MapReduce memory settings so that you can use the maximum resources (i.e., mappers and reducers).
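One detail worth knowing when tuning these settings: YARN grants container memory in multiples of the scheduler's minimum allocation (yarn.scheduler.minimum-allocation-mb), so a map memory request is rounded up to the next multiple. A small sketch of that rounding, with assumed example values:

```python
import math

def effective_container_mb(requested_mb, min_alloc_mb):
    """YARN rounds a container memory request up to the next
    multiple of yarn.scheduler.minimum-allocation-mb."""
    return math.ceil(requested_mb / min_alloc_mb) * min_alloc_mb

# Example: a 3000 MB map request with a 2048 MB minimum allocation
# is actually granted a 4096 MB container, wasting ~1 GB per mapper.
print(effective_container_mb(3000, 2048))   # 4096
```

This is why mapreduce.map.memory.mb is usually set to an exact multiple of the minimum allocation.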

Please refer to the links below:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/determi...

http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/

https://cloudcelebrity.wordpress.com/2013/08/14/12-key-steps-to-keep-your-hadoop-cluster-running-str...

I don't think there is a hard maximum; at most some theoretical integer limit that you wouldn't reach in any cluster. Or do you mean at the same time? In that case it would be:

(YARN memory per node * number of nodes) / yarn.scheduler.minimum-allocation-mb

So if you have 100 nodes, your nodes have 96 GB of YARN memory each, and your minimum allocation size (and your map size) is 2 GB, it would be 4800.


Rising Star

As others explained, it depends on the number of containers available on your cluster.