Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.

Configuring Yarn/MapReduce2 memory configuration

Rising Star

I have a Hadoop cluster with 2 nodes. Each node has 2 CPUs and 24 GB of memory.

So I am thinking of running 3 mappers and 3 reducers on each node. Is that reasonable?
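As a quick back-of-the-envelope check (my own arithmetic, not from the thread): if roughly 18 GB per node is reserved for YARN containers and each task gets a hypothetical 3 GB container, then 6 concurrent tasks, e.g. 3 mappers plus 3 reducers, fit per node:

```python
# Back-of-the-envelope container math (illustrative assumptions, not a recommendation)
node_yarn_mb = 18 * 1024   # memory reserved for YARN containers on one node
container_mb = 3 * 1024    # hypothetical per-task container size
concurrent_tasks = node_yarn_mb // container_mb
print(concurrent_tasks)    # 6 concurrent tasks per node, e.g. 3 maps + 3 reduces
```

The real concurrency depends on the container sizes you actually configure, since YARN schedules by memory (and optionally vcores), not by fixed mapper/reducer slots.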

In Ambari -> YARN -> Configs, can I set:

Memory allocated for all YARN containers on a node: 18 GB

Minimum container size (memory): 1 GB

Maximum container size (memory): 4 GB

If I configure it as above, Ambari then suggests other configuration values based on these settings. Is it safe to follow those suggestions?
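For reference, the three Ambari fields above correspond to these yarn-site.xml properties. This is only a sketch using the values from the question, not a recommendation:

```xml
<!-- yarn-site.xml: sketch matching the values in the question -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>18432</value> <!-- "Memory allocated for all YARN containers on a node" = 18 GB -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>  <!-- minimum container size = 1 GB -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>  <!-- maximum container size = 4 GB -->
</property>
```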

Likewise, in Ambari -> MapReduce2 -> Configs:

I am not sure what to set for Map Memory, Reduce Memory, and AppMaster Memory.
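Those Ambari fields map to the mapred-site.xml properties below. The sizes here are purely illustrative assumptions (each must fall between the YARN minimum and maximum container sizes); a common rule of thumb is to set the JVM heap (`-Xmx`) to roughly 80% of the container size:

```xml
<!-- mapred-site.xml: illustrative sketch, values are assumptions, not a recommendation -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>  <!-- "Map Memory": container size for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>  <!-- "Reduce Memory": container size for each reduce task -->
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>  <!-- "AppMaster Memory": container size for the MR ApplicationMaster -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>   <!-- heap ~80% of the 2048 MB map container -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>   <!-- heap ~80% of the 4096 MB reduce container -->
</property>
```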

I'd appreciate any insights.

1 ACCEPTED SOLUTION

avatar
1 REPLY 1

avatar