Configuring Yarn/MapReduce2 memory configuration


I have a Hadoop cluster with 2 nodes. Each node has 2 CPUs and 24 GB of memory.

I am thinking of running 3 mappers and 3 reducers on each node. Is that a reasonable plan?

In Ambari -> YARN -> Configs, can I set:

Memory allocated for all YARN containers on a node: 18 GB

Minimum container size (memory): 1 GB

Maximum container size (memory): 4 GB

If I configure the values above, Ambari suggests other settings based on them. Is it safe to follow those suggestions?
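For reference, the three Ambari fields above map to standard YARN properties in yarn-site.xml. This is just a sketch of what Ambari writes out, with the values proposed in the question (not recommendations):

```xml
<!-- yarn-site.xml: the values below are the ones proposed above,
     expressed in MB as YARN expects them. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>18432</value> <!-- 18 GB for all containers on the node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>  <!-- 1 GB minimum container size -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>  <!-- 4 GB maximum container size -->
</property>
```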

Likewise in Ambari -> MapReduce2 -> Configs:

I'm not sure what to set for Map Memory, Reduce Memory, and AppMaster Memory.
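Those three fields correspond to the following mapred-site.xml properties. The values below are purely illustrative (they assume 1-2 GB containers fitting within the min/max above); the heap sizes (-Xmx) follow the common rule of thumb of roughly 80% of the container size, leaving headroom for non-heap JVM memory:

```xml
<!-- mapred-site.xml: illustrative values only, not recommendations. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>      <!-- container size per map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>  <!-- ~80% of map container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>      <!-- container size per reduce task -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value> <!-- ~80% of reduce container -->
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>      <!-- container size for the MR ApplicationMaster -->
</property>
```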

I'd appreciate any insights.
