I have a Hadoop cluster with 2 nodes. Each node has 2 CPUs and 24 GB of memory.
I am thinking of running 3 mappers and 3 reducers concurrently on each node, i.e. 6 containers per node. Is that a reasonable starting point?
In Ambari -> YARN -> Configs, can I set:
Memory allocated for all YARN containers on a node : 18 GB
Minimum container size (memory) : 1 GB
Maximum container size (memory) : 4 GB
If I set these values, Ambari suggests other configuration values based on them. Is it safe to follow those suggestions?
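For reference, I believe those three Ambari fields correspond to the following yarn-site.xml properties. The values below are just my proposed 18 GB / 1 GB / 4 GB from above converted to MB; please correct me if I have the mapping wrong:

    <!-- total memory the NodeManager can hand out to containers on this node -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>18432</value>
    </property>
    <!-- smallest container the scheduler will allocate -->
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>
    </property>
    <!-- largest container the scheduler will allocate -->
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>4096</value>
    </property>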
Likewise, in Ambari -> MapReduce2 -> Configs, I am not sure what to set for Map Memory, Reduce Memory, and AppMaster Memory.
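To be concrete about what I mean, I think those Ambari fields map to the mapred-site.xml properties below. The values shown are only placeholders (3 GB map/reduce containers with the JVM heap kept below the container size, 1 GB for the AppMaster), consistent with the 4 GB container ceiling above but not settings I am confident in -- that is exactly what I am asking about:

    <!-- memory requested per map container (placeholder value) -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>3072</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx2457m</value> <!-- heap ~80% of the container size -->
    </property>
    <!-- memory requested per reduce container (placeholder value) -->
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>3072</value>
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx2457m</value>
    </property>
    <!-- memory for the MapReduce ApplicationMaster container (placeholder value) -->
    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>1024</value>
    </property>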
Appreciate the insights.