Why is my cluster memory less than my total physical memory?

Explorer

We have a 2-node cluster (1 master: 4 CPU, 16 GB RAM; 1 data node: 8 CPU, 30 GB RAM). However, the Ambari console shows total cluster memory of only 22 GB. Is there a way to allocate more cluster memory (around 36 GB) out of the 46 GB of physical memory the master and data node have together? Moreover, the number of containers is only 5, even though 8 vcores are available. I have attached a screenshot for reference. Please suggest a way to improve the cluster's resource utilization. Thank you in advance.

1 ACCEPTED SOLUTION

Expert Contributor

@Saravanan Ramaraj

I assume the question is about YARN's total memory.

This happens because Ambari bases its calculations on the smallest-capacity node, as it expects a homogeneous cluster.

In this case, however, the cluster is heterogeneous: 1 master (4 CPU, 16 GB RAM) + 1 data node (8 CPU, 30 GB RAM).

- Ambari therefore picks the 16 GB node, assumes the 2nd node is the same size, and calculates YARN's NodeManager (NM) memory from that. I assume both nodes are running a NodeManager.

- I believe yarn.nodemanager.resource.memory-mb is set to 11 GB, giving 22 GB (11 * 2) in total, which is more than a single node's 16 GB. Naively you might expect 16 * 2 = 32 GB, but Ambari subtracts the memory needed by processes running outside the YARN workspace (e.g., the ResourceManager, HBase, etc.), so the total ends up below 32 GB, which is expected.
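To sanity-check these numbers, you can query the ResourceManager's REST API, which reports the total memory registered across all NodeManagers. A minimal sketch, assuming the RM web UI is on its default port 8088 (the hostname is a placeholder):

# /ws/v1/cluster/metrics reports totalMB: the sum of every NodeManager's
# yarn.nodemanager.resource.memory-mb (here ~11264 MB * 2 NMs = ~22 GB).
curl -s http://resourcemanager-host:8088/ws/v1/cluster/metrics | grep -o '"totalMB":[0-9]*'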

It's a good idea to run homogeneous clusters.

===================================================================

However, you can make use of Config Groups in Ambari to handle the different hardware profiles.

You can create 2 Config Groups (CGs), each containing one node. By default, there is a 'default' CG, visible on the YARN configs page, that contains both nodes.

How to create a CG is demonstrated using HBase here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_using_host_c...
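If you prefer scripting this, Ambari also exposes Config Groups through its REST API. A rough sketch only, assuming Ambari listens on its default port 8080 with admin credentials; the cluster name, hostname, and tag values are placeholders, and the payload shape may vary between Ambari versions:

# Create a config group 'New' holding the 2nd node, with a yarn-site
# override that applies to that node only (value in MB, ~24 GB here).
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  'http://ambari-host:8080/api/v1/clusters/MyCluster/config_groups' \
  -d '[{"ConfigGroup": {
        "cluster_name": "MyCluster",
        "group_name": "New",
        "tag": "Yarn",
        "description": "YARN override for the 30 GB node",
        "hosts": [{"host_name": "datanode1.example.com"}],
        "desired_configs": [{
          "type": "yarn-site",
          "tag": "nm-mem-override-1",
          "properties": {"yarn.nodemanager.resource.memory-mb": "24576"}}]}}]'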

I ran the following test to reduce the memory for one node; you can similarly bump up the memory for the 30 GB node.

- Started with a 2-node cluster where Ambari had given 12 GB to each NM, for a total capacity of 24 GB.

[Screenshot: 39984-screen-shot-2017-10-26-at-111536-pm.png]

- Created a CG named 'New' and added the 2nd node to it, then changed yarn.nodemanager.resource.memory-mb for the 2nd node under 'New' from ~12 GB to ~8 GB.

[Screenshot: 39985-screen-shot-2017-10-26-at-103835-pm.png]

- State of Node 1 under 'default' CG:

[Screenshot: 39986-screen-shot-2017-10-26-at-103845-pm.png]

- Restarted "Affected components" as prompted by Ambari after the above changes.

- The total memory now shows 20 GB instead of 24 GB; this can also be confirmed from the command line, as sketched after the screenshot below.

[Screenshot: 39987-screen-shot-2017-10-26-at-113313-pm.png]
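To verify the new per-node capacities without the UI, the YARN CLI works too. A small sketch; the node ID is a placeholder (take real IDs from the first command's output):

# List all NodeManagers known to the ResourceManager.
yarn node -list -all
# Show one node's details, including Memory-Capacity.
yarn node -status datanode1.example.com:45454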

Hope this helps.


4 REPLIES

@Saravanan Ramaraj

Ideally, Ambari should show your total RAM and CPU information, unless you have an issue with the Ambari agent.

Can you provide the free -m output from both of your nodes and add screenshots of the Ambari cluster information?
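For example, something like this on each node (a minimal sketch; free -m reports sizes in MiB):

# Run on both the master and the data node.
hostname; free -m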


Explorer

Thanks so much for your detailed reply; it really helps!

Expert Contributor

Thanks. Glad to know that it helped.