
Making Cloudera Manager aware of new physical memory

Explorer

We installed Cloudera Manager with only 8 GB of memory on the datanodes. Later the memory was increased; however, Cloudera Manager still shows that each datanode can only utilize 8 GB of physical memory (shown in the Hosts tab, and it also complains of overcommitment if more than 8 GB total is allocated). Is there a way to update Cloudera Manager with the newly added memory, or a configuration setting to adjust this?

 

Thank you!

1 ACCEPTED SOLUTION

Explorer

After playing around, I found that to get Cloudera Manager to recognize a change in RAM, you have to restart the cloudera-scm-agent on whichever host had its RAM changed. Once the restart completes, Cloudera Manager automatically updates the physical memory value.
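For reference, the restart described above can be run from a shell on the affected host. The service name `cloudera-scm-agent` comes from this thread; which init command applies depends on your OS (Ubuntu 14.04 and RHEL/CentOS 6 use sysvinit-style `service`, newer systemd distributions use `systemctl`):

```shell
# On the host whose RAM changed, restart the Cloudera Manager agent.

# sysvinit-style (e.g. Ubuntu 14.04, RHEL/CentOS 6):
sudo service cloudera-scm-agent restart

# systemd-style (e.g. RHEL/CentOS 7+):
sudo systemctl restart cloudera-scm-agent
```

After the agent reconnects, the Physical Memory value on the host's Details page should reflect the new total.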


6 REPLIES

Contributor

Your question is missing some details.

How many nodes do you have?

 

1) If you installed CM, CDH, and the embedded DB on a single node, you can increase the CM memory in the default config:

    $ sudo vi /etc/default/cloudera-scm-server

2) You can go to each service and increase its Java memory settings using the advanced configuration.
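As a sketch of option 1, the Cloudera Manager Server heap is set via `CMF_JAVA_OPTS` in that file. The 4 GB value below is only an example, not a recommendation; size it to your deployment:

```shell
# /etc/default/cloudera-scm-server (excerpt)
# Raise the CM Server JVM heap, e.g. to 4 GB:
export CMF_JAVA_OPTS="-Xmx4G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
```

The change takes effect after restarting the server, e.g. `sudo service cloudera-scm-server restart`.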

 

Explorer

Firstly, thank you for your response and I apologize for the lack of description. I'll attempt to be more descriptive here:

 

Where am I seeing this:

Go to the "Hosts" tab and click on any of the datanodes (4 total). In the top left of the datanode status page there is a "Details" section containing a "Host Agent" row, which displays "Physical Memory" as a bar that fills up (shown below).

 

[Screenshot: clouderaforums_memory.jpg — the "Physical Memory" bar in the Host Agent details]

 

What it states:

It states that 8 GB of memory is available in total. Given the memory we added to the datanodes, I feel it should now show more.

 

Why I feel this is wrong:

We initially had only 8 GB on each node, but then increased the memory on every node. Now, running "cat /proc/meminfo" on one of the datanodes shows at least 25 GB free. So, to me, Cloudera Manager somehow needs to re-evaluate how much memory is available on the datanodes.
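For anyone double-checking the same thing, the kernel's view of memory can be confirmed like this (values will of course differ per host):

```shell
# Show what the kernel reports for total and free memory on this host:
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo

# A human-readable summary (from the procps package, if installed):
free -h
```

If `MemTotal` here disagrees with the Physical Memory value in Cloudera Manager, the agent on that host is reporting stale data.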

 

 Does this help?

Super Collaborator

Hmm, the images are not showing up. I'll check with our forums administrator as to why they are not available, and hopefully we can then evaluate and reply.

Explorer

After playing around, I found that to get Cloudera Manager to recognize a change in RAM, you have to restart the cloudera-scm-agent on whichever host had its RAM changed. Once the restart completes, Cloudera Manager automatically updates the physical memory value.

Super Collaborator

Awesome, thanks for following up and identifying the fix!

Explorer
I'm using Cloudera Manager 5.10 on Ubuntu 14.04, and I want to reduce the memory allocated to HDFS and Kafka. Any help, please?