I have a four-node cluster running on CentOS 6.5; all the nodes are VMs, so adjusting memory is simple. One node has 20 GB of RAM and the other three have 4 GB each. I installed everything using Cloudera Manager and am now getting a configuration warning: "Memory on host xxx.xxx.xxx.xxx is overcommitted. The total memory allocation is 18.3 GiB bytes but there are only 19.5 GiB bytes of RAM (3.9 GiB bytes of which are reserved for the system). Visit the Resources tab on the Host page for allocation details. Reconfigure the roles on the host to lower the overall memory allocation. Note: Java maximum heap sizes are multiplied by 1.3 to approximate JVM overhead."
On my first attempt at building a cluster I got all the web interfaces working, but when I restarted my servers to adjust the amount of memory, everything seemed to stop working and I couldn't load the web page again or see the services.
I've read through the proper order for stopping services in Cloudera Manager, but how do you restart the NameNode and DataNode servers without losing any functionality?
Cloudera Manager adds up the memory allocated to the different roles on each host and raises that warning when it detects that memory is overcommitted. The calculation also reserves some memory for the OS, 20% by default. If you feel this needs to be adjusted (in your case I don't think it does), you can do so by changing the "Memory Overcommit Validation Threshold" setting in CM: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_props_host.html#conc...
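To make the arithmetic in your warning concrete, here is a rough sketch of the check as described in the warning text itself. The exact internals of CM's validation are not public; the 20% OS reservation and the 1.3x heap multiplier are the figures CM's own message reports, and the heap numbers in the example are made up to match your totals.

```python
# Rough sketch of CM's memory-overcommit check, based only on the
# figures quoted in the warning message (not CM's actual source).

JVM_OVERHEAD_FACTOR = 1.3   # CM multiplies Java max heap sizes by ~1.3
OS_RESERVED_FRACTION = 0.2  # default "Memory Overcommit Validation Threshold"

def is_overcommitted(host_ram_gib, heap_allocations_gib):
    """Return True if the estimated allocation exceeds usable RAM."""
    usable = host_ram_gib * (1 - OS_RESERVED_FRACTION)
    estimated = sum(h * JVM_OVERHEAD_FACTOR for h in heap_allocations_gib)
    return estimated > usable

# Numbers close to the warning above: 19.5 GiB of RAM leaves
# 15.6 GiB usable after the 20% reservation, and 14.1 GiB of
# configured heaps * 1.3 = ~18.3 GiB estimated, so it trips.
print(is_overcommitted(19.5, [8.0, 4.0, 2.1]))  # True
```

So the fix is either to lower the role heap sizes on that host until the 1.3x total fits under 80% of RAM, or (rarely) to raise the threshold.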
If you have HDFS High Availability enabled, you can do a "rolling restart" to restart the service without affecting uptime. Go to HDFS Service -> Actions -> Rolling Restart.
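If you prefer to script it, the same rolling restart can be triggered through the CM REST API's `rollingRestart` service command (note that rolling restart requires a Cloudera Enterprise license). A minimal sketch, where the host, port, API version, and cluster/service names are placeholders you would replace with your own:

```python
# Sketch of building the CM REST API endpoint for a rolling restart.
# Hostname, cluster name, and service name below are placeholders.

def rolling_restart_url(cm_host, cluster, service,
                        api_version="v11", port=7180):
    """Build the CM API URL for a service's rollingRestart command."""
    return ("http://{0}:{1}/api/{2}/clusters/{3}/services/{4}"
            "/commands/rollingRestart").format(
                cm_host, port, api_version, cluster, service)

url = rolling_restart_url("cm-host.example.com", "cluster1", "hdfs")
print(url)
# You would POST to this URL with your CM admin credentials, e.g. with
# the requests library:
#   requests.post(url, auth=("admin", "admin"))
```

The web UI path (HDFS Service -> Actions -> Rolling Restart) does the same thing and is the simpler option for a one-off restart.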