06-10-2017 04:38 PM
06-10-2017 08:21 PM
I would recommend going with the default memory allocation configuration unless you feel it must be reduced on a particular host. By default, CM allocates memory depending on the number of services you have configured on each node.
To see the memory allocated to each service, you can navigate to
CM -> Hosts -> "select each host one by one" -> Resources -> Memory
Note: The node where you have configured CM will allocate additional memory for the CM Management services.
06-11-2017 02:33 AM
Thank you for your answer.
So if I understand correctly, there is no way to change the default 20%?
Is there a way to see the maximum memory Impala can consume,
and what happens when it collides with the YARN memory?
06-12-2017 04:40 AM - edited 06-12-2017 04:42 AM
The property you are looking for is named "Memory Overcommit Validation Threshold", I think (in the host configuration).
By default the value is 0.8 (80%), which means that if more than 80% of a node's memory is allocated to services, Cloudera Manager will raise a warning.
First: this is only a warning, and informative. It will not interfere with how the cluster works; you can overcommit the memory if you know what you are doing.
Second: you can modify this value, for example to 0.9 (90%), or to whatever value is suited to your case.
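As a quick sanity check, the warning condition described above can be sketched like this (the function name and the numbers are hypothetical, just for illustration; the real check is done internally by Cloudera Manager):

```python
def overcommit_warning(total_gb, allocated_gb, threshold=0.8):
    """Return True when memory allocated to services exceeds
    threshold * total physical memory (hypothetical sketch of
    CM's Memory Overcommit Validation Threshold check)."""
    return allocated_gb > threshold * total_gb

# 100 GB host with 85 GB allocated to services:
print(overcommit_warning(100, 85))        # exceeds the default 0.8 -> True
# Raising the threshold to 0.9 would silence the warning:
print(overcommit_warning(100, 85, 0.9))   # False
```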
For the last question: if the memory allocated to YARN and Impala together is greater than the amount of memory actually available, AND both services try to consume that memory at the same time, then you can expect issues.
06-16-2017 08:35 AM
Thanks for the help.
What happens if I have 100 GB of RAM and I allocate 60 GB to YARN,
and someone runs a very intensive Impala job? Will it take resources from YARN?
Is there a way to limit Impala? (Is it a soft or a hard limit?)
Thanks again :)
06-19-2017 04:39 AM
For Impala there is a parameter that limits the amount of memory each daemon can use:
Impala > Impala Daemon Default Group > Resource Management > mem_limit.
If you set 10 GB here, then each Impala daemon will use up to 10 GB of memory.
This amount of memory is not part of the memory allocated to YARN, so you should be careful.
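To illustrate why you need to be careful: since mem_limit is configured independently of YARN's allocation, the operator has to make sure the pieces fit within physical RAM. A rough budgeting sketch (all figures hypothetical, loosely based on the 100 GB / 60 GB example above):

```python
# Rough per-node memory budget (hypothetical figures).
# Impala's mem_limit is NOT carved out of the YARN allocation,
# so the sum of all reservations must stay under physical RAM.

total_ram_gb = 100
yarn_gb = 60          # memory allocated to YARN NodeManager containers
impala_mem_limit_gb = 10   # Impala daemon mem_limit
os_and_other_gb = 10  # OS, HDFS DataNode, agents, etc. (rough estimate)

committed = yarn_gb + impala_mem_limit_gb + os_and_other_gb
headroom = total_ram_gb - committed
print(f"committed={committed} GB, headroom={headroom} GB")
# If mem_limit were raised to 50 GB instead, committed would be 120 GB,
# more than the 100 GB physically available: when both services push to
# their limits at the same time, expect swapping or out-of-memory errors.
```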