Created 12-08-2017 08:08 AM
When I run a Hive query, the YARN memory gets almost full, so the job takes more than 10 minutes to complete. I want to minimize this delay. Is it possible to increase the YARN memory?
Created 12-08-2017 05:29 PM
You should be able to over-subscribe memory by setting yarn.nodemanager.resource.memory-mb to a value higher than the actual physical memory on your nodes. Alternatively, you might want to check the value of yarn.scheduler.minimum-allocation-mb and lower it a bit to accommodate more containers.
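If you manage the cluster outside Ambari, these are plain yarn-site.xml properties. A minimal sketch with illustrative values only (the numbers below are examples, not recommendations for any particular hardware):

```xml
<!-- yarn-site.xml: example values only; tune to your own nodes -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- Total memory (MB) this NodeManager offers to containers -->
  <value>98304</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <!-- Smallest container YARN will allocate; lower it to fit more containers -->
  <value>1024</value>
</property>
```

After changing these, the NodeManager (and ResourceManager, for the scheduler property) must be restarted for the values to take effect.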
Created 12-11-2017 06:15 PM
Once you go to the YARN Configs tab, you can search for those properties. In the latest versions of Ambari they show up in the Settings tab (not the Advanced tab) as sliders. You can increase a value by moving its slider to the right, or click the edit pen to enter a value manually.
Created 12-11-2017 12:32 AM
Please share the total system memory of your Hadoop cluster. If you have three nodes running the DataNode and NodeManager with 128GB RAM per node, you can set the total YARN container memory and the min/max container sizes from the Ambari web UI. The right values depend on the available system memory; I would generally recommend 1024MB or 2048MB for the minimum container size, 4GB, 8GB, or higher for the maximum container size, and around 90GB to 100GB for total YARN container memory per node. Of course, the total YARN container memory depends on how much memory each DataNode can actually spare for the NodeManager.
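The sizing rule above is simple arithmetic; here is a hedged sketch of it (the 20GB per-node reserve for the OS and agents is an assumption for illustration, not a fixed rule):

```python
def yarn_memory_per_node(node_ram_gb, os_reserve_gb=20):
    """Rough per-node YARN container memory: node RAM minus a reserve
    for the OS, DataNode, and Ambari agents (reserve size is an assumption)."""
    return node_ram_gb - os_reserve_gb

def max_containers(yarn_mem_gb, min_container_gb):
    """Upper bound on concurrent containers at the minimum container size."""
    return yarn_mem_gb // min_container_gb

# A 128GB node with a 20GB reserve leaves 108GB; the advice above is a
# bit more conservative at 90-100GB.
print(yarn_memory_per_node(128))   # 108
print(max_containers(100, 1))      # 100 containers at a 1GB minimum
print(max_containers(100, 2))      # 50 containers at a 2GB minimum
```

This also shows why lowering the minimum container size admits more concurrent containers from the same memory pool.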
Created 12-11-2017 01:40 PM
Hi @Peter Kim,
Thanks for the response.
I have a 7-node cluster with 128GB RAM per node, and my total YARN memory is 840GB. Can I increase this?
Created 12-12-2017 01:08 AM
No. 840GB means each node is already giving almost 120GB of its RAM to YARN, which is not an ideal way to run the system, because each node needs some free memory for other services, such as OS processes and the agents used by Ambari. Start with 90GB to 100GB per node, then adjust gradually from there.
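Dividing the total back out per node shows why 840GB is aggressive for this cluster (a sketch; the exact headroom each node needs varies with what else runs on it):

```python
NODES = 7
NODE_RAM_GB = 128
total_yarn_gb = 840

per_node_yarn = total_yarn_gb / NODES    # YARN memory per node
headroom = NODE_RAM_GB - per_node_yarn   # what is left for OS, agents, etc.
print(per_node_yarn, headroom)           # 120.0 8.0 -> only 8GB spare

# The suggested 90-100GB per node keeps 28-38GB of headroom instead:
print(NODE_RAM_GB - 100, NODE_RAM_GB - 90)  # 28 38
```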