You should be able to over-subscribe memory by setting yarn.nodemanager.resource.memory-mb to a value higher than the node's actual physical memory. Alternatively, you might want to check the value of yarn.scheduler.minimum-allocation-mb and lower it a bit to accommodate more containers.
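As a sketch, both properties live in yarn-site.xml (the values below are illustrative assumptions, not recommendations for your hardware):

```xml
<!-- yarn-site.xml (illustrative values only) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- Total memory YARN may allocate on this node; setting it above
       physical RAM over-subscribes memory. -->
  <value>131072</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <!-- Smallest container the scheduler will hand out; lowering it
       lets more containers fit in the same memory budget. -->
  <value>1024</value>
</property>
```

If you manage the cluster through Ambari, change these from the Ambari UI rather than editing the file by hand, so the values are not overwritten on the next config push.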
Once you go to the YARN Configs tab, you can search for those properties. In recent versions of Ambari they show up in the Settings tab (not the Advanced tab) as sliders. You can increase a value by moving the slider to the right, or click the edit (pen) icon to enter a value manually.
It would help to know the total system memory of your Hadoop cluster. If you have three nodes running a DataNode and NodeManager with 128GB of RAM each, you can set "Memory allocated for all YARN containers on a node" and the minimum/maximum container sizes from Ambari Web. The right values depend on the available system memory; as a rule of thumb, I recommend 1024MB or 2048MB for the minimum container size, 4GB, 8GB, or higher for the maximum container size, and 90GB to 100GB for all YARN containers. Of course, the all-containers value depends on how much memory each DataNode actually has available for the NodeManager.
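Expressed as a yarn-site.xml fragment, the rule of thumb above might look like this for a 128GB node (the specific numbers are just the example values from the preceding paragraph, not tuned settings):

```xml
<!-- yarn-site.xml: example sizing for a 128GB node, leaving OS headroom -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>98304</value>   <!-- 96GB for all YARN containers on the node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>    <!-- 2GB minimum container size -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>    <!-- 8GB maximum container size -->
</property>
```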
No. With 840GB total, a single node has almost 120GB of RAM, and allocating all of it to YARN is not an ideal way to run the system, because each node needs some free memory for other processes, such as OS services or the agents used by Ambari. Just start with 90GB to 100GB per node, then adjust gradually from there.