Hi,
We have a cluster of 5 nodes currently running HDP 2.0. Recently we noticed that YARN is reporting 2000% memory usage.
We currently allocate 2 GB for YARN memory, but the metrics show 40 GB used by our current job. All nodes are still "alive". Will this be a problem? Should we increase the memory allocated to the YARN cluster?
![2178-ambari-elephant2-1.png](https://community.cloudera.com/t5/image/serverpage/image-id/21571iC7A61D811B59DD17/image-size/medium?v=v2&px=400)
![2179-ambari-elephant-1.png](https://community.cloudera.com/t5/image/serverpage/image-id/21572i92C993BCD3F83B31/image-size/medium?v=v2&px=400)
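For reference, this is a minimal sketch of the yarn-site.xml settings we believe control the allocation in question (the 2048 MB value reflects our current per-node setting; the property names are standard YARN configuration, and the pairing of the two values is our assumption about our setup):

```xml
<!-- yarn-site.xml: sketch of the memory settings we think are relevant (assumed, not verified) -->
<configuration>
  <!-- Total memory each NodeManager can hand out to containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <!-- Largest single container allocation the scheduler will grant -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
</configuration>
```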