Hi, I have a cluster of 5 servers with 16 GB of memory. When I submit jobs to YARN (through Hue/Oozie/Pig), a lot of the mappers/reducers fail with out-of-memory exceptions (Java heap space). Increasing the memory through the Cloudera configuration did reduce the number of failing mappers/reducers, but a few still fail. Note that the input I'm working with is only around 10 GB, which is very small.

Also, if it matters, the Cloudera server service sometimes crashes as well when I submit jobs through the same process. I couldn't find any logs for this, though, and it's less important to me right now.
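For reference, the kind of per-job memory settings I've been experimenting with look like the fragment below, placed at the top of the Pig script. These are the standard MRv2/YARN property names; the values are just examples, not what I'm actually running, and on an MRv1 cluster the property names would differ:

```
-- Example per-job overrides in a Pig script (values are illustrative):
SET mapreduce.map.memory.mb 2048;           -- YARN container size for each map task
SET mapreduce.map.java.opts '-Xmx1638m';    -- map JVM heap, ~80% of the container
SET mapreduce.reduce.memory.mb 4096;        -- YARN container size for each reduce task
SET mapreduce.reduce.java.opts '-Xmx3276m'; -- reduce JVM heap, ~80% of the container
```

My understanding is that the `java.opts` heap should stay comfortably below the `memory.mb` container size, or YARN kills the container before the JVM ever throws a heap error. Is that the right relationship to be tuning here?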