msh

yarn java outofmemory

Hi,
I have a cluster of 5 servers, each with 16 GB of memory. When I submit jobs to YARN (through Hue/Oozie/Pig), a lot of them fail with a Java heap space OutOfMemoryError. Increasing the memory through the Cloudera configuration did reduce the number of failing mappers/reducers, but a few still fail. Also note that the input I'm working on is only around 10 GB, which is very small.
Also, if it matters, the Cloudera server service sometimes crashes as well when I submit jobs through the same process. I found no logs for this, though, and it is less important to me at the moment.

Thanks, Matan.

Re: yarn java outofmemory

Could you post a few stack traces from the failed task logs here?

What are your Map and Reduce task heap memory settings set to?
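
For reference, on YARN the per-task container size and the child JVM heap are configured separately, and the -Xmx heap has to fit inside the container (a common rule of thumb is to leave roughly 20% headroom). Below is a minimal sketch of overriding these per job from a Pig script, since you mention submitting through Pig; the 2048 MB / -Xmx1638m values are only placeholders to illustrate the relationship, not a recommendation for your cluster:

-- Per-job memory overrides at the top of a Pig script (example values only).
-- The container size (memory.mb) must be at least as large as the child
-- JVM heap (-Xmx), typically with ~20% headroom for non-heap usage.
set mapreduce.map.memory.mb 2048;
set mapreduce.map.java.opts '-Xmx1638m';
set mapreduce.reduce.memory.mb 2048;
set mapreduce.reduce.java.opts '-Xmx1638m';
-- ... the rest of the Pig script (LOAD / GROUP / STORE) follows unchanged ...

The same property names can also be set cluster-wide through the MapReduce/YARN configuration in Cloudera Manager instead of per script.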