Created 09-12-2018 09:15 AM
I am unable to launch more Spark jobs on my cluster due to the error message below. I still have 2.22 TB free according to the YARN UI. I run HDP 2.6.
```
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
```
What's the way forward?
The message you shared normally indicates that your process does not have enough memory; it is not about disk space.
So can you please share the exact job you are running, where exactly you see the message above, and the complete error output? Also, do you see any "hs_err_pid*" file created on the problematic host?
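A quick way to look for those crash logs and the related OS limits is something like the sketch below. The directories are typical defaults and are only assumptions; adjust them to your NodeManager and Spark working directories. Note that "Cannot create GC thread" is frequently an OS resource limit (max processes/threads for the service user) or native memory exhaustion rather than Java heap pressure, so the limits are worth checking too.

```shell
# Search common locations for JVM fatal-error logs (hs_err_pid*.log).
# These paths are illustrative defaults; adjust for your cluster layout.
for d in /tmp /var/log/hadoop /var/log/hadoop-yarn; do
  [ -d "$d" ] && find "$d" -name 'hs_err_pid*.log' 2>/dev/null
done

# "Cannot create GC thread" often means the OS refused a new thread:
# check the max user processes limit and the system-wide thread cap.
ulimit -u
cat /proc/sys/kernel/threads-max 2>/dev/null || true
```

If `ulimit -u` for the YARN/Spark service user is low relative to the number of JVMs and threads on the host, raising it in `/etc/security/limits.conf` is usually the fix rather than changing any Java heap setting.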
Also, please let us know the size of your Spark executor memory. Can you try reducing the executor memory and then run again? "Unable to create new threads" suggests the JVM is running out of native memory for thread stacks, so we might need to reduce the heap a bit to leave more room.
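For example, you could lower the per-executor heap in `spark-defaults.conf` (or pass the equivalent `--executor-memory` / `--conf` flags to `spark-submit`). The values below are purely illustrative, not recommendations for your workload:

```
# spark-defaults.conf -- illustrative values, tune for your cluster.
# Smaller executor heaps leave more native memory per NodeManager host
# for JVM thread stacks and GC worker threads.
spark.executor.memory                 4g
spark.executor.cores                  2
spark.yarn.executor.memoryOverhead    512
```

Keep in mind that YARN allocates `spark.executor.memory` plus `spark.yarn.executor.memoryOverhead` per container, so both figures matter when sizing against the host's physical RAM.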