Created 09-12-2018 09:15 AM
Hi,
I am unable to launch more Spark jobs on my cluster because of the error message below. I still have 2.22TB free according to the YARN UI. I run HDP 2.6.
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
What's the way forward?
Created 09-12-2018 09:26 AM
The message you shared normally indicates that your process does not have enough memory (it is not about disk space).
Can you please share the exact job you are running, where exactly you see this message, and the complete message? For example, do you see any "hs_err_pid*" file created on the problematic host?
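If you are not sure where the JVM wrote its fatal error log, something like the following should locate recent ones (the search paths here are only guesses; the file usually lands in the process working directory):

# Search common locations for JVM fatal error logs written in the last day
find /var/log /hadoop/yarn /tmp -name "hs_err_pid*.log" -mtime -1 2>/dev/null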
Also, please let us know the size of your Spark executor memory. Can you try reducing the executor memory and running the job again? "Unable to create new threads" suggests we may need to reduce the heap a bit, as in the sketch below.
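As a rough sketch (not your exact command), executor memory can be capped on the spark-submit command line; the sizes, executor counts, and the your_job.py name below are placeholders to tune for your cluster:

# Hypothetical example: cap each executor at 4g instead of a larger default
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 2 \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  your_job.py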
Created 09-12-2018 09:56 AM
Yes, I see an "hs_err_pid*" file. Please find the log attached. Also, the 2.22TB is the unutilised RAM.
Created 09-12-2018 10:24 AM
@Joshua Adeleke I'm asking this out of curiosity: how would reducing heap space provide enough memory to create a new thread?
Created 09-12-2018 01:13 PM
@rabbit s Reducing the memory specs for the Spark executors will reduce the total memory consumed, which should eventually allow more jobs (new threads) to be spun up. Also, every JVM thread needs native (off-heap) memory for its stack and counts against the OS per-user process/thread limit, so "Cannot create GC thread" can occur even with RAM free; see the quick checks below.
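As a side note, these standard Linux checks (run on the affected NodeManager host, under the user that runs the containers) show the OS-level limits that commonly trigger this error:

# Max processes/threads the current user may create
ulimit -u
# System-wide ceiling on thread/process IDs
cat /proc/sys/kernel/pid_max
# Per-process memory map limit (each thread stack adds maps)
cat /proc/sys/vm/max_map_count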