I am getting a memory limit exceeded error while running a Spark job. I have tried a couple of things, but still no luck. I want to understand why memory usage climbs to 56.3 GB before the job fails. Any leads toward a solution would be really helpful.
- Executor memory: 25 GB
- executor.memoryOverhead: 0.1 (default)
- Total nodes: 250
- 32 cores per node
- 96 GB memory per node
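For context, a minimal sketch of what this configuration looks like when set programmatically, assuming PySpark (the app name is illustrative and the job itself is omitted; the overhead comment describes Spark's default behavior on YARN/Kubernetes rather than an explicit setting):

```python
from pyspark.sql import SparkSession

# Sketch of the current configuration (simplified, illustrative).
# With spark.executor.memoryOverhead left unset, Spark defaults it to
# max(384 MiB, 0.10 * executor memory) = 2.5 GB here, so each executor
# container requests roughly 25 GB + 2.5 GB = 27.5 GB from the cluster.
spark = (
    SparkSession.builder
    .appName("my-job")  # illustrative name
    .config("spark.executor.memory", "25g")
    .getOrCreate()
)
```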
Could you please share the full error stack trace for further analysis? Also, have you tried reducing the executor memory as a trial?
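In case it helps, a minimal sketch of such a trial run, assuming PySpark (the app name and values are illustrative, not a recommendation):

```python
from pyspark.sql import SparkSession

# Trial run with smaller executors and an explicitly set overhead, so
# the total per-executor request (heap + overhead) is lower and easier
# to reason about when comparing against the per-node memory budget.
spark = (
    SparkSession.builder
    .appName("memory-trial")  # hypothetical app name
    .config("spark.executor.memory", "20g")
    .config("spark.executor.memoryOverhead", "2g")
    .getOrCreate()
)
```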