Created 05-23-2019 10:00 AM
The YARN application status was FAILED, and the diagnostics showed the following.
Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.
The YARN logs also include the following messages.
ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver XX.XX.XX.XX:46869 disassociated! Shutting down.
INFO DiskBlockManager: Shutdown hook called
Could you tell me what I should check to solve this problem?
Best Regards.
Created 05-24-2019 04:36 AM
A container killed with exit code 143 is most often due to insufficient memory overhead.
If you have not specified spark.yarn.driver.memoryOverhead or spark.yarn.executor.memoryOverhead in your spark-submit, add these parameters; if you have already specified them, increase the configured values.
Please refer to this link to decide on a suitable overhead value.
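As a rough illustration only (the class name, jar name, and memory values below are placeholders, not taken from your job), the overhead settings can be passed to spark-submit like this:

# Placeholder values; tune them to your workload and cluster capacity.
# The overhead values are in MB of off-heap memory added to each container.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 8g \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  --class com.example.MyApp \
  my-app.jar

A common starting point is around 10% of the corresponding driver/executor memory (which is Spark's own default), increased further if containers are still being killed.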
Created 05-24-2019 08:11 AM
Shu-san
Thank you for your prompt response.
I will share this with my team members.
I'll let you know if I have any questions.
Best Regards.
Created 09-05-2019 08:34 AM
Hi,
Exit code 143 is related to memory/GC issues. Your default mapper/reducer memory settings may not be sufficient to process a large data set, so try setting higher ApplicationMaster (AM), mapper, and reducer memory when a large YARN job is invoked; see the example below.
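For example (the jar, class name, and memory values are placeholders, and the -D options are only picked up if the job's main class goes through ToolRunner/GenericOptionsParser), the AM, mapper, and reducer memory can be raised per job like this:

# Placeholder values; keep them within yarn.scheduler.maximum-allocation-mb,
# and keep the java.opts heap (-Xmx) at roughly 80% of the container size.
hadoop jar my-job.jar com.example.MyJob \
  -D yarn.app.mapreduce.am.resource.mb=2048 \
  -D mapreduce.map.memory.mb=4096 \
  -D mapreduce.map.java.opts=-Xmx3276m \
  -D mapreduce.reduce.memory.mb=8192 \
  -D mapreduce.reduce.java.opts=-Xmx6553m \
  <other job arguments>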
For more details, please refer to this link.
Thanks
AKR