Exit code 143 is related to memory/GC issues. Your default mapper/reducer memory settings may not be sufficient for a large data set, so try configuring higher ApplicationMaster, map, and reduce memory when a large YARN job is invoked.
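As a sketch, the memory settings can be raised per job with the standard MapReduce/YARN properties. The jar name, driver class, and the specific megabyte values below are placeholders — tune them to your cluster's container sizes (heap opts are typically set to roughly 80% of the container memory):

```shell
# Illustrative invocation -- my-job.jar and MyDriver are placeholders.
hadoop jar my-job.jar MyDriver \
  -Dyarn.app.mapreduce.am.resource.mb=4096 \
  -Dmapreduce.map.memory.mb=4096 \
  -Dmapreduce.map.java.opts=-Xmx3276m \
  -Dmapreduce.reduce.memory.mb=8192 \
  -Dmapreduce.reduce.java.opts=-Xmx6553m \
  <input> <output>
```

The same properties can instead be set cluster-wide in mapred-site.xml if every job needs the larger containers.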
When I try to execute my MapReduce program, it fails with errors like "Timed out after 600 secs", "Container killed by ApplicationMaster.", and "Container killed on request. Exit code is 143". The job also shows map at 100% while reduce is stuck at 72%.
Exit code 143 can have multiple causes. Yesterday I hit it in a Sqoop job because of a task timeout; adding -Dmapreduce.task.timeout=0 to my Sqoop command resolved the issue.
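For illustration, the timeout override goes in as a generic Hadoop property, which Sqoop requires to appear immediately after the tool name, before any tool-specific options. The connection string, table, and target directory here are made-up placeholders:

```shell
# Hypothetical Sqoop import -- host, database, and paths are placeholders.
# Setting mapreduce.task.timeout=0 disables the task progress timeout entirely.
sqoop import \
  -Dmapreduce.task.timeout=0 \
  --connect jdbc:mysql://dbhost/mydb \
  --table my_table \
  --target-dir /user/hadoop/my_table
```

Note that disabling the timeout only masks the symptom if a task is genuinely hung; a large finite value (e.g. 1800000 ms) is often a safer choice.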
18/07/12 06:40:28 INFO mapreduce.Job: Job job_1530133778859_8931 running in uber mode : false
18/07/12 06:40:28 INFO mapreduce.Job:  map 0% reduce 0%
18/07/12 06:45:57 INFO mapreduce.Job: Task Id : attempt_1530133778859_8931_m_000005_0, Status : FAILED
AttemptID:attempt_1530133778859_8931_m_000005_0 Timed out after 300 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143