My PySpark yarn-client application was killed by the cluster because of the yarn.scheduler.capacity.root.default-application-lifetime setting. What config should I use to declare my application's lifetime and avoid getting killed?
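For context, here is a sketch of how these lifetime properties are typically declared on the cluster side in capacity-scheduler.xml (the queue path root and the 3600-second values below are illustrative assumptions, not taken from my cluster):

    <!-- capacity-scheduler.xml on the ResourceManager; values are illustrative -->
    <property>
      <!-- Hard upper bound, in seconds, on how long any application in the
           queue may run; -1 disables the limit -->
      <name>yarn.scheduler.capacity.root.maximum-application-lifetime</name>
      <value>3600</value>
    </property>
    <property>
      <!-- Default lifetime, in seconds, applied to applications that do not
           declare their own; must not exceed the maximum above -->
      <name>yarn.scheduler.capacity.root.default-application-lifetime</name>
      <value>3600</value>
    </property>

As far as I can tell, a per-application lifetime is requested at submission time through YARN's ApplicationSubmissionContext#setApplicationTimeouts with ApplicationTimeoutType.LIFETIME, so what I am really asking is which Spark/PySpark setting, if any, maps onto that.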
To help you get the best possible solution, I have tagged our Spark experts @Bharati and @jagadeesan, who may be able to assist you further.
Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
Regards,
Diana Torres, Community Moderator