Hi @Prav
You're right, it's pretty generic, but this usually occurs when your containers are killed due to memory issues. That can either be a java.lang.OutOfMemoryError thrown by the executor running in the container, or the container's JVM process growing beyond its physical memory limit. In other words, if your application was configured with 1 GB of executor memory (spark.executor.memory) and 1 GB of executor memory overhead (spark.executor.memoryOverhead), then the container request would be 2 GB. If the process' memory goes beyond 2 GB, YARN will kill that process.
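As a rough sketch of what that configuration looks like on the command line (the class name, jar, and sizes below are placeholders, not taken from your job):

# Minimal spark-submit sketch: the two memory settings together determine
# the container size YARN allocates for each executor (here ~2 GB).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --conf spark.executor.memory=1g \
  --conf spark.executor.memoryOverhead=1g \
  my-app.jar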
Really, the best way to identify the issue is to collect the YARN logs for your application and go through them:
yarn logs -applicationId application_1564435356568_349499
You would just run that from your edge node or one of the NodeManager machines (assuming you're running Spark on YARN).
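If the output is large, it can help to dump it to a file and search for the usual memory-related markers (the exact wording of the messages varies by Spark/YARN version, so treat these patterns as a starting point):

# Save the aggregated logs and scan for common memory-related messages.
yarn logs -applicationId application_1564435356568_349499 > app.log
grep -iE "OutOfMemoryError|killed|exceeding" app.log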