Please suggest how to solve this issue; I am stuck and not sure what could be wrong. The CM dashboard looks fine and doesn't show any configuration issues. Screenshot below.
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 4 times, most recent failure: Lost task 0.3 in stage 4.0 (TID 75, datanode03.rnd.company.net): ExecutorLostFailure (executor 2 lost)
The driver log does not tell us much, other than that it tried to run the task on an executor 4 times and every attempt failed; it does not include the reason why.
Could you check your executor logs instead? They're usually visible in the Spark History Server Web UI: click through to your application, visit the Executors tab, and click the stderr link for the lost executor. Could you post that log from the failing application? It should contain the reason why the tasks that ran on that executor failed.
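If the executor has already been lost and its stderr link is dead in the UI, you can also pull the aggregated container logs from YARN on the command line (assuming the job ran on YARN with log aggregation enabled; the application ID below is a placeholder, copy yours from the Spark UI or `yarn application -list`):

```shell
# List finished applications to find your application ID
yarn application -list -appStates FINISHED,FAILED,KILLED

# Fetch all container logs (driver + executors) for the application;
# replace application_XXXXXXXXXXXXX_XXXX with your actual ID
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX > app_logs.txt

# Then search the dump for the executor failure reason, e.g.
grep -i -A 5 "error\|exception\|killed" app_logs.txt | less
```

Common causes that show up in these logs are containers killed for exceeding memory limits (look for "Container killed by YARN for exceeding memory limits") or OutOfMemoryError in the executor JVM.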