The data transformation with Spark runs successfully, but the job always fails in the last step, when writing the data to Parquet files.
Below is an example of the error message:
23/01/15 21:10:59 678 ERROR TaskSchedulerImpl: Lost executor 2 on 100.100.18.155:
The executor with id 2 exited with exit code -1(unexpected).
The API gave the following brief reason: Evicted
The API gave the following message: Pod ephemeral local storage usage exceeds the total limit of containers 10Gi.
I think there is no problem with my Spark configuration. The problem is the Kubernetes ephemeral local storage size limit, which I do not have permission to change.
Can someone explain why this happened and what a possible solution for it would be?
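For context on why this eviction occurs: Spark executors spill shuffle and sort data to their local scratch directories, and on Kubernetes those default to an emptyDir volume, whose usage counts against the pod's ephemeral-storage limit. A large final write stage with heavy shuffle can therefore push the pod past the 10Gi limit and the kubelet evicts it. One common workaround, when the node-level limit itself cannot be raised, is to back the executor scratch space with a PersistentVolumeClaim instead, so spill no longer counts as ephemeral storage. A hedged sketch follows; the configuration keys are from the Spark-on-Kubernetes documentation (Spark 3.1+), while the API server address, StorageClass name, and size are placeholder values you would need to adapt to your cluster:

```shell
# Sketch: mount an on-demand PVC as the executor's local directory.
# Volume names prefixed "spark-local-dir-" are picked up by Spark as
# scratch space automatically, so spill lands on the PVC rather than
# on the pod's ephemeral emptyDir storage.
# "fast-disks" is a hypothetical StorageClass; adjust for your cluster.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName=OnDemand \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.storageClass=fast-disks \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.sizeLimit=50Gi \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.path=/data/spark-local \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.readOnly=false \
  ... # the rest of your existing spark-submit arguments
```

Alternatively, reducing the amount of data spilled per executor can keep usage under the limit: increase the number of executors, raise the shuffle partition count (spark.sql.shuffle.partitions), or repartition before the final write so each task handles a smaller slice.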
Thanks for engaging the Cloudera Community, and thank you for the detailed description of the problem. Your ask is valid, yet reviewing it over a community post isn't a suitable approach. Would it be feasible for you to engage Cloudera Support, so that our team can work with you via a screen-sharing session and log exchange, neither of which is possible in the Community? That would greatly expedite the review of your ask.