
Job stage cancelled because SparkContext was shut down


After using the code suggested in the answer to my question in this link (Pyspark: Adding new column has the sum of rows values for more than 255 column), I tried to save the resulting dataframe for one of my data sets and encountered this error (a minimal sketch of my approach is included after the traceback below):

Py4JJavaError: An error occurred while calling o9325.parquet.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)

Caused by: org.apache.spark.SparkException: Job 3 cancelled because SparkContext was shut down
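For reference, this is roughly the shape of the code I am running; it is only a sketch, and the input/output paths and the column selection are placeholders rather than the exact names from my data set:

from functools import reduce
from operator import add

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("PySparkShell").getOrCreate()

# Hypothetical input path, for illustration only.
df = spark.read.parquet("/path/to/input")

# Sum the row values across all numeric columns by folding Column
# expressions, which avoids Python's 255-argument limit on function calls.
value_cols = [name for name, dtype in df.dtypes
              if dtype in ("int", "bigint", "float", "double")]
df = df.withColumn("row_sum", reduce(add, [F.col(c) for c in value_cols]))

# The job aborts at this write step with "SparkContext was shut down".
df.write.parquet("/path/to/output")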

Spark UI
Version: v2.3.0
Master: local[*]
AppName: PySparkShell
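Since the shell runs everything inside a single local driver JVM with the default memory settings, I am wondering whether giving the driver more memory would help. This is how I would try to set that up; the 8g figure is just a guess on my part, not something from the error message:

from pyspark.sql import SparkSession

# In local[*] mode all work runs inside the driver JVM, so the driver's
# memory limit applies to the whole job. spark.driver.memory only takes
# effect if it is set before the driver JVM starts (e.g. pyspark
# --driver-memory 8g) or before the first SparkSession is created.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("PySparkShell")
    .config("spark.driver.memory", "8g")  # assumed value, tune to the machine
    .getOrCreate()
)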

How can I overcome this issue?