Support Questions


spark exception when reading a parquet file

New Contributor

When I try to read a parquet file from an Azure Data Lake container in Databricks, I get a Spark exception. Below is my code:



import pyarrow.parquet as pq
from pyspark.sql.functions import *
from datetime import datetime

# Read the parquet file from the mounted container into a DataFrame
data = spark.read.parquet("/mnt/data/country/abb/countrydata.parquet")



org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 4 times, most recent failure: Lost task 0.3 in stage 14.0 (TID 35) ( executor 0): org.apache.spark.SparkException: Exception thrown in awaitResult:


What does this mean, and what do I need to do to fix it?


Rising Star

@shamly Can you share the full stack trace?
