
Spark exception when reading a Parquet file


When I try to read a parquet file from an Azure Data Lake container in Databricks, I get a Spark exception. Below is my code:



import pyarrow.parquet as pq
from pyspark.sql.functions import *
from datetime import datetime
# the original line had a stray ")" (a syntax error) and never issued a read;
# spark.read.parquet is the call that actually reads the file
data = spark.read.parquet("/mnt/data/country/abb/countrydata.parquet")



org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 4 times, most recent failure: Lost task 0.3 in stage 14.0 (TID 35) ( executor 0): org.apache.spark.SparkException: Exception thrown in awaitResult:


What does this mean, and what do I need to do to fix it?



@shamly Can you share the full stack trace? "Exception thrown in awaitResult" is a generic wrapper; the nested cause underneath it is what identifies the real problem.