
dataframe to avro read write issue


I am trying to write a DataFrame to Avro using the Databricks spark-avro Scala API. The write succeeds, but reading the data back through Hive throws this exception:

 

Error: java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain scale value from file schema: "bytes" (state=,code=0)

 

In the .avsc file the column is declared with type bytes and a decimal logical type:

{"name":"rate","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":18}],"default":null}

 

reading 

====================

import com.databricks.spark.avro._   // enables .avro(...) on DataFrameReader

val df = sqlContext.read.format("com.databricks.spark.avro")
  .option("avroSchema", schema.toString)
  .option("inferSchema", "true")
  .avro(sourceFile)
  .filter(preparePartitionFilterClause)

====================
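The `schema` passed to the reader above is the Avro schema from the .avsc file, obtained roughly like this (sketch; the file path is a placeholder):

====================

import java.io.File
import org.apache.avro.Schema

// Sketch: parse the .avsc file to get the Avro schema passed to the
// reader above (the path here is a placeholder, not the real one)
val schema: Schema = new Schema.Parser().parse(new File("/path/to/rates.avsc"))

====================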

 

writing

=======================

import org.apache.spark.sql.SaveMode

df.write.mode(SaveMode.Append)
  .format("com.databricks.spark.avro")
  .partitionBy(TrlConstants.PARTITION_COLUMN_COUNTRYCODE)
  .save(path)

=======================
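To narrow down whether the problem is on the write side, one sanity check (sketch, assuming the same sqlContext and output path as above) is to read the written files back with spark-avro and inspect the inferred schema; if `rate` comes back as binary instead of decimal(38,18), the logical type is being dropped from the written file schema, which would match the Hive error:

=======================

// Sketch: re-read the freshly written Avro files and print the Spark schema.
// If `rate` shows up as binary rather than decimal(38,18), the decimal
// logical type is not being preserved in the written files.
val check = sqlContext.read
  .format("com.databricks.spark.avro")
  .load(path)
check.printSchema()

=======================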

 

I am completely clueless, please help!