
dataframe to avro read write issue

I am trying to write a DataFrame to Avro using the Databricks spark-avro Scala API. The write succeeds, but when I read the data back through Hive it throws this exception:

 

Error: java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain scale value from file schema: "bytes" (state=,code=0)
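
To isolate where it breaks, I also tried the same read through a HiveContext instead of beeline. This is only a sketch; the table name below is a placeholder, not my real table:

====================

// Sketch only: re-run the failing read through Spark's HiveContext rather than beeline.
// "my_avro_table" is a placeholder for the actual Hive table name.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
hiveContext.sql("SELECT rate FROM my_avro_table LIMIT 10").show()

====================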

 

In the .avsc file I have a column of type bytes with a decimal logical type:


{"name":"rate","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":18}],"default":null}

 

reading 

====================

val df = sqlContext.read
  .format("com.databricks.spark.avro")    // requires the spark-avro package on the classpath
  .option("avroSchema", schema.toString)  // pass the .avsc schema explicitly
  .load(sourceFile)
  .filter(preparePartitionFilterClause)

====================

 

writing

=======================

import org.apache.spark.sql.SaveMode

// Append to the existing dataset, partitioned by country code
df.write
  .mode(SaveMode.Append)
  .format("com.databricks.spark.avro")
  .partitionBy(TrlConstants.PARTITION_COLUMN_COUNTRYCODE)
  .save(path)

=======================
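
Since the error complains about the scale in the file schema, I also dumped the writer schema from one of the generated part files to compare it with the .avsc above. This is just a sketch; the part-file path is a placeholder:

=======================

// Sketch: print the writer schema embedded in one of the written Avro part files.
// The local part-file path below is a placeholder.
import java.io.File
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

val partFile = new File("/tmp/part-r-00000.avro")
val fileReader = new DataFileReader[GenericRecord](partFile, new GenericDatumReader[GenericRecord]())
println(fileReader.getSchema.toString(true))  // compare the "rate" field here with the .avsc
fileReader.close()

=======================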

 

I am completely clueless, please help!
