03-03-2017 11:18 AM
I am trying to write a DataFrame to Avro using the Databricks Scala API (spark-avro). The write succeeds, but reading the data back through Hive throws an exception:
Error: java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain scale value from file schema: "bytes" (state=,code=0)
In the .avsc file the column is declared as bytes with a decimal logical type:
{"name":"rate","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":18}],"default":null}
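For context on why the reader needs the scale: per the Avro spec, a decimal logical type is stored physically as plain "bytes" holding the big-endian two's-complement unscaled value; the scale and precision live only in the schema attributes. If the writer emits the field as bare "bytes" without the logicalType/scale attributes, a reader like Hive's AvroSerde cannot reconstruct the decimal. A minimal plain-JVM sketch of that encoding (no Spark or Avro libraries, illustrative values only):

```scala
// Sketch: how Avro's decimal logical type maps a value to "bytes".
// The unscaled value (value * 10^scale) is serialized as a big-endian
// two's-complement byte array; scale/precision are NOT in the bytes,
// only in the schema -- which is why Hive fails when they are missing.
object DecimalBytesSketch {
  def main(args: Array[String]): Unit = {
    val scale = 18 // matches the "scale":18 in the schema above
    val rate  = new java.math.BigDecimal("1.5").setScale(scale)

    // What lands in the .avro file for this field:
    val bytes = rate.unscaledValue.toByteArray

    // Recovering the decimal requires the scale from the schema:
    val back = new java.math.BigDecimal(new java.math.BigInteger(bytes), scale)
    println(back) // 1.500000000000000000
  }
}
```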
reading
====================
val df = sqlContext.read
  .format("com.databricks.spark.avro")
  .option("avroSchema", schema.toString)
  .load(sourceFile) // use load() here; .avro() would set the format a second time
  .filter(preparePartitionFilterClause)
====================
writing
=======================
df.write
  .mode(SaveMode.Append)
  .format("com.databricks.spark.avro")
  .partitionBy(TrlConstants.PARTITION_COLUMN_COUNTRYCODE)
  .save(path)
=======================
I am completely clueless, please help!