Writing Avro files with a user-defined schema in Spark
Labels: Apache Spark
Created 05-11-2017 04:12 PM
I have a corpus of structured data stored in HDFS as a set of Avro files. I need to do some processing to split this set into multiple sets based on the value of a certain field within the data set. This involves splitting out the individual records by that data element, bundling them up as new Avro files, and storing them in separate directories. I have tested a solution with Spark (v2.1.0) using the Databricks spark-avro library (spark-avro_2.11, v3.2.0). It performs well, but when I write the data set out as new Avro files, it applies a spark-avro-generated schema. The data types match, but I miss out on certain schema customizations, such as default values and descriptions.
Has anyone successfully applied a user-defined schema when writing Avro files with spark-avro (or another similar Spark library)? I have found surprisingly little while searching.
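For reference, here is a minimal sketch of the splitting approach I tested; the HDFS paths and the splitting field ("category") are placeholders:

    import org.apache.spark.sql.SparkSession

    object SplitAvroByField {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("SplitAvroByField")
          .getOrCreate()

        // Read the existing Avro corpus from HDFS.
        val records = spark.read
          .format("com.databricks.spark.avro")
          .load("hdfs:///data/corpus")

        // partitionBy writes one subdirectory per distinct value of the
        // splitting field, e.g. hdfs:///data/split/category=A/.
        records.write
          .format("com.databricks.spark.avro")
          .partitionBy("category")
          .save("hdfs:///data/split")

        spark.stop()
      }
    }

This part works fine; the problem is only the schema attached to the output files.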
Created 05-15-2017 11:41 PM
A simple Spark app demonstrating how to read and write data in the Parquet and Avro formats.
I hope this can help.
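On the user-defined schema itself: here is a minimal sketch. It assumes a version of the Avro data source that accepts a user-provided schema through the "avroSchema" write option (the built-in source in Spark 2.4+ supports this; the Databricks library at v3.2.0 may not). The schema, field names, and paths are illustrative:

    import org.apache.spark.sql.SparkSession

    object WriteAvroWithUserSchema {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("WriteAvroWithUserSchema")
          .getOrCreate()

        val df = spark.read.format("avro").load("hdfs:///data/corpus")

        // Hand-written Avro schema carrying the customizations (doc
        // strings, default values) that an auto-generated schema drops.
        // The fields must be compatible with the DataFrame's schema.
        val avroSchema =
          """{
            |  "type": "record",
            |  "name": "Record",
            |  "fields": [
            |    {"name": "id",   "type": "long",   "doc": "record identifier"},
            |    {"name": "name", "type": "string", "default": "unknown"}
            |  ]
            |}""".stripMargin

        // The "avroSchema" option tells the writer to use this schema
        // instead of one generated from the DataFrame's types.
        df.write
          .format("avro")
          .option("avroSchema", avroSchema)
          .save("hdfs:///data/output")

        spark.stop()
      }
    }

With the option set, the written files should carry the provided schema, including the doc strings and default values.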
