Created on 10-31-2014 11:29 AM - edited 09-16-2022 08:39 AM
I have a job that calculates some statistics over a short rolling window and I would like to dump all of the data into HDFS. I have come to learn that HDFS does not support appends. Having my Spark app make a new directory and write a new file for every RDD is not viable. After searching around I found the Avro DataFileWriter, which looks like it would work, but according to the Spark user group message referenced below the object won't serialize, so it won't make it out to the worker nodes. I have read that Spark SQL can consume from Kafka and then write to a Parquet file, which seems like it would solve my problem, but Cloudera does not include Spark SQL.
Would it be out of the question to try to get Spark SQL and have it write to my CDH HDFS?
I don't think I would be able to hook those two up.
Does anyone know of possible solutions to the problem I have?
Created 10-31-2014 11:39 AM
Yeah, because it makes lots of small files? One option is to have a post-processing job that merges the files together with hadoop fs -getmerge.
The general answer for getting an unserializable object to the workers is to create it on the workers instead. You would make your writer or connection object once per partition and do the writing with it there.
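For example, here is a rough sketch of that pattern using the Avro DataFileWriter from the original question. The schema, output path, and sample RDD are placeholders, not your actual job; sc is assumed to be the usual SparkContext from the shell.

```scala
import java.util.UUID
import org.apache.avro.Schema
import org.apache.avro.file.DataFileWriter
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Placeholder schema and RDD; substitute the real rolling-window statistics.
val schemaJson =
  """{"type":"record","name":"Stat","fields":[{"name":"value","type":"double"}]}"""
val stats = sc.parallelize(Seq(1.0, 2.0, 3.0))

stats.foreachPartition { values =>
  // Everything in here runs on the worker, so the non-serializable
  // DataFileWriter never has to be shipped from the driver.
  val schema = new Schema.Parser().parse(schemaJson)
  val fs = FileSystem.get(new Configuration())
  val out = fs.create(new Path(s"/user/example/stats/part-${UUID.randomUUID()}.avro"))
  val writer = new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
  writer.create(schema, out)
  try {
    values.foreach { v =>
      val rec = new GenericData.Record(schema)
      rec.put("value", v)
      writer.append(rec)
    }
  } finally {
    writer.close() // also flushes and closes the HDFS stream
  }
}
```

Each partition still produces its own file, so the small-files point above still applies; coalescing the RDD before the write keeps the file count down.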
Spark SQL is distributed as part of CDH. Lots of stuff can consume from Kafka. You don't need it to write to Parquet files.
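To be concrete about the Parquet part: with the Spark SQL that ships in CDH 5 (the Spark 1.x SchemaRDD API), writing an RDD of case classes out as Parquet on HDFS looks roughly like the sketch below. The WindowStat case class, its fields, and the output path are made up for illustration.

```scala
import org.apache.spark.sql.SQLContext

// Hypothetical record type standing in for the rolling-window statistics.
case class WindowStat(windowStart: Long, metric: String, value: Double)

val sqlContext = new SQLContext(sc)
// Implicitly converts an RDD of case classes into a SchemaRDD (Spark 1.0-1.2).
import sqlContext.createSchemaRDD

val stats = sc.parallelize(Seq(WindowStat(0L, "count", 42.0))) // stand-in for the real RDD
stats.saveAsParquetFile("hdfs:///user/example/window_stats")   // writes a Parquet directory on HDFS
```

In newer Spark releases the same thing goes through DataFrames (df.write.parquet(...)) instead of SchemaRDD.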
Created 10-31-2014 11:43 AM
Thanks!