
Any good methods for compacting small files in Spark?

New Contributor

I'm running into issues with lots of small Avro and Parquet files being created and stored in my HDFS, and I need a way to compact them through Spark and its native libraries.

 

I've seen that the standard methods for this seem to be coalesce, or the Impala approach of inserting into a new table and then back, but are there any better methods that have come onto the scene, or anything more Spark-centric?

6 REPLIES

Re: Any good methods for compacting small files in Spark?

Master Collaborator

It should be pretty trivial to read the data in format X using Spark into a DataFrame or Dataset, repartition it to a smaller number of partitions, and write it back out in format X. The round trip ought not to change the data, though that's worth verifying, and it will always result in fewer and therefore larger files.

Re: Any good methods for compacting small files in Spark?

New Contributor

I should have mentioned it in the first post, but I need to maintain existing partitions as they are, so I need to compact files within partitions.

Re: Any good methods for compacting small files in Spark?

Master Collaborator

If you mean partitions in the Parquet/Avro sense of partitioning by some key, it should be possible to preserve those this way. In the general case of things like text files, a file is already a partition.

Re: Any good methods for compacting small files in Spark?

New Contributor

Luckily, I am only dealing with Parquet and Avro, not text. And yes, I was referring to the key partitions in the files.

 

Sorry for going off topic, but I'm still quite new to Spark and the Hadoop ecosystem in general, so I'm still trying to get a feel for everything. To clarify: are partitions of RDDs/DataFrames different from the key-based partitions of the files? I had always thought they were the same.

Re: Any good methods for compacting small files in Spark?

Master Collaborator

Spark deals with arbitrary data, so its notion of partitions is not tied to data that contains a key. However, it's almost surely true that one key-based partition of the data in, say, Parquet will map to one (or more) partitions of a DataFrame containing just the data with that key.

Re: Any good methods for compacting small files in Spark?

Explorer
The best way to deal with small files is to not have to deal with them at all. You might want to explore using Kudu or HBase as your storage engine instead of HDFS (Parquet).