Any good methods for compacting small files in Spark?
Labels: Apache Spark
Created on 06-19-2017 06:46 AM - edited 09-16-2022 04:47 AM
I'm running into issues with lots of small Avro and Parquet files being created and stored in my HDFS, and I need a way to compact them through Spark and its native libraries.
I've seen that the standard approaches seem to be coalesce, or using Impala to insert into a new table and then back again, but are there any better methods that have come onto the scene, or anything more Spark-centric?
Created 06-19-2017 06:55 AM
It should be pretty trivial to read the data in format X using Spark into a DataFrame or Dataset, repartition it to a smaller number of partitions, and write it back out in format X using Spark. The round trip ought not to change the data, but it's worth verifying. It should, however, always result in fewer and therefore larger files.
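Here's a minimal sketch of that round trip in Scala, assuming a hypothetical HDFS path and Parquet as the format (the same pattern applies to Avro via the spark-avro package):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CompactSmallFiles").getOrCreate()

// Read the many small files into a DataFrame (path is hypothetical)
val df = spark.read.parquet("hdfs:///data/events")

// Collapse to a smaller number of partitions so each output file is larger,
// then write back out in the same format to a new location
df.coalesce(8)
  .write
  .mode("overwrite")
  .parquet("hdfs:///data/events_compacted")
```

coalesce avoids a full shuffle when reducing the partition count; repartition(8) would shuffle the data but tends to produce more evenly sized output files.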
Created 06-19-2017 07:33 AM
I should have mentioned it in the first post, but I need to maintain existing partitions as they are, so I need to compact files within partitions.
Created 06-19-2017 07:36 AM
If you mean partitions in the sense of Parquet/Avro data partitioned by some key, it should be possible to preserve them this way. In the general case of things like text files, each file is already a partition.
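As a sketch of one way to keep that key-based layout while compacting within each key (the dt partition column and the paths here are hypothetical):

```scala
import org.apache.spark.sql.functions.col

// Read the partitioned dataset; Spark exposes the dt=... directories as a column
val df = spark.read.parquet("hdfs:///data/events")

// Cluster rows by the partition key, then write with partitionBy so the
// dt=... directory layout is recreated with fewer, larger files per key
df.repartition(col("dt"))
  .write
  .partitionBy("dt")
  .mode("overwrite")
  .parquet("hdfs:///data/events_compacted")
```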
Created 06-19-2017 07:44 AM
Luckily I am only dealing with Parquet and Avro, not text. And yes, I was referring to the key-based partitions in the files.
Sorry for going off topic, but I'm still quite new to Spark and the whole Hadoop ecosystem in general, so I'm still trying to get a feel for everything. To clarify, are the partitions of an RDD/DataFrame different from the key-based partitions of the files? I had always thought they were the same.
Created 06-19-2017 07:47 AM
Spark deals with arbitrary data, so its notion of partitions is not tied to data that contains a key. However, it's almost surely true that one key-based partition of the data in, say, Parquet will map to one (or more) partitions of a DataFrame that holds just the data for that key.
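A quick way to see the difference, assuming the same hypothetical partitioned layout as above:

```scala
// Hypothetical layout on HDFS: /data/events/dt=2017-06-19/part-*.parquet
val df = spark.read.parquet("hdfs:///data/events")

// Spark's in-memory partitions: driven by file sizes and split settings, not by the key
println(df.rdd.getNumPartitions)

// The key-based (directory) partitions just show up as a regular column
df.select("dt").distinct().show()
```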