
How to reduce the small file problem in Spark using coalesce or otherwise?


When I insert my dataframe into a table, it creates some small files.

One solution I had was to coalesce to one file, but this greatly slows down the code. I am looking for a way to speed this up while still coalescing to 1.

Like this

df_expl.coalesce(1)
  .write.mode("append")
  .partitionBy("p_id")
  .parquet(expl_hdfs_loc)

Or I am open to another solution.

1 ACCEPTED SOLUTION

Super Guru


Also use DISTRIBUTE BY so that data for the same partition goes to the same reducer.


5 REPLIES

Super Guru


Also use DISTRIBUTE BY so that data for the same partition goes to the same reducer.
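
For illustration, a rough sketch of what that could look like in Spark SQL. The view name expl_view and the target table expl_table are made-up placeholders, and dynamic partitioning on the target table is assumed:

// Register the dataframe so it can be queried from Spark SQL.
// "expl_view" is a placeholder name used only for this example.
df_expl.createOrReplaceTempView("expl_view")

// DISTRIBUTE BY p_id shuffles all rows with the same p_id to the same
// reducer/task, so each table partition is written by a single task and
// ends up as one (or a few) files instead of many small ones.
// "expl_table" is a hypothetical partitioned target table.
spark.sql("""
  INSERT INTO expl_table
  SELECT * FROM expl_view
  DISTRIBUTE BY p_id
""")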


Thanks for the response. I'm sorry, I don't really understand what you mean. Could you please provide an example?

Super Guru

Please see the following link. In your code, you'll need to do a "repartition". What I am trying to say is that if you force more data to the same reducer, you will create fewer files. Call the repartition function on some key so that the data for that key lands in the same partition.

https://dzone.com/articles/optimize-spark-with-distribute-by-cluster-by
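
As a sketch of that idea with the DataFrame API (repartition on a column is the DataFrame-side counterpart of DISTRIBUTE BY; the names df_expl, p_id, and expl_hdfs_loc are taken from the original question):

import org.apache.spark.sql.functions.col

// Hash-partition by p_id so every row with the same p_id lands in the
// same task; each partition directory then gets one output file per
// write instead of one file from every task.
df_expl
  .repartition(col("p_id"))
  .write.mode("append")
  .partitionBy("p_id")
  .parquet(expl_hdfs_loc)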


Thanks for the response, the link is very useful. I had one more question: if I do df.repartition(1).write, will this only run on one node, or will it run in a distributed way but only create one file? That is the problem I face with coalesce: when I write to parquet it writes one file, but only on one node, and I lose all the advantages of being distributed.

Is there any way I could do something like this:

joinedDF.repartition(col("partitionCol")).coalesce(1).write.mode("append").partitionBy("partitionCol").parquet(esdLocation)

but have it only coalesce per partition?

Thanks
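
For reference, one common way to get one output file per partition value while keeping the write distributed is to repartition on the partition column by itself, without a global coalesce(1). A sketch reusing joinedDF, partitionCol, and esdLocation from the question above:

import org.apache.spark.sql.functions.col

// All rows with the same partitionCol value are shuffled to one task, so
// each partition directory receives a single file, while different
// partition values can still be written in parallel on different executors.
// A trailing coalesce(1) is not needed here and would collapse the whole
// write onto a single task.
joinedDF
  .repartition(col("partitionCol"))
  .write.mode("append")
  .partitionBy("partitionCol")
  .parquet(esdLocation)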
