
How to reduce the small files problem in Spark using coalesce or otherwise?


When I insert my DataFrame into a table, it creates some small files.

One solution I had was to use coalesce to write one file, but this greatly slows down the code. I am looking for a way to speed this up while still coalescing to one file.

Like this:

// coalesce(1) collapses the DataFrame into a single partition before the write,
// so the whole append runs as one task and produces one file per p_id directory
df_expl.coalesce(1)
  .write.mode("append")
  .partitionBy("p_id")
  .parquet(expl_hdfs_loc)

Or I am open to another solution.

1 ACCEPTED SOLUTION


Re: How to reduce the small files problem in Spark using coalesce or otherwise?

Super Guru


Also use DISTRIBUTE BY so that data for the same partition goes to the same reducer.

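For example, a rough sketch of this in Spark SQL (the table name target_table and the temp view name are placeholders, not from the post, and it assumes the partition column p_id is the last column of df_expl):

df_expl.createOrReplaceTempView("df_expl_tmp")

// DISTRIBUTE BY p_id sends all rows with the same p_id to the same reducer,
// so each p_id partition of the target table is written by a single task
// instead of many tasks each leaving behind a small file.
spark.sql("""
  INSERT INTO target_table
  SELECT * FROM df_expl_tmp
  DISTRIBUTE BY p_id
""")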

5 REPLIES


Re: How to reduce the small files problem in Spark using coalesce or otherwise?

Thanks for the response. I am sorry, I don't really understand what you mean. Could you please provide an example?


Re: How to reduce the small files problem in Spark using coalesce or otherwise?

Super Guru

Please see the following link. In your code, you'll need to do a "repartition". What I am trying to say is that if you force more data to the same reducer, you will create fewer files. Call the repartition function on some key so that all the data for that key lands in the same partition.

https://dzone.com/articles/optimize-spark-with-distribute-by-cluster-by
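
Applied to the write from the question above, a rough sketch of that might look like this (assuming p_id is also the key to repartition on, which the post does not state explicitly):

import org.apache.spark.sql.functions.col

// repartition(col("p_id")) is the DataFrame counterpart of DISTRIBUTE BY p_id:
// all rows with the same p_id are shuffled into the same task, so each
// p_id=... directory is written by a single task (typically one file), while
// the write as a whole still runs in parallel across the different p_id values.
df_expl
  .repartition(col("p_id"))
  .write.mode("append")
  .partitionBy("p_id")
  .parquet(expl_hdfs_loc)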


Re: How to reduce the small files problem in Spark using coalesce or otherwise?

Thanks for the response, and the link is very useful. I had one more question: if I do df.repartition(1).write, will this only run on one node, or will it run in a distributed way but only create one file? That is the problem I face with coalesce: when I write to Parquet it writes one file, but only on one node, and I lose all the distributed advantages.

Is there any way I could do something like this, but have it only coalesce per partition?

joinedDF.repartition(col("partitionCol"))
  .coalesce(1)
  .write.mode("append")
  .partitionBy("partitionCol")
  .parquet(esdLocation)

Thanks
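
For comparison, a rough sketch of the two variants being discussed (not from the thread; it reuses the joinedDF, partitionCol, and esdLocation names from the post above):

import org.apache.spark.sql.functions.col

// repartition(1) adds a shuffle, so the stages before it still run in parallel,
// but all rows end up in a single partition and the final Parquet write is
// performed by one task, much like coalesce(1):
// joinedDF.repartition(1).write.mode("append").partitionBy("partitionCol").parquet(esdLocation)

// Repartitioning on the partition column keeps the write distributed: all rows
// for a given partitionCol value are shuffled into one task, so each
// partitionCol=... directory typically ends up with a single file and no
// coalesce(1) is needed.
joinedDF
  .repartition(col("partitionCol"))
  .write.mode("append")
  .partitionBy("partitionCol")
  .parquet(esdLocation)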
