How to reduce the small files problem in Spark using coalesce or otherwise?
Labels:
- Apache Hadoop
- Apache Spark
Created ‎07-17-2017 09:11 PM
When I insert my DataFrame into a table, it creates many small files.
One solution I had was to use coalesce to write a single file, but this greatly slows down the code. I am looking for a way to improve this, either by somehow speeding it up while still coalescing to 1, like this:
df_expl.coalesce(1).write.mode("append").partitionBy("p_id").parquet(expl_hdfs_loc)
or I am open to another solution.
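For context, a minimal sketch of why the plain write produces many files, reusing df_expl, p_id, and expl_hdfs_loc from the question above (all assumptions about the actual job): each write task emits roughly one file per p_id value it holds, so the file count grows with both the number of tasks and the number of distinct keys.

```python
# Minimal sketch, reusing df_expl and p_id from the question above (assumptions).
# A plain partitioned write emits roughly one file per task per p_id value,
# which is where the many small files come from.
num_tasks = df_expl.rdd.getNumPartitions()             # write tasks before any coalesce
num_keys = df_expl.select("p_id").distinct().count()   # distinct partition values
print("worst case: about %d x %d files per append" % (num_tasks, num_keys))
```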
Created ‎07-18-2017 01:46 AM
Also use DISTRIBUTE BY so that data for the same partition goes to the same reducer; see the sketch below.
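For illustration, a minimal sketch of the DISTRIBUTE BY suggestion, assuming the DataFrame from the question is registered as a temporary view and inserted into a hypothetical partitioned table called target_table (the view and table names are placeholders, not from the thread):

```python
# Hedged sketch: expose the DataFrame as a temp view, then let DISTRIBUTE BY route
# all rows for a given p_id to the same reducer, so each partition directory receives
# one file per reducer instead of one file per upstream task.
# "target_table" is a hypothetical table partitioned by p_id.
df_expl.createOrReplaceTempView("expl")

spark.sql("""
    INSERT INTO target_table
    SELECT * FROM expl
    DISTRIBUTE BY p_id
""")
```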
Created ‎07-18-2017 05:08 PM
Thanks for the response. I am sorry, I don't really understand what you mean. Could you please provide an example?
Created ‎07-18-2017 05:39 PM
Please see the following link. In your code, you'll need to do a repartition. What I am trying to say is that if you force more data to the same reducer, you will create fewer files. Call the repartition function on some key so that all the data for that key lands in the same partition; a sketch follows after the link.
https://dzone.com/articles/optimize-spark-with-distribute-by-cluster-by
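As an illustration of the repartition suggestion, a minimal sketch using the names from the original question (df_expl, p_id, expl_hdfs_loc). Repartitioning on the partition column sends all rows for a given p_id to one task, so each partition directory ends up with a single file from that task:

```python
from pyspark.sql.functions import col

# Repartition on the partition column before writing: every row with the same p_id
# lands in the same task, so partitionBy("p_id") writes one file per p_id value
# instead of one file per task per p_id.
(df_expl
    .repartition(col("p_id"))
    .write.mode("append")
    .partitionBy("p_id")
    .parquet(expl_hdfs_loc))
```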
Created ‎08-09-2017 01:49 PM
Thanks for the response, and the link is very useful. I had one more question: if I do df.repartition(1).write, will this only run on one node, or will it run in a distributed way but only create one file? That is the problem I face with coalesce: when I write to Parquet it writes one file, but only on one node, and I lose all the distributed advantages.
Is there any way I could do something like this:
joinedDF.repartition(col("partitionCol")).coalesce(1).write.mode("append").partitionBy("partitionCol").parquet(esdLocation)
but have it only coalesce per partition? (One possible approach is sketched after this post.)
Thanks
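One possible approach to "one file per partition, still distributed", sketched here as an assumption rather than an answer taken from the thread: repartitioning on the partition column alone already gives one task (and therefore one output file) per partitionCol value while the shuffle runs across the cluster, so the trailing coalesce(1) from the post, which would collapse everything back into a single task, is left out.

```python
from pyspark.sql.functions import col

# Hedged sketch using the names from the post above (joinedDF, partitionCol, esdLocation).
# repartition(col("partitionCol")) groups each partition value into one task, so the
# partitioned write produces one file per value while the work stays distributed.
(joinedDF
    .repartition(col("partitionCol"))
    .write.mode("append")
    .partitionBy("partitionCol")
    .parquet(esdLocation))
```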
Created ‎05-25-2018 01:04 AM
This solution seems better...
