
Spark Streaming Creating Small files in Hive

Expert Contributor

Hi,

I have a Spark Streaming application which analyzes log files and processes them, eventually dumping the processed results into an internal Hive table. The problem is that when Spark loads the data, it creates many small files, even though every merge-related option in my Hive configuration is set to true. Merging still isn't happening. Please check the attached image of the config parameters. Any help will be greatly appreciated.

Thanks,

Chandra

[Attachment: 43417-hive-config.png]

1 ACCEPTED SOLUTION

Expert Contributor

Merging is not happening because you are writing with Spark, not through Hive, so those Hive configuration settings don't apply.

There are likely two reasons for the large number of files:

1 - Spark has a default shuffle parallelism of 200 and writes one file per partition, so each Spark minibatch can write up to 200 files. This is easily solved, especially if you are not writing much data per minibatch, by reducing the parallelism before writing with `coalesce` (use 1 to write only one file per minibatch); see the first sketch after this list.

2 - Spark will in any case write (at least) one file per minibatch, and how many accumulate depends on how frequently you schedule the batches. Here the solution is to periodically schedule a CONCATENATE job (but be careful: you might run into HIVE-17280 / HIVE-17403), or to write your own application with your own concatenation logic; see the second sketch below.
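
For point 1, here is a minimal sketch of reducing the write parallelism before the write. It assumes your job hands each minibatch to a function as a DataFrame and that the results land in a hypothetical Hive table named `processed_logs`; adapt it to your actual table and write path.

```scala
import org.apache.spark.sql.DataFrame

// Sketch: write each minibatch as a single file instead of up to 200.
def writeMinibatch(batchDF: DataFrame): Unit = {
  batchDF
    .coalesce(1)                  // collapse the default 200 shuffle partitions into 1
    .write
    .mode("append")               // append each minibatch to the existing table
    .insertInto("processed_logs") // hypothetical Hive table name
}
```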
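
For point 2, the periodic merge can be a scheduled Hive statement rather than a custom application. Note that `CONCATENATE` only applies to ORC and RCFile tables, and a partitioned table needs a `PARTITION` clause. A sketch against the same hypothetical table:

```sql
-- Run periodically (e.g. from a scheduled Hive/Beeline job) to merge
-- the small files produced by the streaming writes.
ALTER TABLE processed_logs CONCATENATE;
```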



Expert Contributor

Thanks very much. I see now what's going on. I tried both of your suggestions and they seem to work well.