
CONTROL OUTPUT FILE SIZE WITHOUT ADDING MERGE PROPERTY IN HIVE

Solved

New Contributor

Hi,

I am trying to reduce the number of small files, but adding the merge property affects performance, since a separate job is triggered for the merge. Is there any way to control the size of the output files via the mapper or reducer?

Thanks in advance!

Mithuun

1 ACCEPTED SOLUTION


Re: CONTROL OUTPUT FILE SIZE WITHOUT ADDING MERGE PROPERTY IN HIVE

Hello Mithun,

Having a merge step is definitely the more foolproof approach. Otherwise you will need to know more about your data and its distribution and tune the settings yourself. A first step would be hive.merge.smallfiles.avgsize, which adds the extra merge step only if the average output file size falls below that threshold.
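As a sketch of that first step, the conditional-merge properties could be set per session like this (the byte values are illustrative examples, not recommendations for your cluster):

```sql
-- Enable the conditional merge step at the end of jobs
SET hive.merge.mapfiles=true;        -- merge small files after map-only jobs
SET hive.merge.mapredfiles=true;     -- merge small files after map-reduce jobs
SET hive.merge.tezfiles=true;        -- same, for jobs running on Tez

-- Only trigger the extra merge job when the average output file size
-- falls below this threshold (bytes); example: 128 MB
SET hive.merge.smallfiles.avgsize=134217728;

-- Target size for the merged files (bytes); example: 256 MB
SET hive.merge.size.per.task=268435456;
```

With this, queries whose average output file already meets the threshold skip the merge job entirely, so you only pay the extra step when small files actually appear.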

You can also set the number of reducers yourself, either statically or dynamically based on the volume of data coming in; if you know your workload, this will let you estimate the output file size roughly.
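Both variants can be expressed as session settings; since each reducer writes its own output file, the file size is roughly (bytes sent to reducers) / (number of reducers). The values below are illustrative:

```sql
-- Static: fix the reducer count yourself
SET mapreduce.job.reduces=16;

-- Dynamic: let Hive derive the reducer count from input volume,
-- roughly one reducer per this many bytes (example: 256 MB)
SET hive.exec.reducers.bytes.per.reducer=268435456;
SET hive.exec.reducers.max=64;  -- upper bound on reducers Hive may launch
```

For example, if roughly 4 GB reaches the reduce stage with 256 MB per reducer, Hive would launch about 16 reducers, giving output files in the neighborhood of 256 MB each (before compression and skew).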

It seems like a trade-off between a more generic approach with a merge step and a more granular approach that requires knowing your workload.

Hope this helps.
