Support Questions

Find answers, ask questions, and share your expertise

Reducer tasks taking a long time

Master Collaborator

Hi, what can I do to improve the reducer time?

I have 107 mappers and just 1 reducer, so which parameters could I change?

Maybe this one?

mapreduce.job.counters.max

Thanks

1 ACCEPTED SOLUTION

Master Mentor

@Roberto Sancho

I would look at enabling intermediate compression for map output and output compression for reduce tasks.

You can also look at using a combiner class.

mapreduce.map.output.compress
mapreduce.map.output.compress.codec

and output compression

mapreduce.output.fileoutputformat.compress.codec
mapreduce.output.fileoutputformat.compress.type
mapreduce.output.fileoutputformat.compress
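The compression properties above can be sketched as a mapred-site.xml (or per-job configuration) fragment. The Snappy codec and BLOCK type shown here are example values, not the only valid choices:

```xml
<!-- Compress intermediate map output (reduces shuffle traffic to the reducer). -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

<!-- Compress the final job output. -->
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>BLOCK</value>
</property>
```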


9 REPLIES


Master Mentor

@Roberto Sancho

Do you have a support contract?

Please install SmartSense for better utilization of your cluster.

Master Collaborator

Hi, we don't have one yet. Is SmartSense free?

Thanks


Master Guru

It's mapred.reduce.tasks. If you run a MapReduce program from the Hadoop client, you would set it like this:

-Dmapred.reduce.tasks=x

Pig and Hive have different ways to predict reducer numbers.
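For illustration, the reducer count can be set per engine like this; the value 8 below is just an example number, not a recommendation:

```
-- Hive: set the reducer count for the session
SET mapreduce.job.reduces=8;

-- Pig: set the default number of reducers for the script
SET default_parallel 8;
```

Note that Hive will otherwise estimate the reducer count from input size (via hive.exec.reducers.bytes.per.reducer), so setting it explicitly overrides that estimate.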

Master Collaborator

-Dmapred.reduce.tasks=x is for MapReduce 1. I am using MapReduce 2 and YARN, and I don't know how to change this parameter there.

Any suggestion?

Thanks

Master Guru

It still works on YARN. The official new property is mapreduce.job.reduces, but I have always used the one above and it is still accepted.
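As a sketch, both property names can be passed on the command line, assuming the job uses ToolRunner/GenericOptionsParser so that -D options are picked up. The jar name, main class, and paths below are placeholders:

```shell
# New property name (MapReduce 2 / YARN):
hadoop jar my-job.jar com.example.MyJob -Dmapreduce.job.reduces=8 /input /output

# Deprecated name, still accepted for backward compatibility:
hadoop jar my-job.jar com.example.MyJob -Dmapred.reduce.tasks=8 /input /output
```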


Master Mentor

@Roberto Sancho here's a list of all deprecated mapred properties and their new names:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

The property you're looking for is mapreduce.job.reduces.