Explorer
Posts: 17
Registered: ‎06-02-2014
Accepted Solution

How do I delete MultipleOutput files from a map task when the task performs a retry?

I'm using a map-only Hadoop job to transfer files from S3 into a local cluster. Along the way, I split the lines into their own directories based on record type using MultipleOutputs. When a map task dies due to S3 connection issues, it leaves its MultipleOutputs files behind, which makes retries impossible.

Is there a way to avoid this? Can I ask a mapper which file a named MultipleOutputs output will write to, and delete those files in the setup call?
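For context, the splitting described above looks roughly like this (a minimal sketch against the new mapreduce API; the class name, the tab-separated record layout, and the "type/part" base path are hypothetical, not from the original job):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Hypothetical map-only mapper that routes each line into a
// per-record-type subdirectory via MultipleOutputs.
public class SplitByTypeMapper
        extends Mapper<LongWritable, Text, NullWritable, Text> {

    private MultipleOutputs<NullWritable, Text> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void map(LongWritable key, Text line, Context context)
            throws IOException, InterruptedException {
        // Assume the record type is the first tab-separated field.
        String type = line.toString().split("\t", 2)[0];
        // A baseOutputPath of "<type>/part" places the file under a
        // subdirectory named after the record type.
        mos.write(NullWritable.get(), line, type + "/part");
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // close() flushes and finalizes all the side-output writers.
        mos.close();
    }
}
```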

Explorer
Posts: 17
Registered: ‎06-02-2014

Re: How do I delete MultipleOutput files from a map task when the task performs a retry?


This turned out to be an issue with speculative execution: the framework was launching a second attempt of the task before the previous attempt had cleaned up after itself, so disabling it (e.g. conf.set("mapred.map.tasks.speculative.execution", "false");) fixed the problem. It turns out that MultipleOutputs doesn't handle speculative execution very well.
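For anyone hitting the same thing, disabling map-side speculation in the driver looks roughly like this (a sketch; the job name is made up, and note that the property key above is the old-style name, while newer Hadoop versions spell it "mapreduce.map.speculative"):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DisableSpeculationDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Old-style key, as used in this thread; Hadoop 2+ also
        // accepts "mapreduce.map.speculative".
        conf.set("mapred.map.tasks.speculative.execution", "false");

        Job job = Job.getInstance(conf, "s3-transfer");
        // Equivalent convenience setter on the new-API Job object:
        job.setMapSpeculativeExecution(false);
        // ... set mapper, input/output formats, paths, then submit.
    }
}
```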
