Support Questions

Hive job failing with the error below


When only about 50 reducers remain, the remaining reducer tasks start failing and get killed:

--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 .........      RUNNING   1215       1128        0       87       5       0
Reducer 2 .....      RUNNING   1009        983        0       26      10       5
--------------------------------------------------------------------------------
VERTICES: 00/02  [========================>>--] 94%  ELAPSED TIME: 2417.20 s
--------------------------------------------------------------------------------

Status: Failed
Vertex re-running, vertexName=Map 1, vertexId=vertex_1557224797954_0084_1_00
Vertex re-running, vertexName=Map 1, vertexId=vertex_1557224797954_0084_1_00
Vertex re-running, vertexName=Map 1, vertexId=vertex_1557224797954_0084_1_00
Vertex failed, vertexName=Reducer 2, vertexId=vertex_1557224797954_0084_1_01, diagnostics=[Task failed, taskId=task_1557224797954_0084_1_01_000470, diagnostics=[TaskAttempt 0 failed, info=[Container container_e380_1557224797954_0084_01_000125 finished with diagnostics set to [Container failed, exitCode=-100. Container released on a *lost* node]], TaskAttempt 1 failed, info=[Error: exceptionThrown=org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$ShuffleError: error in shuffle in DiskToDiskMerger [Map_1]
	at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal(Shuffle.java:357)
	at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.Shuffle$RunShuffleCallable.callInternal(Shuffle.java:334)
	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:261)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at org.apache.tez.runtime.library.common.sort.impl.IFileOutputStream.write(IFileOutputStream.java:120)
	at org.apache.hadoop.io.compress.BlockCompressorStream.compress(BlockCompressorStream.java:153)
	at org.apache.hadoop.io.compress.BlockCompressorStream.finish(BlockCompressorStream.java:142)
	at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:100)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.writeValue(IFile.java:402)
	at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.append(IFile.java:393)
	at org.apache.tez.runtime.library.common.sort.impl.TezMerger.writeFile(TezMerger.java:207)
	at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.MergeManager$OnDiskMerger.merge(MergeManager.java:863)
	at org.apache.tez.runtime.library.common.shuffle.orderedgrouped.MergeThread.run(MergeThread.java:89)
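The root cause is at the bottom of the trace: `Caused by: ... java.io.IOException: No space left on device`. During the shuffle phase, Tez's on-disk merger (`MergeManager$OnDiskMerger`) writes intermediate merge files to the YARN NodeManager local directories, and at least one worker node ran out of local disk; the earlier `Container released on a *lost* node` message is consistent with YARN marking a node unhealthy once its disks fill up. Note that intermediate compression already appears to be enabled (the `BlockCompressorStream` frames), so the usual remedies are adding local disk capacity or directories, or shrinking the data each node must spill. As a first diagnostic step, a minimal sketch for checking free space on the NodeManager local dirs (`/tmp` below is a placeholder; the real paths come from the `yarn.nodemanager.local-dirs` property in `yarn-site.xml` on each worker):

```shell
# Sketch, not a fix: check free space on each assumed NodeManager local dir.
# Substitute the directories configured in yarn.nodemanager.local-dirs.
for dir in /tmp; do
  echo "Disk usage for $dir:"
  df -h "$dir"
done
```

Run this on the worker hosts (e.g. via ssh or your cluster management tool); a volume at or near 100% during the reduce phase would confirm the shuffle spill is exhausting local disk.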