Reducers failing with error "java.lang.IllegalArgumentException: Self-suppression not permitted"

java.lang.IllegalArgumentException: Self-suppression not permitted
at java.lang.Throwable.addSuppressed(Throwable.java:1043)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:108)
at org.apache.hadoop.mapred.lib.MultipleOutputFormat$1.close(MultipleOutputFormat.java:114)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:502)
at org.apache.hadoop.mapred.ReduceTask.closeQuietly(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:460)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Failing write. Tried pipeline recovery 5 times without success.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1230)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721)
2018-09-25 18:59:52,335 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:sassrv (auth:SIMPLE) cause:java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.239.121.254:50010,DS-e8c0d19e-cdeb-4ffe-8def-2f237d748ac7,DISK]], original=[DatanodeInfoWithStorage[10.239.121.254:50010,DS-e8c0d19e-cdeb-4ffe-8def-2f237d748ac7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
2018-09-25 18:59:52,335 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.239.121.254:50010,DS-e8c0d19e-cdeb-4ffe-8def-2f237d748ac7,DISK]], original=[DatanodeInfoWithStorage[10.239.121.254:50010,DS-e8c0d19e-cdeb-4ffe-8def-2f237d748ac7,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1280)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1354)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1512)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1236)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:721)

2018-09-25 18:59:52,337 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2018-09-25 18:59:52,445 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ReduceTask metrics system...
2018-09-25 18:59:52,445 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system stopped.
2018-09-25 18:59:52,445 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system shutdown complete.
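The final WARN shows the HDFS client running out of healthy datanodes while trying to repair the write pipeline (current and original both list the single node 10.239.121.254), and it names the client-side knob 'dfs.client.block.write.replace-datanode-on-failure.policy'. Below is a minimal sketch of how a job could relax that behaviour, assuming the old org.apache.hadoop.mapred API seen in the stack trace; the class name is hypothetical and the values are illustrative, not a fix confirmed by this thread.

import org.apache.hadoop.mapred.JobConf;

public class PipelinePolicyExample {
    public static void main(String[] args) {
        // JobConf extends Configuration, so HDFS client settings placed here are
        // picked up by the DFSOutputStream the reduce tasks write through.
        JobConf conf = new JobConf(PipelinePolicyExample.class);

        // NEVER: do not attempt to replace a failed datanode in the pipeline.
        // Only sensible on very small clusters (roughly 1-3 datanodes), where
        // there is no spare node to swap in anyway.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Alternative: keep the DEFAULT policy, but if replacement fails, continue
        // the write with the remaining datanodes instead of aborting the stream.
        // (Standard HDFS client property; shown here as an assumption about the
        // cluster's Hadoop version.)
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

        // ... set mapper/reducer/input/output as usual, then submit with
        // JobClient.runJob(conf);
    }
}

Either setting trades durability for availability: the block can stay under-replicated until the NameNode re-replicates it, so finding out why only one datanode was usable for the pipeline is still worth checking first.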
