Support Questions


MapReduce jobs failing

While loading a 30 GB file into HBase using a MapReduce program, 29 mappers are launched and all of them fail.

Errors:

Error: java.lang.ArrayIndexOutOfBoundsException
	at org.apache.hadoop.util.DataChecksum.update(
	at org.apache.hadoop.mapred.IFileOutputStream.write(
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(
	at org.apache.hadoop.mapred.IFile$Writer.append(
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(
	at org.apache.hadoop.mapred.MapTask.runNewMapper(
	at org.apache.hadoop.mapred.YarnChild$
	at Method)
	at org.apache.hadoop.mapred.YarnChild.main(

Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

What could be the reason for the MapReduce program failing?


@Anurag Mishra

The issue could be due to an incorrect value for io.sort.mb. Try reducing the value of that property and running the program again.
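If the cluster-wide configuration cannot be changed, the property can be overridden for a single job on the command line, assuming the driver parses generic options via ToolRunner/GenericOptionsParser. The jar name, driver class, and paths below are placeholders; mapreduce.task.io.sort.mb is the current name of the deprecated io.sort.mb property:

```shell
# Per-job override of the map-side sort buffer (jar, class, and paths
# are placeholders for your own bulk-load job).
# mapreduce.task.io.sort.mb replaces the deprecated io.sort.mb;
# the stock default is 100 MB, so try a value at or below that first.
hadoop jar hbase-bulkload.jar com.example.BulkLoadDriver \
  -D mapreduce.task.io.sort.mb=100 \
  /user/input/30gb-file /user/output
```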

@Sindhu could you please explain the reason, tell me a little more about io.sort.mb, and why it is recommended to reduce the value?

As I understand it, io.sort.mb is the total amount of buffer memory to use while sorting files, in megabytes. So if I reduce the value, won't efficiency go down? I don't understand why reducing the value of this property would help.

yarn.scheduler.minimum-allocation-mb = 1024
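On why a larger sort buffer can make tasks fail rather than run faster: the buffer set by io.sort.mb is allocated inside the map task's JVM heap, and the heap in turn must fit inside the YARN container. If the buffer approaches or exceeds the heap, the sort/spill path runs out of room, which matches a failure in sortAndSpill. A minimal sketch of the sizing arithmetic, where heap_fraction and sort_fraction are illustrative rules of thumb, not Hadoop defaults:

```python
# Illustrative arithmetic only (the fractions are rules of thumb,
# not Hadoop defaults): the sort buffer must fit inside the map
# task's JVM heap, which must fit inside the YARN container.

def max_safe_sort_mb(container_mb, heap_fraction=0.8, sort_fraction=0.5):
    # heap_fraction: share of the container given to the JVM heap
    # (-Xmx via mapreduce.map.java.opts).
    # sort_fraction: share of the heap it is safe to hand to the
    # sort buffer, leaving room for the map task's own objects.
    heap_mb = int(container_mb * heap_fraction)
    return int(heap_mb * sort_fraction)

# With the 1024 MB minimum allocation quoted above:
print(max_safe_sort_mb(1024))  # 409
```

So with 1024 MB containers, an io.sort.mb anywhere near the container size leaves the task no working memory, and lowering it is about making the job survive, not about raw sort efficiency.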
