mapreduce jobs failed when querying large HDFS sequence files



Hi all,

I've been stuck on this problem for over a day and can't seem to resolve it. I've tried increasing YARN and MapReduce memory, as well as breaking my time range down into smaller windows, but nothing seems to work.

I'm trying to export all my pcap data for two days and am stuck on a one-hour window. I even broke it down into 30-minute and 15-minute windows.

sudo -su hdfs ./bin/ fixed -df yyyyMMdd-HHmm -rpf '10000' -st 20180306-1730 -et 20180306-1800 -nr 200
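If breaking the window down further is worth another try, generating the smaller -st/-et pairs can be scripted rather than typed by hand. A minimal sketch, assuming the same yyyyMMdd-HHmm format as the -df flag in the command above, that splits the failing 17:30-18:00 window into 5-minute slices:

```shell
# Split 17:30-18:00 (minutes 1050-1080 since midnight) into 5-minute slices
# and print one -st/-et pair per slice; each pair can then be fed to the
# query tool as a separate, smaller job.
for m in $(seq 1050 5 1075); do
  st=$(printf '%02d%02d' $((m / 60)) $((m % 60)))
  et=$(printf '%02d%02d' $(((m + 5) / 60)) $(((m + 5) % 60)))
  echo "-st 20180306-${st} -et 20180306-${et}"
done
```

This prints six pairs, from -st 20180306-1730 -et 20180306-1735 through -st 20180306-1755 -et 20180306-1800.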

The problem is that my mapper tasks keep getting killed by the ApplicationMaster. Each killed task lasted about 40 seconds, while the successful ones finished in about 30 seconds.

Any suggestions are appreciated.

Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.

18/03/23 17:19:12 INFO mapreduce.Job: Task Id : attempt_1521838071529_0003_m_000022_2, Status : FAILED
org.apache.hadoop.mapred.YarnChild$ Method)at

18/03/23 17:19:17 INFO mapreduce.Job: map 100% reduce 100%
18/03/23 17:19:17 INFO mapreduce.Job: Job job_1521838071529_0003 failed with state FAILED due to: Task failed task_1521838071529_0003_m_000020
Job failed as tasks failed. failed Maps:1 failed Reduces:0
18/03/23 17:19:17 INFO mapreduce.Job: Counters: 41
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=25707046
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=
		HDFS: Number of bytes read=4320257853
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=32
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
	Job Counters
		Failed map tasks=12
		Killed map tasks=24
		Killed reduce tasks=200
		Launched map tasks=26
		Other local map tasks=11
		Data-local map tasks=13
		Rack-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=174434
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=174434
		Total time spent by all reduce tasks (ms)=0
		Total vcore-milliseconds taken by all map tasks=174434
		Total vcore-milliseconds taken by all reduce tasks=0
		Total megabyte-milliseconds taken by all map tasks=1428963328
		Total megabyte-milliseconds taken by all reduce tasks=0
	Map-Reduce Framework
		Map input records=26522800
		Map output records=119240
		Map output bytes=24087217
		Map output materialized bytes=24454354
		Input split bytes=1240
		Combine input records=0
		Spilled Records=119240
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=3410
		CPU time spent (ms)=113580
		Physical memory (bytes) snapshot=21083152384
		Virtual memory (bytes) snapshot=72407220224
		Total committed heap usage (bytes)=22689087488
Exception in thread "main" java.lang.RuntimeException: Unable to complete query due to errors. Please check logs for full
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at
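Exit code 143 alone doesn't say why the ApplicationMaster killed the container; the per-container logs usually do. A sketch of pulling them with the standard `yarn logs` command, using the application ID that corresponds to job_1521838071529_0003 in the output above (substitute your own); the command is built as a string here so the ID is easy to swap:

```shell
# application_1521838071529_0003 matches job_1521838071529_0003 from the job
# output; its aggregated logs hold the kill reason for each attempt (this
# requires log aggregation to be enabled on the cluster).
APP_ID="application_1521838071529_0003"
CMD="yarn logs -applicationId ${APP_ID}"
echo "${CMD}"   # run this on a cluster node, then grep the output for the kill reason
```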

Re: mapreduce jobs failed when querying large HDFS sequence files

@Arian Trayen Can you please format your logs? They are literally unreadable!

Re: mapreduce jobs failed when querying large HDFS sequence files


@Rahul Soni My apologies, I overlooked it and didn't realize how messy the log looked. Thank you for reviewing my question.

I read that exit code 143 is related to memory, and I'm running the job with -Xmx5120m. I tried to modify the memory settings for HDFS, YARN, and MapReduce from Ambari and ended up corrupting HDFS; I then fixed it by reverting the configuration changes through Ambari.
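Rather than changing memory cluster-wide in Ambari (which is what went wrong last time), MapReduce lets you override container and heap sizes per job with -D properties. The property names below are the standard MapReduce ones, but whether the pcap query tool passes generic -D options through to the job is an assumption to verify. A minimal sketch, assuming an 8 GB map container with the usual ~80% of it given to the JVM heap:

```shell
# Standard MapReduce per-job memory properties. The heap (-Xmx) is set to
# ~80% of the container size so off-heap usage has headroom and YARN's
# physical-memory check doesn't kill the container (exit code 143).
CONTAINER_MB=8192
HEAP_MB=$(( CONTAINER_MB * 80 / 100 ))
echo "-Dmapreduce.map.memory.mb=${CONTAINER_MB} -Dmapreduce.map.java.opts=-Xmx${HEAP_MB}m"
```

These flags would be appended to the job invocation, affecting only that run instead of the whole cluster.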

So many sequence files were created within that one hour that I can't seem to query and export the pcap data for it.

My next resort is to look at the pcap backend code and write a tool to export individual sequence files into pcap, since I have a requirement to dump the pcap files for our two-day exercise.