Created 10-29-2014 12:43 AM
I upgraded Cloudera from 5.1 to 5.2 last week, and then tried to run the Cloudera example MapReduce job, but it failed.
The errors are below:
14/10/29 15:26:36 INFO mapreduce.Job: Job job_1414567123382_0001 failed with state FAILED due to: Application application_1414567123382_0001 failed 2 times due to AM Container for appattempt_1414567123382_0001_000002 exited with exitCode: -1000 due to: Application application_1414567123382_0001 initialization failed (exitCode=139) with output:
.Failing this attempt.. Failing the application.
14/10/29 15:26:36 INFO mapreduce.Job: Counters: 0
I then checked the MapReduce logs; the error there is:
java.lang.Exception: Container is not yet running. Current state is NEW
Here are the full test details:
[hdfs@datanode03 hadoop-0.20-mapreduce]$ hadoop jar hadoop-examples.jar pi 100 100
Number of Maps = 100
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Wrote input for Map #16
Wrote input for Map #17
Wrote input for Map #18
Wrote input for Map #19
Wrote input for Map #20
Wrote input for Map #21
Wrote input for Map #22
Wrote input for Map #23
Wrote input for Map #24
Wrote input for Map #25
Wrote input for Map #26
Wrote input for Map #27
Wrote input for Map #28
Wrote input for Map #29
Wrote input for Map #30
Wrote input for Map #31
Wrote input for Map #32
Wrote input for Map #33
Wrote input for Map #34
Wrote input for Map #35
Wrote input for Map #36
Wrote input for Map #37
Wrote input for Map #38
Wrote input for Map #39
Wrote input for Map #40
Wrote input for Map #41
Wrote input for Map #42
Wrote input for Map #43
Wrote input for Map #44
Wrote input for Map #45
Wrote input for Map #46
Wrote input for Map #47
Wrote input for Map #48
Wrote input for Map #49
Wrote input for Map #50
Wrote input for Map #51
Wrote input for Map #52
Wrote input for Map #53
Wrote input for Map #54
Wrote input for Map #55
Wrote input for Map #56
Wrote input for Map #57
Wrote input for Map #58
Wrote input for Map #59
Wrote input for Map #60
Wrote input for Map #61
Wrote input for Map #62
Wrote input for Map #63
Wrote input for Map #64
Wrote input for Map #65
Wrote input for Map #66
Wrote input for Map #67
Wrote input for Map #68
Wrote input for Map #69
Wrote input for Map #70
Wrote input for Map #71
Wrote input for Map #72
Wrote input for Map #73
Wrote input for Map #74
Wrote input for Map #75
Wrote input for Map #76
Wrote input for Map #77
Wrote input for Map #78
Wrote input for Map #79
Wrote input for Map #80
Wrote input for Map #81
Wrote input for Map #82
Wrote input for Map #83
Wrote input for Map #84
Wrote input for Map #85
Wrote input for Map #86
Wrote input for Map #87
Wrote input for Map #88
Wrote input for Map #89
Wrote input for Map #90
Wrote input for Map #91
Wrote input for Map #92
Wrote input for Map #93
Wrote input for Map #94
Wrote input for Map #95
Wrote input for Map #96
Wrote input for Map #97
Wrote input for Map #98
Wrote input for Map #99
Starting Job
14/10/29 15:40:23 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 47 for hdfs on ha-hdfs:cluster
14/10/29 15:40:23 INFO security.TokenCache: Got dt for hdfs://cluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 47 for hdfs)
14/10/29 15:40:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm398
14/10/29 15:40:53 INFO input.FileInputFormat: Total input paths to process : 100
14/10/29 15:40:53 INFO mapreduce.JobSubmitter: number of splits:100
14/10/29 15:40:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414567123382_0002
14/10/29 15:40:53 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 47 for hdfs)
14/10/29 15:40:55 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414567123382_0002 is still in NEW
14/10/29 15:40:58 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414567123382_0002 is still in NEW
14/10/29 15:41:00 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414567123382_0002 is still in NEW
14/10/29 15:41:02 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414567123382_0002 is still in NEW
14/10/29 15:41:04 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414567123382_0002 is still in NEW
14/10/29 15:41:04 INFO impl.YarnClientImpl: Submitted application application_1414567123382_0002
14/10/29 15:41:04 INFO mapreduce.Job: The url to track the job: http://namenode02.hadoop:8088/proxy/application_1414567123382_0002/
14/10/29 15:41:04 INFO mapreduce.Job: Running job: job_1414567123382_0002
14/10/29 15:41:28 INFO mapreduce.Job: Job job_1414567123382_0002 running in uber mode : false
14/10/29 15:41:28 INFO mapreduce.Job: map 0% reduce 0%
14/10/29 15:41:28 INFO mapreduce.Job: Job job_1414567123382_0002 failed with state FAILED due to: Application application_1414567123382_0002 failed 2 times due to AM Container for appattempt_1414567123382_0002_000002 exited with exitCode: -1000 due to: Application application_1414567123382_0002 initialization failed (exitCode=139) with output:
.Failing this attempt.. Failing the application.
14/10/29 15:41:28 INFO mapreduce.Job: Counters: 0
Job Finished in 75.652 seconds
java.io.FileNotFoundException: File does not exist: hdfs://cluster/user/hdfs/QuasiMonteCarlo_1414568398477_1366982913/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1083)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1075)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1075)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1749)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1773)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I have searched Google for these errors but found nothing. Can anyone offer advice? Thanks.
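For reference, these are the diagnostic steps I plan to try next (a sketch assuming the standard YARN CLI and default CDH log locations; paths may differ on your install):

```shell
# Fetch the aggregated container logs for the failed application, if log
# aggregation is enabled:
yarn logs -applicationId application_1414567123382_0002

# exitCode=139 is 128 + 11, i.e. the container initialization process was
# killed by SIGSEGV, so the real error is likely on the NodeManager side.
# Search the NodeManager logs on the node that ran the AM attempt:
grep -i "application_1414567123382_0002" /var/log/hadoop-yarn/*.log
```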