Explorer
Posts: 24
Registered: ‎05-22-2016

Simple MapReduce job fails on secured cluster


Hello,

I can't solve the issue below; please help me.

hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000



Number of Maps = 10
Samples per Map = 10000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/02/09 15:23:14 INFO client.RMProxy: Connecting to ResourceManager at kerberos001-hbase-db.testinfra-dev.com/10.127.86.45:8032
18/02/09 15:23:14 INFO hdfs.DFSClient: Created token for vincent: HDFS_DELEGATION_TOKEN owner=vincent@TEST.COM, renewer=yarn, realUser=, issueDate=1518157394188, maxDate=1518762194188, sequenceNumber=9, masterKeyId=117 on 10.127.86.45:8020
18/02/09 15:23:14 INFO security.TokenCache: Got dt for hdfs://kerberos001-hbase-db.testinfra-dev.com:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 10.127.86.45:8020, Ident: (token for vincent: HDFS_DELEGATION_TOKEN owner=vincent@TEST.COM, renewer=yarn, realUser=, issueDate=1518157394188, maxDate=1518762194188, sequenceNumber=9, masterKeyId=117)
18/02/09 15:23:14 INFO input.FileInputFormat: Total input paths to process : 10
18/02/09 15:23:14 INFO mapreduce.JobSubmitter: number of splits:10
18/02/09 15:23:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1518080546056_0001
18/02/09 15:23:14 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 10.127.86.45:8020, Ident: (token for vincent: HDFS_DELEGATION_TOKEN owner=vincent@TEST.COM, renewer=yarn, realUser=, issueDate=1518157394188, maxDate=1518762194188, sequenceNumber=9, masterKeyId=117)
18/02/09 15:23:15 INFO impl.YarnClientImpl: Submitted application application_1518080546056_0001
18/02/09 15:23:15 INFO mapreduce.Job: The url to track the job: http://kerberos001-hbase-db.testinfra-dev.com:8088/proxy/application_1518080546056_0001/
18/02/09 15:23:15 INFO mapreduce.Job: Running job: job_1518080546056_0001
18/02/09 15:23:27 INFO mapreduce.Job: Job job_1518080546056_0001 running in uber mode : false
18/02/09 15:23:27 INFO mapreduce.Job: map 0% reduce 0%
18/02/09 15:23:27 INFO mapreduce.Job: Job job_1518080546056_0001 failed with state FAILED due to: Application application_1518080546056_0001 failed 2 times due to AM Container for appattempt_1518080546056_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://kerberos001-hbase-db.testinfra-dev.com:8088/proxy/application_1518080546056_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1518080546056_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:373)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : run as user is vincent
main : requested yarn user is vincent
Writing to tmp file /dfs/yarn/nm/nmPrivate/application_1518080546056_0001/container_1518080546056_0001_02_000001/container_1518080546056_0001_02_000001.pid.tmp


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
18/02/09 15:23:27 INFO mapreduce.Job: Counters: 0
Job Finished in 13.602 seconds
java.io.FileNotFoundException: File does not exist: hdfs://kerberos001-hbase-db.testinfra-dev.com:8020/user/vincent/QuasiMonteCarlo_1518157392185_124408720/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1258)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1258)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1820)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1844)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
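The AM container exits with code 1 before any map task starts, and the client-side output above never shows the underlying exception. The usual next step is to pull the aggregated container logs for that application; a minimal sketch, assuming log aggregation is enabled and the command is run from a kinit'd gateway shell:

```shell
# Build and show the log-fetch command for the failed application.
# Running the resulting command on a cluster gateway host (with a valid
# Kerberos ticket) prints each attempt's stdout/stderr, including the AM's.
app_id="application_1518080546056_0001"
cmd="yarn logs -applicationId $app_id"
echo "$cmd"
```

The AM attempt's stderr usually names the real cause, e.g. a missing local account on the NodeManager host or a JVM startup failure.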

user : vincent

 

-- OS account

cat /etc/passwd |grep vincent
vincent:x:11002:11102::/home/vincent:/bin/bash

 

-- hdfs user directory

hdfs dfs -ls /user |grep vincent
drwxr-xr-x - vincent supergroup 0 2018-02-09 15:23 /user/vincent

 

klist
Ticket cache: FILE:/tmp/krb5cc_p29242
Default principal: vincent@TEST.COM

Valid starting Expires Service principal
02/09/18 15:22:57 02/10/18 15:22:57 krbtgt/TEST.COM@TEST.COM
renew until 02/16/18 15:22:57


Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached

 

New Contributor
Posts: 5
Registered: ‎01-27-2018

Re: Simple MapReduce job fails on secured cluster

Did you create the vincent OS user on every NodeManager host as well? With the LinuxContainerExecutor (visible in your stack trace), the local account must exist on each node that launches containers, not just on the gateway.
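A quick way to check is to test for the account on each node; this sketch is illustrative (the helper name and the example hostnames are not from the thread):

```shell
# check_user prints the account's UID, or "missing" if the account does not exist.
check_user() {
  id -u "$1" 2>/dev/null || echo "missing"
}

# Run locally on each node, or fan out over ssh against your NodeManager
# hosts, e.g.:
#   for h in nm01 nm02 nm03; do ssh "$h" "id -u vincent || echo missing on $h"; done
check_user root      # prints 0 on Linux
check_user vincent
```

If the account is missing on any node, create it there with the same UID (11002 in your /etc/passwd output) so container-executor can run containers as that user.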