
I am trying to run a job under the YARN Capacity Scheduler. I have set Maximum Applications = 10000, but job submission still fails with the error below.
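For reference, the limit I mean is the standard Capacity Scheduler property in capacity-scheduler.xml. A minimal sketch of how it would appear (the per-queue property name is inferred from the root.default queue named in the error below, not copied from my actual file):

<!-- cluster-wide cap on concurrently running/pending applications -->
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>10000</value>
</property>

<!-- optional per-queue override; when it is unset, the queue's limit is
     derived as maximum-applications * the queue's absolute capacity -->
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-applications</name>
  <value>10000</value>
</property>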


[sales01@namenode2 hadoop-mapreduce]$ yarn jar hadoop-mapreduce-examples.jar pi 2 1

Number of Maps = 2

Samples per Map = 1

Wrote input for Map #0
Wrote input for Map #1

Starting Job

18/06/26 23:19:28 INFO impl.TimelineClientImpl: Timeline service address: http://datanode2.hdp.com:8188/ws/v1/timeline/
18/06/26 23:19:28 INFO client.RMProxy: Connecting to ResourceManager at namenode1.hdp.com/192.168.1.191:8050
18/06/26 23:19:29 INFO client.AHSProxy: Connecting to Application History server at datanode2.hdp.com/192.168.1.194:10200
18/06/26 23:19:29 INFO input.FileInputFormat: Total input paths to process : 2
18/06/26 23:19:29 INFO mapreduce.JobSubmitter: number of splits:2
18/06/26 23:19:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1530063650065_0028
18/06/26 23:19:30 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/sales01/.staging/job_1530063650065_0028
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1530063650065_0028 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.default already has 0 applications, cannot accept submission of application: application_1530063650065_0028
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1530063650065_0028 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.default already has 0 applications, cannot accept submission of application: application_1530063650065_0028
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:276)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
    ... 25 more
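As far as I can tell, "Queue root.default already has 0 applications, cannot accept submission" means the queue's effective maximum-applications resolved to 0: when no per-queue override is set, the Capacity Scheduler derives a leaf queue's limit as maximum-applications times the queue's absolute capacity, so the cluster-wide 10000 does not help if root.default's capacity is 0. One way to inspect what the queue currently reports (a sketch; yarn queue -status prints the queue's state and capacities, though not the max-applications limit itself):

[sales01@namenode2 hadoop-mapreduce]$ yarn queue -status default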
