Archives of Support Questions (Read Only)

This board is archived and read-only for historical reference. Information and links may no longer be available or relevant. To ask a new question, please post a new topic on the appropriate active board.

Service check failing for YARN & MapReduce...

Expert Contributor

Please help me with the following service check errors:

YARN service check:

stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 146, in <module>
    ServiceCheck().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 108, in service_check
    user=params.smokeuser,
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000' returned 1.
17/01/28 16:40:36 INFO impl.TimelineClientImpl: Timeline service address: http://str:8188/ws/v1/timeline/
17/01/28 16:40:36 INFO distributedshell.Client: Initializing Client
17/01/28 16:40:36 INFO distributedshell.Client: Running Client
17/01/28 16:40:36 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=27
17/01/28 16:40:36 INFO distributedshell.Client: Got Cluster node info from ASM
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str31:45454, nodeAddressstr31:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str38:45454, nodeAddressstr38:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str36:45454, nodeAddressstr36:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str33:45454, nodeAddressstr33:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str35:45454, nodeAddressstr35:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str41:45454, nodeAddressstr41:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str28.:45454, nodeAddressstr28:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str47:45454, nodeAddressstr47:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str22:45454, nodeAddressstr22:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str26:45454, nodeAddressstr26:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str24:45454, nodeAddressstr:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str43:45454, nodeAddressstr43:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str45:45454, nodeAddressstr45:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str30:45454, nodeAddressstr30:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str18:45454, nodeAddressstr18:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str39:45454, nodeAddressstr39:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str37:45454, nodeAddressstr37:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str32:45454, nodeAddressstr32:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str34:45454, nodeAddressstr34:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str29:45454, nodeAddressstr29:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str40:45454, nodeAddressstr40:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str48:45454, nodeAddressstr48:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str23:45454, nodeAddressstr23:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str27:45454, nodeAddressstr27:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str42:45454, nodeAddressstr42:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str44:45454, nodeAddressstr44:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str25:45454, nodeAddressstr25:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 FATAL distributedshell.Client: Error running Client
java.lang.NullPointerException
        at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:462)
        at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:215)

stdout:

2017-01-28 16:40:34,859 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:34,881 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:34,884 - checked_call['yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000'] {'path': '/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', 'user': 'ambari-qa'}
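
For reference, the failing smoke test can be reproduced outside Ambari by running the same distributed-shell command as the smoke user. This is a minimal sketch, assuming the HDP client path and the ambari-qa smoke user shown in the log above (Ambari's own invocation passes -jar twice; passing it once is sufficient for a manual run):

    # Re-run the YARN service check command by hand, as the smoke user
    su - ambari-qa -c 'yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
        -shell_command ls \
        -num_containers 1 \
        -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
        -timeout 300000'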

MapReduce service check:

stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 160, in <module>
    MapReduce2ServiceCheck().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 149, in service_check
    logoutput=True
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/execute_hadoop.py", line 54, in action_run
    environment = self.resource.environment,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'hadoop --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput' returned 255.
WARNING: Use "yarn jar" to launch YARN applications.
17/01/28 16:40:31 INFO impl.TimelineClientImpl: Timeline service address: http://str20:8188/ws/v1/timeline/
17/01/28 16:40:32 INFO input.FileInputFormat: Total input paths to process : 1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: number of splits:1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485551260013_0334
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/ambari-qa/.staging/job_1485551260013_0334
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
        at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
        ... 22 more

stdout:

2017-01-28 16:40:28,916 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:28,938 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:28,943 - HdfsResource['/user/ambari-qa/mapredsmokeoutput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:28,947 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpkRpdLg 2>/tmp/tmp36qEa8''] {'quiet': False}
2017-01-28 16:40:28,991 - call returned (0, '')
2017-01-28 16:40:28,992 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpW2xjW8 2>/tmp/tmpu_8JFa''] {'quiet': False}
2017-01-28 16:40:29,034 - call returned (0, '')
2017-01-28 16:40:29,034 - NameNode HA states: active_namenodes = [('nn1', 'str19t:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,035 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpHrfrjU 2>/tmp/tmpq5wo5S''] {'quiet': False}
2017-01-28 16:40:29,077 - call returned (0, '')
2017-01-28 16:40:29,078 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpzh6PrW 2>/tmp/tmplHxK7m''] {'quiet': False}
2017-01-28 16:40:29,120 - call returned (0, '')
2017-01-28 16:40:29,120 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,121 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://str19:50070/webhdfs/v1/user/ambari-qa/mapredsmokeoutput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpjeo93v 2>/tmp/tmpQINfSx''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,165 - call returned (0, '')
2017-01-28 16:40:29,166 - HdfsResource['/user/ambari-qa/mapredsmokeinput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:29,167 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpuGTeEA 2>/tmp/tmp5J5wJ8''] {'quiet': False}
2017-01-28 16:40:29,209 - call returned (0, '')
2017-01-28 16:40:29,209 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpdlX8Sg 2>/tmp/tmpjkEfUU''] {'quiet': False}
2017-01-28 16:40:29,251 - call returned (0, '')
2017-01-28 16:40:29,252 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.:50070')], unknown_namenodes = []
2017-01-28 16:40:29,252 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpyXcD4Z 2>/tmp/tmpZHnHLK''] {'quiet': False}
2017-01-28 16:40:29,296 - call returned (0, '')
2017-01-28 16:40:29,296 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpqHY4XC 2>/tmp/tmpokKOzY''] {'quiet': False}
2017-01-28 16:40:29,340 - call returned (0, '')
2017-01-28 16:40:29,340 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,341 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://str19.:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpaGdxBX 2>/tmp/tmpEyHJra''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,387 - call returned (0, '')
2017-01-28 16:40:29,388 - Creating new file /user/ambari-qa/mapredsmokeinput in DFS
2017-01-28 16:40:29,389 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /etc/passwd '"'"'http://str19.:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmp9IbyUH 2>/tmp/tmpgbRlCJ''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,828 - call returned (0, '')
2017-01-28 16:40:29,828 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:29,829 - ExecuteHadoop['jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'bin_dir': '/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-client/bin', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'logoutput': True, 'try_sleep': 5, 'tries': 1, 'user': 'ambari-qa'}
2017-01-28 16:40:29,830 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'logoutput': True, 'try_sleep': 5, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': ['/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-client/bin']}
WARNING: Use "yarn jar" to launch YARN applications.
17/01/28 16:40:31 INFO impl.TimelineClientImpl: Timeline service address: http://str20.:8188/ws/v1/timeline/
17/01/28 16:40:32 INFO input.FileInputFormat: Total input paths to process : 1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: number of splits:1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485551260013_0334
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/ambari-qa/.staging/job_1485551260013_0334
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
        at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
        ... 22 more
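
Note the root cause at the bottom of the MapReduce trace: the job was "submitted by user ambari-qa to unknown queue: default", meaning the ResourceManager has no queue named default. This is also the likely reason the distributed-shell client above died with a NullPointerException while looking up queue information. A quick way to see which queues the scheduler actually knows about is sketched below; rm-host is a placeholder for the active ResourceManager host, and 8088 is the default RM web port:

    # Dump the scheduler state, including every configured queue
    curl -s 'http://rm-host:8088/ws/v1/cluster/scheduler'

    # Or list the queues from any client node
    mapred queue -list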

1 ACCEPTED SOLUTION

Expert Contributor

FYI: there was no "default" queue defined in the Capacity Scheduler. I just added it, and both service checks passed.
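
For anyone landing here later, a minimal sketch of what defining that queue looks like. In Ambari this goes into the Capacity Scheduler text area under YARN > Configs; the property names are the standard Capacity Scheduler ones, but the values below are illustrative and should be adapted to your cluster:

    yarn.scheduler.capacity.root.queues=default
    yarn.scheduler.capacity.root.default.capacity=100
    yarn.scheduler.capacity.root.default.maximum-capacity=100
    yarn.scheduler.capacity.root.default.state=RUNNING
    yarn.scheduler.capacity.root.default.user-limit-factor=1
    yarn.scheduler.capacity.root.default.acl_submit_applications=*

After saving, the ResourceManager can pick up the new queue without a full restart:

    yarn rmadmin -refreshQueues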

