Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4444 | 03-04-2018 08:18 PM |
| | 4333 | 09-19-2017 04:01 PM |
| | 1809 | 01-28-2017 10:31 PM |
| | 977 | 12-08-2016 03:04 PM |
02-07-2017
04:51 PM
@smohanty
@apappu I just checked: Ambari 1.6.0 blueprints don't support HA configuration; it is only supported from Ambari 2.0 onward... Is there any other way around this?
02-06-2017
10:51 PM
How can I copy the configurations of a stable cluster and apply them to a new cluster I am building? The data and jobs are the same on both clusters. I am upgrading from HDP 2.1 to 2.5.3 and was wondering whether I should do a clean install or a series of upgrade hops, which is risky. What are your experiences? If I do a clean install, I want to copy the configurations and apply them to the new cluster. How practical is this? Are there any tools I can use?
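One practical approach is to export the stable cluster's full configuration as an Ambari blueprint and re-apply it on the new cluster. The sketch below lists the steps; the Ambari hosts, cluster names, and admin credentials are placeholders, not values from this thread:

```shell
# Placeholders for illustration only -- substitute your own values.
AMBARI_OLD="http://old-ambari-host:8080/api/v1"
AMBARI_NEW="http://new-ambari-host:8080/api/v1"

# 1. Export the stable cluster's configuration as a blueprint JSON:
#      curl -u admin:admin -H 'X-Requested-By: ambari' \
#        "$AMBARI_OLD/clusters/OLDCLUSTER?format=blueprint" -o blueprint.json
# 2. Register the blueprint on the new Ambari server:
#      curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
#        -d @blueprint.json "$AMBARI_NEW/blueprints/migrated-bp"
# 3. Create the new cluster from it with a host-mapping template:
#      curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
#        -d @hostmapping.json "$AMBARI_NEW/clusters/NEWCLUSTER"
echo "blueprint export/register/create steps listed above"
```

The `?format=blueprint` export is a documented Ambari REST call. Review the exported JSON before applying it: host-specific values (hostnames, memory settings) and stack-version properties from HDP 2.1 may not carry over cleanly to 2.5.3.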
Labels: Hortonworks Data Platform (HDP)
02-06-2017
10:48 PM
What is the best practice for copying data and Hive managed and external tables from one cluster to another? Are there good tools to automate the process, or scripts to validate the data?
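A common pattern is DistCp for raw HDFS data plus Hive's EXPORT/IMPORT for tables. This is only a sketch; the NameNode addresses, database, table, and paths are placeholders:

```shell
# Placeholders -- substitute your own NameNode addresses and paths.
SRC="hdfs://old-nn:8020"
DST="hdfs://new-nn:8020"

# Bulk-copy raw HDFS data between clusters:
#      hadoop distcp "$SRC/apps/data" "$DST/apps/data"
# (consider -update and -skipcrccheck when the two clusters run
#  different HDP versions, since file checksums may differ)

# For Hive tables, EXPORT writes the data plus metadata so IMPORT
# can recreate the table on the target cluster:
#      hive -e "EXPORT TABLE mydb.mytable TO '/tmp/export/mytable';"
#      hadoop distcp "$SRC/tmp/export/mytable" "$DST/tmp/export/mytable"
#      hive -e "IMPORT TABLE mydb.mytable FROM '/tmp/export/mytable';"

# Cheap validation: compare row counts on both clusters:
#      hive -e "SELECT COUNT(*) FROM mydb.mytable;"
echo "distcp + EXPORT/IMPORT sketch above"
```

For external tables, only the metadata needs recreating once the underlying files are copied; the EXPORT/IMPORT round trip is mainly useful for managed tables.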
Labels: Apache Hadoop
02-06-2017
08:13 PM
@Predrag Minovic I understand what you are saying, but how can I change this so it contacts the active RM first? And why did this work in 2.4.2 but not in 2.5.3? There must be some parameter change. Also, every time it contacts the resource manager it wastes time checking which one is active.
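For reference, the client-side RM HA behavior is driven by yarn-site.xml: with ConfiguredRMFailoverProxyProvider the client simply tries the IDs in `yarn.resourcemanager.ha.rm-ids` in order and fails over on a connect error, so the listed order decides which RM is contacted first. The snippet below is illustrative; `rm1,rm2` is the usual Ambari default, not a value taken from this thread:

```xml
<!-- yarn-site.xml: client-side ResourceManager HA settings (illustrative) -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Clients try these IDs in order, failing over on connection errors -->
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>
```

You can check which RM is currently active with `yarn rmadmin -getServiceState rm1` (and `rm2`).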
02-06-2017
08:13 PM
[root@jtldjob ~]# yarn application -list
17/02/04 10:33:34 INFO impl.TimelineClientImpl: Timeline service address: http://str20:8188/ws/v1/timeline/
17/02/04 10:33:34 INFO client.AHSProxy: Connecting to Application History server at str20/10.5.168.121:10200
17/02/04 10:33:35 WARN ipc.Client: Failed to connect to server: str20:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy17.getApplications(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy18.getApplications(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:484)
at org.apache.hadoop.yarn.client.cli.ApplicationCLI.listApplications(ApplicationCLI.java:401)
at org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:207)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:83)
17/02/04 10:33:35 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):1
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1485795502013_1714 HIVE-8dc9a187-2c8c-44b3-92a9-eec0662e524b TEZ talend ServAssure RUNNING UNDEFINED 77.83% http://str44/ui/
02-02-2017
03:34 PM
@Madan Gudi I am facing the same issue after upgrading from 2.4.2 to 2.5.3. Did anybody find a solution?
02-02-2017
03:34 PM
@Donghoon Kang I am facing the same issue after upgrading from 2.4.2 to 2.5.3. Did anybody find a solution?
01-28-2017
10:31 PM
FYI: there was no default queue in the capacity scheduler. I just added it, and it worked.
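For anyone hitting the same "unknown queue: default" error during the service checks, the fix corresponds to restoring a root.default queue in the capacity-scheduler configuration. These are illustrative values for a single queue taking 100% of capacity:

```
yarn.scheduler.capacity.root.queues=default
yarn.scheduler.capacity.root.default.capacity=100
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.acl_submit_applications=*
```

After changing the queue definition, apply it with `yarn rmadmin -refreshQueues` or restart the ResourceManager from Ambari.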
01-28-2017
10:15 PM
Please help me with the errors in the service checks. YARN service check stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 146, in <module>
ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 108, in service_check
user=params.smokeuser,
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000' returned 1. 17/01/28 16:40:36 INFO impl.TimelineClientImpl: Timeline service address: http://str:8188/ws/v1/timeline/
17/01/28 16:40:36 INFO distributedshell.Client: Initializing Client
17/01/28 16:40:36 INFO distributedshell.Client: Running Client
17/01/28 16:40:36 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=27
17/01/28 16:40:36 INFO distributedshell.Client: Got Cluster node info from ASM
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str31:45454, nodeAddressstr31:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str38:45454, nodeAddressstr38:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str36:45454, nodeAddressstr36:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str33:45454, nodeAddressstr33:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str35:45454, nodeAddressstr35:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str41:45454, nodeAddressstr41:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str28.:45454, nodeAddressstr28:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str47:45454, nodeAddressstr47:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str22:45454, nodeAddressstr22:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str26:45454, nodeAddressstr26:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str24:45454, nodeAddressstr:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str43:45454, nodeAddressstr43:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str45:45454, nodeAddressstr45:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str30:45454, nodeAddressstr30:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str18:45454, nodeAddressstr18:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str39:45454, nodeAddressstr39:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str37:45454, nodeAddressstr37:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str32:45454, nodeAddressstr32:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str34:45454, nodeAddressstr34:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str29:45454, nodeAddressstr29:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str40:45454, nodeAddressstr40:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str48:45454, nodeAddressstr48:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str23:45454, nodeAddressstr23:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str27:45454, nodeAddressstr27:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str42:45454, nodeAddressstr42:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str44:45454, nodeAddressstr44:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 INFO distributedshell.Client: Got node report from ASM for, nodeId=str25:45454, nodeAddressstr25:8042, nodeRackName/default-rack, nodeNumContainers0
17/01/28 16:40:36 FATAL distributedshell.Client: Error running Client
java.lang.NullPointerException
at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:462)
at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:215)
stdout:
2017-01-28 16:40:34,859 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:34,881 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:34,884 - checked_call['yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000'] {'path': '/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', 'user': 'ambari-qa'}
MapReduce service check stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 160, in <module>
MapReduce2ServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 149, in service_check
logoutput=True
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/execute_hadoop.py", line 54, in action_run
environment = self.resource.environment,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'hadoop --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput' returned 255. WARNING: Use "yarn jar" to launch YARN applications.
17/01/28 16:40:31 INFO impl.TimelineClientImpl: Timeline service address: http://str20:8188/ws/v1/timeline/
17/01/28 16:40:32 INFO input.FileInputFormat: Total input paths to process : 1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: number of splits:1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485551260013_0334
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/ambari-qa/.staging/job_1485551260013_0334
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
... 22 more
stdout:
2017-01-28 16:40:28,916 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:28,938 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-28 16:40:28,943 - HdfsResource['/user/ambari-qa/mapredsmokeoutput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:28,947 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpkRpdLg 2>/tmp/tmp36qEa8''] {'quiet': False}
2017-01-28 16:40:28,991 - call returned (0, '')
2017-01-28 16:40:28,992 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpW2xjW8 2>/tmp/tmpu_8JFa''] {'quiet': False}
2017-01-28 16:40:29,034 - call returned (0, '')
2017-01-28 16:40:29,034 - NameNode HA states: active_namenodes = [('nn1', 'str19t:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,035 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpHrfrjU 2>/tmp/tmpq5wo5S''] {'quiet': False}
2017-01-28 16:40:29,077 - call returned (0, '')
2017-01-28 16:40:29,078 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpzh6PrW 2>/tmp/tmplHxK7m''] {'quiet': False}
2017-01-28 16:40:29,120 - call returned (0, '')
2017-01-28 16:40:29,120 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,121 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://str19:50070/webhdfs/v1/user/ambari-qa/mapredsmokeoutput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpjeo93v 2>/tmp/tmpQINfSx''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,165 - call returned (0, '')
2017-01-28 16:40:29,166 - HdfsResource['/user/ambari-qa/mapredsmokeinput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:29,167 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpuGTeEA 2>/tmp/tmp5J5wJ8''] {'quiet': False}
2017-01-28 16:40:29,209 - call returned (0, '')
2017-01-28 16:40:29,209 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpdlX8Sg 2>/tmp/tmpjkEfUU''] {'quiet': False}
2017-01-28 16:40:29,251 - call returned (0, '')
2017-01-28 16:40:29,252 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.:50070')], unknown_namenodes = []
2017-01-28 16:40:29,252 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str19.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpyXcD4Z 2>/tmp/tmpZHnHLK''] {'quiet': False}
2017-01-28 16:40:29,296 - call returned (0, '')
2017-01-28 16:40:29,296 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -s '"'"'http://str20.t:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'"'"' 1>/tmp/tmpqHY4XC 2>/tmp/tmpokKOzY''] {'quiet': False}
2017-01-28 16:40:29,340 - call returned (0, '')
2017-01-28 16:40:29,340 - NameNode HA states: active_namenodes = [('nn1', 'str19.:50070')], standby_namenodes = [('nn2', 'str20.t:50070')], unknown_namenodes = []
2017-01-28 16:40:29,341 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://str19.:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpaGdxBX 2>/tmp/tmpEyHJra''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,387 - call returned (0, '')
2017-01-28 16:40:29,388 - Creating new file /user/ambari-qa/mapredsmokeinput in DFS
2017-01-28 16:40:29,389 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /etc/passwd '"'"'http://str19.:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmp9IbyUH 2>/tmp/tmpgbRlCJ''] {'logoutput': None, 'quiet': False}
2017-01-28 16:40:29,828 - call returned (0, '')
2017-01-28 16:40:29,828 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://nochdpprod02', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-01-28 16:40:29,829 - ExecuteHadoop['jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'bin_dir': '/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-client/bin', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'logoutput': True, 'try_sleep': 5, 'tries': 1, 'user': 'ambari-qa'}
2017-01-28 16:40:29,830 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'logoutput': True, 'try_sleep': 5, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': ['/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-client/bin']}
WARNING: Use "yarn jar" to launch YARN applications.
17/01/28 16:40:31 INFO impl.TimelineClientImpl: Timeline service address: http://str20.:8188/ws/v1/timeline/
17/01/28 16:40:32 INFO input.FileInputFormat: Total input paths to process : 1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: number of splits:1
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485551260013_0334
17/01/28 16:40:32 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/ambari-qa/.staging/job_1485551260013_0334
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485551260013_0334 to YARN : Application application_1485551260013_0334 submitted by user ambari-qa to unknown queue: default
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
... 22 more
Labels: Hortonworks Data Platform (HDP)
01-25-2017
03:26 PM
Yes, I applied those parameter settings and all the alerts were gone within a few minutes. Thanks, all.