Member since: 03-08-2018
Posts: 23
Kudos Received: 0
Solutions: 0
05-15-2019
04:58 PM
2) hive> Insert into new_table partition (new_partition1, new_partition2, new_partition3)
      > select col1,
      >        col2,
      >        col3,
      >        colX,
      >        new_partition1,
      >        new_partition2,
      >        new_partition3
      > from old_table;

Query ID = hdfs_20190514114256_81a413f7-49eb-4460-a16f-4bef38f7954a
Total jobs = 1
Launching Job 1 out of 1
Tez session was closed. Reopening...
Session re-established.
Status: Running (Executing on YARN cluster with App id application_1538560024513_0256)

--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1                 FAILED     10          0        0       10       4       0
Reducer 2             KILLED    302          0        0      302       0       0
--------------------------------------------------------------------------------
VERTICES: 00/02  [>>--------------------------] 0%   ELAPSED TIME: 5.35 s
--------------------------------------------------------------------------------

Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1538560024513_0256_1_00, diagnostics=[Task failed, taskId=task_1538560024513_0256_1_00_000002, diagnostics=[
TaskAttempt 0 failed, info=[Container launch failed for container_e486_1538560024513_0256_02_000002 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired. current time is 1557835643652 found 1557834788261
Note: System times on machines may be out of sync. Check system time and time zones.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.tez.dag.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:180)
    at org.apache.tez.dag.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:384)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
],
TaskAttempt 1 failed, info=[Container launch failed for container_e486_1538560024513_0256_02_000009 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired. current time is 1557835644617 found 1557834789325
Note: System times on machines may be out of sync. Check system time and time zones.
    (same stack trace as above)
],
TaskAttempt 2 failed, info=[Container launch failed for container_e486_1538560024513_0256_02_000010 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired. current time is 1557835645700 found 1557834790432
Note: System times on machines may be out of sync. Check system time and time zones.
    (same stack trace as above)
],
TaskAttempt 3 failed, info=[Container launch failed for container_e486_1538560024513_0256_02_000013 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired. current time is 1557835646991 found 1557834791443
Note: System times on machines may be out of sync. Check system time and time zones.
    (same stack trace as above)
]],
Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:9, Vertex vertex_1538560024513_0256_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1538560024513_0256_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:302, Vertex vertex_1538560024513_0256_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1538560024513_0256_1_00 (same diagnostics as above). DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
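The log's own hint ("System times on machines may be out of sync. Check system time and time zones.") points at clock skew between nodes; the roughly 14-minute gap between the two timestamps in the error is consistent with that. A quick check, assuming SSH access to each node and ntpd as the time source (hostnames below are placeholders):

    # Compare wall-clock time and timezone on every node (hostnames are examples)
    for host in node1 node2 node3; do
      ssh "$host" 'echo "$(hostname): $(date "+%Y-%m-%d %H:%M:%S %Z")"'
    done

    # Confirm each node is actually synchronized with NTP
    for host in node1 node2 node3; do
      ssh "$host" 'ntpstat || ntpq -p'
    done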
05-15-2019
08:59 AM
Hi @Shu, we tried the approach and got the following error:

1) hive> Insert into new_table partition (new_partition1, new_partition2, new_partition3)
      > select col1,
      >        col2,
      >        col3,
      >        colX,
      >        new_partition1,
      >        new_partition2,
      >        new_partition3
      > from old_table
      > where createdate='2016-11-09';
FAILED: ArrayIndexOutOfBoundsException -1
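For reference, a dynamic-partition insert of this shape normally needs dynamic partitioning enabled in nonstrict mode, and the partition columns have to be the last columns of the SELECT, in the same order as the PARTITION clause. A sketch using the thread's placeholder names (not a confirmed fix for the ArrayIndexOutOfBoundsException):

    hive -e "
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    SET hive.exec.max.dynamic.partitions=10000;
    SET hive.exec.max.dynamic.partitions.pernode=1000;

    INSERT INTO TABLE new_table PARTITION (new_partition1, new_partition2, new_partition3)
    SELECT col1, col2, col3, colX,
           new_partition1, new_partition2, new_partition3
    FROM old_table
    WHERE createdate = '2016-11-09';
    "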
05-13-2019
10:53 AM
We have a table in Hive that is partitioned on one column and holds over 2 TB of data. We want to create a new table partitioned on three columns and load the data from the old table into the new one. What approach should we take? P.S. We are using HDP 2.3.
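One common approach is to create the new table partitioned on the three columns and load it with a dynamic-partition INSERT ... SELECT, letting Hive derive the partitions from the data; for ~2 TB it is usually safer to load one old partition at a time rather than in a single statement. A minimal sketch with made-up column names and types, since the real schema isn't shown:

    hive -e "
    CREATE TABLE new_table (
      col1 STRING,
      col2 STRING,
      colX STRING
    )
    PARTITIONED BY (new_partition1 STRING, new_partition2 STRING, new_partition3 STRING)
    STORED AS ORC;

    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    -- partition columns go last in the SELECT, in the same order as the PARTITION clause
    INSERT INTO TABLE new_table PARTITION (new_partition1, new_partition2, new_partition3)
    SELECT col1, col2, colX,
           new_partition1, new_partition2, new_partition3
    FROM old_table
    WHERE old_partition_col = 'one-old-partition-value';
    "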
Tags: Data Processing, Hive
Labels: Apache Hive
08-01-2018
07:53 AM
Hi, I have configured Hue on my HDP 2.3 cluster; it is not managed by Ambari. The port configured in hue.ini is:

# Webserver listens on this address and port
http_host=0.0.0.0
http_port=443

Whenever I put ipaddress:443 in the browser, it redirects to just the IP address, and Hue is also reachable by entering the IP address alone. The cluster is hosted on AWS. I want the Hue web UI to be accessible only through the specified port, not via the bare IP address. Any help will be appreciated.
06-11-2018
06:16 AM
Hi, thanks for the response. I have already added the jar in hive-site.xml under the hive.aux.jars.path property and restarted both the Hive shell and HiveServer2, yet the error stays the same: "FAILED: SemanticException Cannot find class 'com.mongodb.hadoop.hive.MongoStorageHandler'".
06-08-2018
10:18 AM
I have added the mongo-hadoop-hive, mongo-hadoop-core, and MongoDB Java driver jars to Hive, yet it still throws "FAILED: SemanticException Cannot find class 'com.mongodb.hadoop.hive.MongoStorageHandler'". Can anyone help?
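For what it's worth, a minimal way to exercise the handler from a single Hive session (the jar paths, table schema, and mongodb URI below are placeholders, not the actual setup): register the three jars in-session with ADD JAR and create a table STORED BY the handler.

    hive -e "
    ADD JAR /path/to/mongo-hadoop-core.jar;
    ADD JAR /path/to/mongo-hadoop-hive.jar;
    ADD JAR /path/to/mongo-java-driver.jar;

    CREATE EXTERNAL TABLE mongo_test (id STRING, name STRING)
    STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
    TBLPROPERTIES ('mongo.uri' = 'mongodb://mongo-host:27017/testdb.testcoll');
    "

If this works where the hive-site.xml route does not, the jars are probably not being picked up on HiveServer2's aux classpath.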
Labels: Apache Hadoop
04-06-2018
12:37 PM
Check the CPU cores and memory (RAM) assigned to the individual processes. Try rearranging those allocations and observe the performance difference.
04-05-2018
08:17 AM
It worked. Thanks a ton, Jay Kumar SenSharma. I appreciate your patience.
04-05-2018
07:33 AM
The ambari-metrics-monitor has the following log:

2018-04-05 00:07:04,749 [INFO] controller.py:110 - Adding event to cache, : {u'metrics': [], u'collect_every': u'15'}
2018-04-05 00:07:04,749 [INFO] main.py:65 - Starting Server RPC Thread: /usr/lib/python2.6/site-packages/resource_monitoring/main.py start
2018-04-05 00:07:04,749 [INFO] controller.py:57 - Running Controller thread: Thread-1
2018-04-05 00:07:04,750 [INFO] emitter.py:45 - Running Emitter thread: Thread-2
2018-04-05 00:07:04,750 [INFO] emitter.py:65 - Nothing to emit, resume waiting.
2018-04-05 00:08:04,752 [WARNING] emitter.py:74 - Error sending metrics to server. 'NoneType' object has no attribute 'strip'
2018-04-05 00:08:04,752 [WARNING] emitter.py:80 - Retrying after 5 ...
04-05-2018
06:37 AM
Jay Kumar SenSharma, hi Jay. I removed the service and added it again through the Ambari web UI. One of the hosts still does not generate any metrics; the other host displays the graphs and everything. What might be the problem here?
04-04-2018
02:22 PM
Jay Kumar SenSharma, there is no Delete option even after the service is stopped, just Restart and Move.
04-04-2018
11:38 AM
Yes, of course. I do not have much AMS data, and I think deleting it completely and reinstalling it would be the better option. Not sure how to do that, though.
04-04-2018
11:25 AM
Hi, I tried the curl command for removing it and it returned the following error:

{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Host Component cannot be removed, clusterName=Qc, serviceName=AMBARI_METRICS, componentName=METRICS_MONITOR, hostname=itxcqchdp01.catmdev.com, request={ clusterName=CardtronicsQc, serviceName=AMBARI_METRICS, componentName=METRICS_MONITOR, hostname=itxcqchdp01.catmdev.com, desiredState=null, state=null, desiredStackId=null, staleConfig=null, adminState=null }"
}

The problem seems to be on the main host itself.
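For reference, the REST sequence that component removal usually requires (AMBARI_HOST, CLUSTER_NAME, and the admin credentials below are placeholders; the host component generally has to be stopped, i.e. moved to the INSTALLED state, before Ambari accepts the DELETE):

    # 1. Stop the monitor on that host (request the INSTALLED state)
    curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
      -d '{"HostRoles": {"state": "INSTALLED"}}' \
      "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/itxcqchdp01.catmdev.com/host_components/METRICS_MONITOR"

    # 2. Delete the host component once it is stopped
    curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
      "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/itxcqchdp01.catmdev.com/host_components/METRICS_MONITOR"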
04-04-2018
10:47 AM
Hi Jay, the Delete option for the AMS monitor is disabled in the Ambari UI. How should I go about deleting it and reinstalling it again?
04-03-2018
02:21 PM
One other thing: the metrics data is unavailable for only one master host. For the other host the graphs are visible.
04-03-2018
01:55 PM
I tried the command you mentioned and it returned the following:

ambari-metrics-collector-2.1.0-1470.x86_64
ambari-metrics-monitor-2.1.0-1470.x86_64
ambari-metrics-hadoop-sink-2.1.0-1470.x86_64
ambari-server-2.2.2.0-460.x86_64
ambari-agent-2.2.2.0-460.x86_64

And if I had to remove the process and try installing it again, how would I go about doing it?
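A rough sketch of a manual remove/reinstall on the affected host, assuming root shell access and that the Ambari repository providing ambari-metrics-monitor is configured in /etc/yum.repos.d:

    # Confirm what is installed today
    rpm -qa | grep ambari-metrics

    # Remove and reinstall only the monitor package
    yum -y remove ambari-metrics-monitor
    yum -y install ambari-metrics-monitor

    # Afterwards, reinstall/start the Metrics Monitor component for this host from the Ambari UI

Reinstalling the METRICS_MONITOR component from the Ambari UI afterwards re-registers it with the server.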
04-03-2018
01:37 PM
Hi, so the ambari-metrics-monitor.ini file seems to be pointing at the right host. Here is the log from ambari-metrics-monitor.out:

2018-04-03 07:10:43,905 [INFO] host_info.py:294 - hostname_script: None
2018-04-03 07:10:43,970 [INFO] host_info.py:306 - Cached hostname: itxcqchdp01.catmdev.com
2018-04-03 07:10:43,970 [INFO] controller.py:102 - Adding event to cache, all : {u'metrics': [{u'value_threshold': u'128', u'name': u'bytes_out'}], u'collect_every': u'10'}
2018-04-03 07:10:43,970 [INFO] controller.py:110 - Adding event to cache, : {u'metrics': [], u'collect_every': u'15'}
2018-04-03 07:10:43,970 [INFO] main.py:65 - Starting Server RPC Thread: /usr/lib/python2.6/site-packages/resource_monitoring/main.py start
2018-04-03 07:10:43,971 [INFO] controller.py:57 - Running Controller thread: Thread-1
2018-04-03 07:10:43,971 [INFO] emitter.py:45 - Running Emitter thread: Thread-2
2018-04-03 07:10:43,971 [INFO] emitter.py:65 - Nothing to emit, resume waiting.
2018-04-03 07:11:43,973 [WARNING] emitter.py:74 - Error sending metrics to server. 'NoneType' object has no attribute 'strip'
2018-04-03 07:11:43,973 [WARNING] emitter.py:80 - Retrying after 5 ...
2018-04-03 07:11:48,974 [WARNING] emitter.py:74 - Error sending metrics to server. 'NoneType' object has no attribute 'strip'
2018-04-03 07:11:48,974 [WARNING] emitter.py:80 - Retrying after 5 ...
04-03-2018
01:16 PM
Hi Jay, so the repo file was missing; I copied the file onto the host and the metrics monitor started. But it still doesn't show any data in the metrics GUI; the widgets show "No data available".
04-03-2018
12:03 PM
Hi Jay, thanks for responding. I tried the commands and they showed:

# yum remove ambari-metrics-monitor
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Remove Process
No Match for argument: ambari-metrics-monitor
Loading mirror speeds from cached hostfile
 * base: mirror.den1.denvercolo.net
 * extras: mirror.raystedman.net
 * updates: mirror.compevo.com
No Packages marked for removal

It seems the packages are unavailable.
04-03-2018
11:52 AM
The Ambari Metrics monitor failed to start on one of my two hosts. Here is the error:

stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py", line 58, in <module>
AmsMonitor().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py", line 28, in install
self.install_packages(env, exclude_packages = ['ambari-metrics-collector', 'ambari-metrics-grafana'])
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 410, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-monitor' returned 1. Error: Nothing to do
stdout:
2018-04-03 05:48:24,805 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.6.0-3796
2018-04-03 05:48:24,805 - Checking if need to create versioned conf dir /etc/hadoop/2.3.6.0-3796/0
2018-04-03 05:48:24,805 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.6.0-3796 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2018-04-03 05:48:24,837 - call returned (1, '/etc/hadoop/2.3.6.0-3796/0 exist already', '')
2018-04-03 05:48:24,837 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.6.0-3796 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2018-04-03 05:48:24,868 - checked_call returned (0, '')
2018-04-03 05:48:24,868 - Ensuring that hadoop has the correct symlink structure
2018-04-03 05:48:24,869 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-04-03 05:48:24,871 - Group['hadoop'] {}
2018-04-03 05:48:24,873 - Group['users'] {}
2018-04-03 05:48:24,873 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,875 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,876 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,877 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-04-03 05:48:24,878 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-04-03 05:48:24,879 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-04-03 05:48:24,880 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,881 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,882 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,883 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,884 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-04-03 05:48:24,885 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-03 05:48:24,888 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-04-03 05:48:24,893 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2018-04-03 05:48:24,894 - Group['hdfs'] {}
2018-04-03 05:48:24,894 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2018-04-03 05:48:24,895 - FS Type:
2018-04-03 05:48:24,895 - Directory['/etc/hadoop'] {'mode': 0755}
2018-04-03 05:48:24,919 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-04-03 05:48:24,920 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2018-04-03 05:48:24,937 - Repository['HDP-2.3'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.6.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2018-04-03 05:48:24,949 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.3]\nname=HDP-2.3\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.6.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-04-03 05:48:24,950 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2018-04-03 05:48:24,955 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-04-03 05:48:24,956 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-03 05:48:25,089 - Skipping installation of existing package unzip
2018-04-03 05:48:25,089 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-03 05:48:25,109 - Skipping installation of existing package curl
2018-04-03 05:48:25,110 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-03 05:48:25,130 - Skipping installation of existing package hdp-select
2018-04-03 05:48:25,355 - Package['ambari-metrics-monitor'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-03 05:48:25,488 - Installing package ambari-metrics-monitor ('/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-monitor')
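The install fails because yum reports "Error: Nothing to do", which usually means no enabled repository on that host provides the package (a reply further up in this thread notes the Ambari repo file was indeed missing). A quick way to confirm, assuming root shell access on the failing host:

    # Is an Ambari repo file present on this host?
    ls /etc/yum.repos.d/ | grep -i ambari

    # Do any enabled repos actually offer the package?
    yum repolist enabled
    yum list available ambari-metrics-monitor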
Labels: Apache Ambari