INFO 2016-10-26 08:08:15,262 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2016-10-26 08:08:15,394 logger.py:71 - call[['test', '-w', '/']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:08:15,404 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:08:15,405 logger.py:71 - call[['test', '-w', '/dev/shm']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:08:15,415 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:08:15,416 logger.py:71 - call[['test', '-w', '/boot']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:08:15,427 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:08:15,427 logger.py:71 - call[['test', '-w', '/u01']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:08:15,437 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:08:23,950 Controller.py:277 - Heartbeat with server is running...
ERROR 2016-10-26 08:08:38,614 script_alert.py:119 - [Alert][yarn_nodemanager_health] Failed with result CRITICAL: ['Connection failed to http://node09.example.com:8042/ws/v1/node/info (Traceback (most recent call last):\n File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/alerts/alert_nodemanager_health.py", line 171, in execute\n url_response = urllib2.urlopen(query, timeout=connection_timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 126, in urlopen\n return _opener.open(url, data, timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 391, in open\n response = self._open(req, data)\n File "/usr/lib64/python2.6/urllib2.py", line 409, in _open\n \'_open\', req)\n File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain\n result = func(*args)\n File "/usr/lib64/python2.6/urllib2.py", line 1190, in http_open\n return self.do_open(httplib.HTTPConnection, req)\n File "/usr/lib64/python2.6/urllib2.py", line 1165, in do_open\n raise URLError(err)\nURLError: \n)']
INFO 2016-10-26 08:08:38,692 logger.py:71 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/boot', '/u01', '/proc/sys/fs/binfmt_misc'].
INFO 2016-10-26 08:08:38,695 logger.py:71 - Mount point for directory /u01/hadoop/hdfs/data is /u01
INFO 2016-10-26 08:08:38,859 logger.py:71 - Execute['export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf/conf.server' ; hive --hiveconf hive.metastore.uris=thrift://node09.example.com:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e 'show databases;''] {'path': ['/bin/', '/usr/bin/', '/usr/sbin/', '/usr/hdp/current/hive-metastore/bin'], 'user': 'ambari-qa', 'timeout': 60}
INFO 2016-10-26 08:08:38,868 logger.py:71 - Execute['! beeline -u 'jdbc:hive2://node09.example.com:10000/;transportMode=binary' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL''] {'path': ['/bin/', '/usr/bin/', '/usr/lib/hive/bin/', '/usr/sbin/'], 'user': 'ambari-qa', 'timeout': 60}
ERROR 2016-10-26 08:08:42,676 script_alert.py:119 - [Alert][hive_server_process] Failed with result CRITICAL: ['Connection failed on host node09.example.com:10000 (Traceback (most recent call last):\n File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_thrift_port.py", line 200, in execute\n check_command_timeout=int(check_command_timeout))\n File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/hive_check.py", line 74, in check_thrift_port_sasl\n timeout=check_command_timeout)\n File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__\n self.env.run()\n File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run\n self.run_action(resource, action)\n File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action\n provider_action()\n File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run\n tries=self.resource.tries, try_sleep=self.resource.try_sleep)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner\n result = function(command, **kwargs)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call\n tries=tries, try_sleep=try_sleep)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call\n raise Fail(err_msg)\nFail: Execution of \'! beeline -u \'jdbc:hive2://node09.example.com:10000/;transportMode=binary\' -e \'\' 2>&1| awk \'{print}\'|grep -i -e \'Connection refused\' -e \'Invalid URL\'\' returned 1. Error: Could not open client transport with JDBC Uri: jdbc:hive2://node09.example.com:10000/;transportMode=binary: java.net.ConnectException: Connection refused (state=08S01,code=0)\nError: Could not open client transport with JDBC Uri: jdbc:hive2://node09.example.com:10000/;transportMode=binary: java.net.ConnectException: Connection refused (state=08S01,code=0)\n)']
INFO 2016-10-26 08:08:45,767 ClusterConfiguration.py:119 - Updating cached configurations for cluster N109HDP24
INFO 2016-10-26 08:08:45,776 ActionQueue.py:118 - Adding EXECUTION_COMMAND for role NODEMANAGER for service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:45,789 ActionQueue.py:254 - Executing command with id = 60-0, taskId = 476 for role = NODEMANAGER of cluster N109HDP24.
INFO 2016-10-26 08:08:45,789 ActionQueue.py:295 - Command execution metadata - taskId = 476, retry enabled = False, max retry duration (sec) = 0, log_output = True
WARNING 2016-10-26 08:08:45,802 CommandStatusDict.py:128 - [Errno 2] No such file or directory: '/var/lib/ambari-agent/data/output-476.txt'
INFO 2016-10-26 08:08:48,062 ActionQueue.py:104 - Adding STATUS_COMMAND for component NAMENODE of service HDFS of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:48,200 ActionQueue.py:104 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster N109HDP24 to the queue.
ERROR 2016-10-26 08:08:48,189 script_alert.py:119 - [Alert][hive_metastore_process] Failed with result CRITICAL: ['Metastore on node09.example.com failed (Traceback (most recent call last):\n File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_metastore.py", line 198, in execute\n timeout=int(check_command_timeout) )\n File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__\n self.env.run()\n File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run\n self.run_action(resource, action)\n File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action\n provider_action()\n File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run\n tries=self.resource.tries, try_sleep=self.resource.try_sleep)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner\n result = function(command, **kwargs)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call\n tries=tries, try_sleep=try_sleep)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call\n raise Fail(err_msg)\nFail: Execution of \'export HIVE_CONF_DIR=\'/usr/hdp/current/hive-metastore/conf/conf.server\' ; hive --hiveconf hive.metastore.uris=thrift://node09.example.com:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e \'show databases;\'\' returned 12. WARNING: Use "yarn jar" to launch YARN applications.\n\nLogging initialized using configuration in file:/etc/hive/2.4.3.0-227/0/conf.server/hive-log4j.properties\nhive.exec.post.hooks Class not found:org.apache.atlas.hive.hook.HiveHook\nFAILED: Hive Internal Error: java.lang.ClassNotFoundException(org.apache.atlas.hive.hook.HiveHook)\njava.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook\n\tat java.net.URLClassLoader.findClass(URLClassLoader.java:381)\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:424)\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n\tat java.lang.Class.forName0(Native Method)\n\tat java.lang.Class.forName(Class.java:348)\n\tat org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)\n\tat org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1389)\n\tat org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1373)\n\tat org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1602)\n\tat org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1275)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:1139)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:1129)\n\tat org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)\n\tat org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)\n\tat org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)\n\tat org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:314)\n\tat org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:711)\n\tat org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)\n\tat org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:221)\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:136)\n)']
INFO 2016-10-26 08:08:48,352 ActionQueue.py:104 - Adding STATUS_COMMAND for component RESOURCEMANAGER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:48,484 ActionQueue.py:104 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:48,615 ActionQueue.py:104 - Adding STATUS_COMMAND for component HISTORYSERVER of service MAPREDUCE2 of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:48,761 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:48,933 ActionQueue.py:337 - Quit retrying for command id 476. Status: COMPLETED, retryAble: False, retryDuration (sec): -1, last delay (sec): 1
INFO 2016-10-26 08:08:48,934 ActionQueue.py:342 - Command 476 completed successfully!
INFO 2016-10-26 08:08:48,943 ActionQueue.py:104 - Adding STATUS_COMMAND for component WEBHCAT_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:49,095 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_METASTORE of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:49,253 ActionQueue.py:104 - Adding STATUS_COMMAND for component HBASE_MASTER of service HBASE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:49,413 ActionQueue.py:104 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:49,565 ActionQueue.py:104 - Adding STATUS_COMMAND for component DRPC_SERVER of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:49,739 ActionQueue.py:104 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:08:53,076 ActionQueue.py:104 - Adding STATUS_COMMAND for component SPARK_CLIENT of service SPARK of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:16,412 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2016-10-26 08:09:16,550 logger.py:71 - call[['test', '-w', '/']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:09:16,561 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:09:16,562 logger.py:71 - call[['test', '-w', '/dev/shm']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:09:16,572 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:09:16,573 logger.py:71 - call[['test', '-w', '/boot']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:09:16,584 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:09:16,585 logger.py:71 - call[['test', '-w', '/u01']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:09:16,595 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:09:24,164 Controller.py:277 - Heartbeat with server is running...
ERROR 2016-10-26 08:09:38,619 script_alert.py:119 - [Alert][yarn_nodemanager_health] Failed with result CRITICAL: ['Connection failed to http://node09.example.com:8042/ws/v1/node/info (Traceback (most recent call last):\n File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/alerts/alert_nodemanager_health.py", line 171, in execute\n url_response = urllib2.urlopen(query, timeout=connection_timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 126, in urlopen\n return _opener.open(url, data, timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 391, in open\n response = self._open(req, data)\n File "/usr/lib64/python2.6/urllib2.py", line 409, in _open\n \'_open\', req)\n File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain\n result = func(*args)\n File "/usr/lib64/python2.6/urllib2.py", line 1190, in http_open\n return self.do_open(httplib.HTTPConnection, req)\n File "/usr/lib64/python2.6/urllib2.py", line 1165, in do_open\n raise URLError(err)\nURLError: \n)']
INFO 2016-10-26 08:09:48,158 ActionQueue.py:104 - Adding STATUS_COMMAND for component NAMENODE of service HDFS of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,280 ActionQueue.py:104 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,404 ActionQueue.py:104 - Adding STATUS_COMMAND for component RESOURCEMANAGER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,528 ActionQueue.py:104 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,653 ActionQueue.py:104 - Adding STATUS_COMMAND for component HISTORYSERVER of service MAPREDUCE2 of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,776 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:48,907 ActionQueue.py:104 - Adding STATUS_COMMAND for component WEBHCAT_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,033 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_METASTORE of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,164 ActionQueue.py:104 - Adding STATUS_COMMAND for component HBASE_MASTER of service HBASE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,305 ActionQueue.py:104 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,435 ActionQueue.py:104 - Adding STATUS_COMMAND for component DRPC_SERVER of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,562 ActionQueue.py:104 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:09:49,689 ActionQueue.py:104 - Adding STATUS_COMMAND for component STORM_UI_SERVER of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:17,402 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2016-10-26 08:10:17,535 logger.py:71 - call[['test', '-w', '/']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:10:17,544 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:10:17,545 logger.py:71 - call[['test', '-w', '/dev/shm']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:10:17,555 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:10:17,556 logger.py:71 - call[['test', '-w', '/boot']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:10:17,567 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:10:17,567 logger.py:71 - call[['test', '-w', '/u01']] {'sudo': True, 'timeout': 5}
INFO 2016-10-26 08:10:17,577 logger.py:71 - call returned (0, '')
INFO 2016-10-26 08:10:24,199 Controller.py:277 - Heartbeat with server is running...
ERROR 2016-10-26 08:10:38,629 script_alert.py:119 - [Alert][yarn_nodemanager_health] Failed with result CRITICAL: ['Connection failed to http://node09.example.com:8042/ws/v1/node/info (Traceback (most recent call last):\n File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/alerts/alert_nodemanager_health.py", line 171, in execute\n url_response = urllib2.urlopen(query, timeout=connection_timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 126, in urlopen\n return _opener.open(url, data, timeout)\n File "/usr/lib64/python2.6/urllib2.py", line 391, in open\n response = self._open(req, data)\n File "/usr/lib64/python2.6/urllib2.py", line 409, in _open\n \'_open\', req)\n File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain\n result = func(*args)\n File "/usr/lib64/python2.6/urllib2.py", line 1190, in http_open\n return self.do_open(httplib.HTTPConnection, req)\n File "/usr/lib64/python2.6/urllib2.py", line 1165, in do_open\n raise URLError(err)\nURLError: \n)']
INFO 2016-10-26 08:10:38,713 logger.py:71 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/boot', '/u01', '/proc/sys/fs/binfmt_misc'].
INFO 2016-10-26 08:10:38,720 logger.py:71 - Mount point for directory /u01/hadoop/hdfs/data is /u01
WARNING 2016-10-26 08:10:38,821 base_alert.py:134 - [Alert][mapreduce_history_server_rpc_latency] Unable to execute alert. [Alert][mapreduce_history_server_rpc_latency] Unable to extract JSON from JMX response
WARNING 2016-10-26 08:10:38,829 base_alert.py:134 - [Alert][mapreduce_history_server_cpu] Unable to execute alert. [Alert][mapreduce_history_server_cpu] Unable to extract JSON from JMX response
INFO 2016-10-26 08:10:48,178 ActionQueue.py:104 - Adding STATUS_COMMAND for component NAMENODE of service HDFS of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,302 ActionQueue.py:104 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,425 ActionQueue.py:104 - Adding STATUS_COMMAND for component RESOURCEMANAGER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,549 ActionQueue.py:104 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,676 ActionQueue.py:104 - Adding STATUS_COMMAND for component HISTORYSERVER of service MAPREDUCE2 of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,799 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:48,920 ActionQueue.py:104 - Adding STATUS_COMMAND for component WEBHCAT_SERVER of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,040 ActionQueue.py:104 - Adding STATUS_COMMAND for component HIVE_METASTORE of service HIVE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,161 ActionQueue.py:104 - Adding STATUS_COMMAND for component HBASE_MASTER of service HBASE of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,282 ActionQueue.py:104 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,403 ActionQueue.py:104 - Adding STATUS_COMMAND for component DRPC_SERVER of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,525 ActionQueue.py:104 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,649 ActionQueue.py:104 - Adding STATUS_COMMAND for component STORM_UI_SERVER of service STORM of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:49,803 ActionQueue.py:104 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster N109HDP24 to the queue.
INFO 2016-10-26 08:10:52,096 ActionQueue.py:104 - Adding STATUS_COMMAND for component HBASE_CLIENT of service HBASE of cluster N109HDP24 to the queue.