Support Questions


hcat client install failure with Ambari 2.6

New Contributor

I am using the public repository, but "No package found for hive2_${stack_version}(hive2_(\d|_)+$)" appeared in ambari-server.log and the hcat client failed to install.

stderr: Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hcat_client.py", line 79, in <module> HCatClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hcat_client.py", line 35, in install self.install_packages(env) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 803, in install_packages name = self.format_package_name(package['name']) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos)) resource_management.core.exceptions.Fail: Cannot match package for regexp name hive2_${stack_version}. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_3_0_235', 'accumulo_2_6_3_0_235-conf-standalone', 'accumulo_2_6_3_0_235-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235-falcon-plugin', 'atlas-metadata_2_6_3_0_235-sqoop-plugin', 'atlas-metadata_2_6_3_0_235-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_3_0_235', 'druid', 'druid_2_6_3_0_235', 'falcon', 'falcon-doc', 'falcon_2_6_3_0_235', 'falcon_2_6_3_0_235-doc', 'flume', 'flume-agent', 'flume_2_6_3_0_235', 'flume_2_6_3_0_235-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_3_0_235-conf-pseudo', 'hadoop_2_6_3_0_235-doc', 'hadoop_2_6_3_0_235-hdfs-datanode', 'hadoop_2_6_3_0_235-hdfs-fuse', 'hadoop_2_6_3_0_235-hdfs-journalnode', 'hadoop_2_6_3_0_235-hdfs-namenode', 'hadoop_2_6_3_0_235-hdfs-secondarynamenode', 'hadoop_2_6_3_0_235-hdfs-zkfc', 'hadoop_2_6_3_0_235-httpfs', 'hadoop_2_6_3_0_235-httpfs-server', 'hadoop_2_6_3_0_235-libhdfs', 'hadoop_2_6_3_0_235-mapreduce-historyserver', 'hadoop_2_6_3_0_235-source', 'hadoop_2_6_3_0_235-yarn-nodemanager', 'hadoop_2_6_3_0_235-yarn-proxyserver', 'hadoop_2_6_3_0_235-yarn-resourcemanager', 'hadoop_2_6_3_0_235-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_3_0_235', 'hadooplzo_2_6_3_0_235-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_3_0_235', 'hbase_2_6_3_0_235-doc', 'hbase_2_6_3_0_235-master', 'hbase_2_6_3_0_235-regionserver', 'hbase_2_6_3_0_235-rest', 'hbase_2_6_3_0_235-thrift', 'hbase_2_6_3_0_235-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_3_0_235', 'knox', 'knox_2_6_3_0_235', 'livy', 'livy2', 'livy2_2_6_3_0_235', 'livy_2_6_3_0_235', 'mahout', 'mahout-doc', 
'mahout_2_6_3_0_235', 'mahout_2_6_3_0_235-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_3_0_235', 'oozie_2_6_3_0_235-client', 'oozie_2_6_3_0_235-common', 'oozie_2_6_3_0_235-sharelib', 'oozie_2_6_3_0_235-sharelib-distcp', 'oozie_2_6_3_0_235-sharelib-hcatalog', 'oozie_2_6_3_0_235-sharelib-hive', 'oozie_2_6_3_0_235-sharelib-hive2', 'oozie_2_6_3_0_235-sharelib-mapreduce-streaming', 'oozie_2_6_3_0_235-sharelib-pig', 'oozie_2_6_3_0_235-sharelib-spark', 'oozie_2_6_3_0_235-sharelib-sqoop', 'oozie_2_6_3_0_235-webapp', 'phoenix', 'phoenix_2_6_3_0_235', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_3_0_235-admin', 'ranger_2_6_3_0_235-atlas-plugin', 'ranger_2_6_3_0_235-hbase-plugin', 'ranger_2_6_3_0_235-kafka-plugin', 'ranger_2_6_3_0_235-kms', 'ranger_2_6_3_0_235-knox-plugin', 'ranger_2_6_3_0_235-solr-plugin', 'ranger_2_6_3_0_235-storm-plugin', 'ranger_2_6_3_0_235-tagsync', 'ranger_2_6_3_0_235-usersync', 'shc', 'shc_2_6_3_0_235', 'slider', 'slider_2_6_3_0_235', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_3_0_235', 'spark2_2_6_3_0_235-master', 'spark2_2_6_3_0_235-python', 'spark2_2_6_3_0_235-worker', 'spark_2_6_3_0_235', 'spark_2_6_3_0_235-master', 'spark_2_6_3_0_235-python', 'spark_2_6_3_0_235-worker', 'spark_llap', 'spark_llap_2_6_3_0_235', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_3_0_235', 'sqoop_2_6_3_0_235-metastore', 'storm', 'storm-slider-client', 'storm_2_6_3_0_235', 'storm_2_6_3_0_235-slider-client', 'superset', 'superset_2_6_3_0_235', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_6_3_0_235', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_3_0_235-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'lzo-minilzo', 'mysql-community-release', 'mysql-connector-java', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'snappy-devel', 'tcl-rrdtool', 'hdp-select', 'hive2', 'hive2-jdbc', 'hive_2_6_3_0_235', 'hive_2_6_3_0_235-hcatalog', 'hive_2_6_3_0_235-hcatalog-server', 'hive_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235-metastore', 'hive_2_6_3_0_235-server', 'hive_2_6_3_0_235-server2', 'hive_2_6_3_0_235-webhcat', 'hive_2_6_3_0_235-webhcat-server', 'tez_2_6_3_0_235'] stdout: 2017-11-09 
10:05:49,139 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6 2017-11-09 10:05:49,144 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf 2017-11-09 10:05:49,145 - Group['hdfs'] {} 2017-11-09 10:05:49,146 - Group['hadoop'] {} 2017-11-09 10:05:49,146 - Group['users'] {} 2017-11-09 10:05:49,147 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,147 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,148 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,149 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2017-11-09 10:05:49,149 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2017-11-09 10:05:49,150 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None} 2017-11-09 10:05:49,151 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,151 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,152 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2017-11-09 10:05:49,152 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-09 10:05:49,154 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2017-11-09 10:05:49,158 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2017-11-09 10:05:49,158 - Group['hdfs'] {} 2017-11-09 10:05:49,159 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']} 2017-11-09 10:05:49,159 - FS Type: 2017-11-09 10:05:49,159 - Directory['/etc/hadoop'] {'mode': 0755} 2017-11-09 10:05:49,173 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-09 10:05:49,174 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2017-11-09 10:05:49,187 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None} 2017-11-09 10:05:49,193 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-09 10:05:49,194 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match 2017-11-09 10:05:49,194 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': 
['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None} 2017-11-09 10:05:49,197 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-09 10:05:49,197 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match 2017-11-09 10:05:49,198 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:49,375 - Skipping installation of existing package unzip 2017-11-09 10:05:49,375 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:49,469 - Skipping installation of existing package curl 2017-11-09 10:05:49,469 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:49,566 - Skipping installation of existing package hdp-select 2017-11-09 10:05:49,570 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed 2017-11-09 10:05:49,781 - MariaDB RedHat Support: false 2017-11-09 10:05:49,785 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf 2017-11-09 10:05:49,796 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20} 2017-11-09 10:05:49,818 - call returned (0, 'hive-server2 - 2.6.3.0-235') 2017-11-09 10:05:49,818 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6 2017-11-09 10:05:49,850 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1 2017-11-09 10:05:49,850 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1 2017-11-09 10:05:49,852 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1 2017-11-09 10:05:51,537 - Package['hive_2_6_3_0_235'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:51,707 - Skipping installation of existing package hive_2_6_3_0_235 2017-11-09 10:05:51,710 - Package['hive_2_6_3_0_235-hcatalog'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:51,804 - Skipping installation of existing package hive_2_6_3_0_235-hcatalog 2017-11-09 10:05:51,806 - Package['hive_2_6_3_0_235-webhcat'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-09 10:05:51,907 - Skipping installation of existing package hive_2_6_3_0_235-webhcat 2017-11-09 10:05:51,909 - No package found for hive2_${stack_version}(hive2_(\d|_)+$) 2017-11-09 10:05:51,911 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed Command failed after 1 tries

1 ACCEPTED SOLUTION

Contributor

Try the following on the node where you're installing hive2.

1) yum list installed | grep hive -- make sure the repo column shows @HDP-2.6-repo-1. If a package only says "installed" (with no repo), continue with the steps below.

2) yum-complete-transaction -- this is important to run.

3) yum remove hive2_ .... -- remove every component that shows "installed" but without the proper repo.

4) Go back to Ambari and retry the install.

This issue can happen with almost any component when a yum run is interrupted or killed, leaving packages in a quasi-installed state. A consolidated sketch of the steps is below.
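
(Sketch only -- the package name is the one from this thread; adjust it to whatever shows up as "installed" without the HDP repo on your node.)

# 1) check which repo each hive package came from
yum list installed | grep hive
# 2) finish any interrupted yum transaction (yum-complete-transaction is part of yum-utils)
yum-complete-transaction
# 3) remove the packages that show "installed" but no @HDP-2.6-repo-1, e.g.
yum remove hive2_2_6_3_0_235
# 4) go back to Ambari and click Retry on the install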


16 REPLIES

Super Guru

@shin matsuura,

Can you paste the content of /etc/yum.repos.d/ambari-hdp-1.repo?

Also, can you try running the commands below on the node where hcat is being installed:

yum clean all
yum install -y hive2_2_6_3_0_235
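
If the package is already installed, it can also help to confirm which repository it actually came from (yum info shows a "From repo" field for installed packages), for example:

# show the repo information for the installed hive2 package
yum info hive2_2_6_3_0_235 | grep -i repo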

Thanks,

Aditya

Explorer

Aditya,

I am having the same problem. The first attempt at installing failed with stage timeout errors, and now I get this same error each time I click "Retry" on the install. Any ideas?

On the first install attempt, I got the following error:

The 'hive-webhcat' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.3.0-235). This is the version that will be reported. Command aborted. Reason: 'Stage timeout'

Then everything else failed with "Stage timeout" as well.

Now every time I click "Retry" I get this hive2 / hcat error:

[root@namenode01 ambari-server]# 
[root@namenode01 ambari-server]# cat /etc/yum.repos.d/ambari-hdp-1.repo
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0

path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.21-repo-1]
name=HDP-UTILS-1.1.0.21-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7

path=/
enabled=1
gpgcheck=0[root@namenode01 ambari-server]#     yum clean all
Loaded plugins: fastestmirror, langpacks
Cleaning repos: HDP-2.6-repo-1 HDP-UTILS-1.1.0.21-repo-1 ambari-2.6.0.0 base centosplus centosplus-source cr extras extras-source fasttrack updates
Cleaning up everything
Cleaning up list of fastest mirrors
[root@namenode01 ambari-server]#     yum install -y hive2_2_6_3_0_235
Loaded plugins: fastestmirror, langpacks
HDP-2.6-repo-1                                                                                                                         | 2.9 kB  00:00:00     
HDP-UTILS-1.1.0.21-repo-1                                                                                                              | 2.9 kB  00:00:00     
ambari-2.6.0.0                                                                                                                         | 2.9 kB  00:00:00     
base                                                                                                                                   | 3.6 kB  00:00:00     
centosplus                                                                                                                             | 3.4 kB  00:00:00     
centosplus-source                                                                                                                      | 2.9 kB  00:00:00     
cr                                                                                                                                     | 3.3 kB  00:00:00     
extras                                                                                                                                 | 3.4 kB  00:00:00     
extras-source                                                                                                                          | 2.9 kB  00:00:00     
fasttrack                                                                                                                              | 3.3 kB  00:00:00     
updates                                                                                                                                | 3.4 kB  00:00:00     
(1/12): HDP-2.6-repo-1/primary_db                                                                                                      | 100 kB  00:00:00     
(2/12): ambari-2.6.0.0/primary_db                                                                                                      | 8.6 kB  00:00:00     
(3/12): HDP-UTILS-1.1.0.21-repo-1/primary_db                                                                                           |  38 kB  00:00:01     
(4/12): base/7/x86_64/group_gz                                                                                                         | 156 kB  00:00:01     
(5/12): centosplus-source/7/primary_db                                                                                                 | 6.8 kB  00:00:00     
(6/12): base/7/x86_64/primary_db                                                                                                       | 5.7 MB  00:00:02     
(7/12): extras-source/7/primary_db                                                                                                     |  41 kB  00:00:00     
(8/12): centosplus/7/x86_64/primary_db                                                                                                 | 1.8 MB  00:00:01     
(9/12): extras/7/x86_64/primary_db                                                                                                     | 130 kB  00:00:01     
(10/12): cr/7/x86_64/primary_db                                                                                                        | 1.2 kB  00:00:01     
(11/12): fasttrack/7/x86_64/primary_db                                                                                                 | 1.2 kB  00:00:00     
(12/12): updates/7/x86_64/primary_db                                                                                                   | 3.6 MB  00:00:01     
Determining fastest mirrors
 * base: repo1.dal.innoscale.net
 * centosplus: mirror.teklinks.com
 * extras: dallas.tx.mirror.xygenhosting.com
 * fasttrack: mirror.wdc1.us.leaseweb.net
 * updates: repos.dfw.quadranet.com
Package hive2_2_6_3_0_235-2.1.0.2.6.3.0-235.noarch already installed and latest version
Nothing to do
[root@namenode01 ambari-server]# rpm -qa | grep hcat
hive_2_6_3_0_235-hcatalog-1.2.1000.2.6.3.0-235.noarch
hive_2_6_3_0_235-webhcat-1.2.1000.2.6.3.0-235.noarch
[root@namenode01 ambari-server]# ^C
[root@namenode01 ambari-server]# 



Super Guru

@thomas cook,

Looks like the RPMs got installed. Can you try installing the client manually? Run the command below:

curl -k -u {username}:{password} -H "X-Requested-By:ambari" -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/hosts/{hostname}/host_components/HC...

Replace the placeholders (username, password, etc.) with your actual values.

You can check the progress of the installation in Ambari UI.
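
If you prefer the command line, the same REST API also exposes request progress; something along these lines should work (same placeholders as above, and the exact fields may vary slightly between Ambari versions):

# list the requests for the cluster and their current status
curl -k -u {username}:{password} -H "X-Requested-By:ambari" -X GET "http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/requests?fields=Requests/request_status"

Take the id of the most recent request and GET .../requests/{request-id} to watch its status.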

Thanks,

Aditya

Explorer

So, I couldn't get that to work because it complained that the cluster didn't exist. I did an ambari-server reset and tried again. Now this is the first error in the install:

stderr: 
2017-12-05 06:20:37,671 - The 'hive-webhcat' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.3.0-235). This is the version that will be reported.
2017-12-05 06:20:48,974 - The 'hive-webhcat' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.3.0-235). This is the version that will be reported.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hcat_client.py", line 79, in <module>
    HCatClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hcat_client.py", line 35, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 803, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hive2_${stack_version}. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_3_0_235-conf-standalone', 'accumulo_2_6_3_0_235-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_3_0_235-sqoop-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_3_0_235', 'druid', 'falcon', 'falcon-doc', 'falcon_2_6_3_0_235-doc', 'flume', 'flume-agent', 'flume_2_6_3_0_235-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_3_0_235-conf-pseudo', 'hadoop_2_6_3_0_235-doc', 'hadoop_2_6_3_0_235-hdfs-datanode', 'hadoop_2_6_3_0_235-hdfs-fuse', 'hadoop_2_6_3_0_235-hdfs-journalnode', 'hadoop_2_6_3_0_235-hdfs-namenode', 'hadoop_2_6_3_0_235-hdfs-secondarynamenode', 'hadoop_2_6_3_0_235-hdfs-zkfc', 'hadoop_2_6_3_0_235-httpfs', 'hadoop_2_6_3_0_235-httpfs-server', 'hadoop_2_6_3_0_235-mapreduce-historyserver', 'hadoop_2_6_3_0_235-source', 'hadoop_2_6_3_0_235-yarn-nodemanager', 'hadoop_2_6_3_0_235-yarn-proxyserver', 'hadoop_2_6_3_0_235-yarn-resourcemanager', 'hadoop_2_6_3_0_235-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_3_0_235', 'hadooplzo_2_6_3_0_235-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_3_0_235-doc', 'hbase_2_6_3_0_235-master', 'hbase_2_6_3_0_235-regionserver', 'hbase_2_6_3_0_235-rest', 'hbase_2_6_3_0_235-thrift', 'hbase_2_6_3_0_235-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_6_3_0_235-hcatalog-server', 'hive_2_6_3_0_235-metastore', 'hive_2_6_3_0_235-server', 'hive_2_6_3_0_235-server2', 'hive_2_6_3_0_235-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'knox', 'knox_2_6_3_0_235', 'livy', 'livy2', 'livy2_2_6_3_0_235', 'livy_2_6_3_0_235', 'mahout', 'mahout-doc', 'mahout_2_6_3_0_235', 'mahout_2_6_3_0_235-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_3_0_235', 'oozie_2_6_3_0_235-client', 'oozie_2_6_3_0_235-common', 'oozie_2_6_3_0_235-sharelib', 'oozie_2_6_3_0_235-sharelib-distcp', 'oozie_2_6_3_0_235-sharelib-hcatalog', 'oozie_2_6_3_0_235-sharelib-hive', 'oozie_2_6_3_0_235-sharelib-hive2', 'oozie_2_6_3_0_235-sharelib-mapreduce-streaming', 'oozie_2_6_3_0_235-sharelib-pig', 'oozie_2_6_3_0_235-sharelib-spark', 'oozie_2_6_3_0_235-sharelib-sqoop', 'oozie_2_6_3_0_235-webapp', 'phoenix', 'phoenix_2_6_3_0_235', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 
'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_3_0_235-admin', 'ranger_2_6_3_0_235-kms', 'ranger_2_6_3_0_235-knox-plugin', 'ranger_2_6_3_0_235-solr-plugin', 'ranger_2_6_3_0_235-tagsync', 'ranger_2_6_3_0_235-usersync', 'shc', 'shc_2_6_3_0_235', 'slider', 'slider_2_6_3_0_235', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_3_0_235', 'spark2_2_6_3_0_235-master', 'spark2_2_6_3_0_235-python', 'spark2_2_6_3_0_235-worker', 'spark_2_6_3_0_235', 'spark_2_6_3_0_235-master', 'spark_2_6_3_0_235-python', 'spark_2_6_3_0_235-worker', 'spark_llap', 'spark_llap_2_6_3_0_235', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_3_0_235', 'sqoop_2_6_3_0_235-metastore', 'storm', 'storm-slider-client', 'storm_2_6_3_0_235-slider-client', 'superset', 'superset_2_6_3_0_235', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_6_3_0_235', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_3_0_235-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'lzo-minilzo', 'mysql-community-release', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'accumulo_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235-falcon-plugin', 'atlas-metadata_2_6_3_0_235-hive-plugin', 'atlas-metadata_2_6_3_0_235-storm-plugin', 'bigtop-jsvc', 'druid_2_6_3_0_235', 'falcon_2_6_3_0_235', 'flume_2_6_3_0_235', 'hadoop_2_6_3_0_235', 'hadoop_2_6_3_0_235-client', 'hadoop_2_6_3_0_235-hdfs', 'hadoop_2_6_3_0_235-libhdfs', 'hadoop_2_6_3_0_235-mapreduce', 'hadoop_2_6_3_0_235-yarn', 'hbase_2_6_3_0_235', 'hdp-select', 'hive_2_6_3_0_235', 'hive_2_6_3_0_235-hcatalog', 'hive_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235-webhcat', 'kafka_2_6_3_0_235', 'pig_2_6_3_0_235', 'ranger_2_6_3_0_235-atlas-plugin', 'ranger_2_6_3_0_235-hbase-plugin', 'ranger_2_6_3_0_235-hdfs-plugin', 'ranger_2_6_3_0_235-hive-plugin', 'ranger_2_6_3_0_235-kafka-plugin', 'ranger_2_6_3_0_235-storm-plugin', 'ranger_2_6_3_0_235-yarn-plugin', 'spark2_2_6_3_0_235-yarn-shuffle', 'spark_2_6_3_0_235-yarn-shuffle', 'storm_2_6_3_0_235', 'tez_2_6_3_0_235', 'zookeeper_2_6_3_0_235', 'snappy-devel']
 stdout:
2017-12-05 06:20:35,323 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-05 06:20:35,372 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-05 06:20:35,387 - Group['livy'] {}
2017-12-05 06:20:35,393 - Group['spark'] {}
2017-12-05 06:20:35,396 - Group['hdfs'] {}
2017-12-05 06:20:35,397 - Group['zeppelin'] {}
2017-12-05 06:20:35,397 - Group['hadoop'] {}
2017-12-05 06:20:35,400 - Group['users'] {}
2017-12-05 06:20:35,401 - Group['knox'] {}
2017-12-05 06:20:35,409 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,417 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,430 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,445 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-05 06:20:35,475 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,505 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-05 06:20:35,533 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2017-12-05 06:20:35,557 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,577 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,591 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,611 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-05 06:20:35,625 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,645 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,659 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-12-05 06:20:35,675 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,679 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,692 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,707 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,716 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-05 06:20:35,719 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-05 06:20:35,740 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-05 06:20:35,784 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-05 06:20:35,785 - Group['hdfs'] {}
2017-12-05 06:20:35,786 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2017-12-05 06:20:35,788 - FS Type: 
2017-12-05 06:20:35,789 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-05 06:20:35,790 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-05 06:20:35,892 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-05 06:20:35,936 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-05 06:20:35,938 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-05 06:20:35,950 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-05 06:20:35,968 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-05 06:20:35,969 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-05 06:20:35,976 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:36,647 - Skipping installation of existing package unzip
2017-12-05 06:20:36,652 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:36,787 - Skipping installation of existing package curl
2017-12-05 06:20:36,788 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:36,915 - Skipping installation of existing package hdp-select
2017-12-05 06:20:37,492 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2017-12-05 06:20:37,671 - call returned (0, '2.6.3.0-235')
2017-12-05 06:20:37,671 - The 'hive-webhcat' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.3.0-235). This is the version that will be reported.
2017-12-05 06:20:39,363 - MariaDB RedHat Support: false
2017-12-05 06:20:39,403 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-05 06:20:39,501 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2017-12-05 06:20:39,584 - call returned (0, 'hive-server2 - None')
2017-12-05 06:20:39,586 - Failed to get extracted version with /usr/bin/hdp-select
2017-12-05 06:20:39,587 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-05 06:20:39,789 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-05 06:20:39,790 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-05 06:20:39,800 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-05 06:20:48,097 - Package['hive_2_6_3_0_235'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:48,591 - Skipping installation of existing package hive_2_6_3_0_235
2017-12-05 06:20:48,597 - Package['hive_2_6_3_0_235-hcatalog'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:48,651 - Skipping installation of existing package hive_2_6_3_0_235-hcatalog
2017-12-05 06:20:48,657 - Package['hive_2_6_3_0_235-webhcat'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-05 06:20:48,711 - Skipping installation of existing package hive_2_6_3_0_235-webhcat
2017-12-05 06:20:48,716 - No package found for hive2_${stack_version}(hive2_(\d|_)+$)
2017-12-05 06:20:48,904 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2017-12-05 06:20:48,974 - call returned (0, '2.6.3.0-235')
2017-12-05 06:20:48,974 - The 'hive-webhcat' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.3.0-235). This is the version that will be reported.

Command failed after 1 tries




What should I try next?

Explorer

@Aditya Sirna

Any ideas on this one?

Explorer

Here are the ambari-agent.log and ambari-server.log outputs:

INFO 2017-12-11 21:31:04,726 main.py:145 - loglevel=logging.INFO
INFO 2017-12-11 21:31:04,727 main.py:145 - loglevel=logging.INFO
INFO 2017-12-11 21:31:04,727 main.py:145 - loglevel=logging.INFO
INFO 2017-12-11 21:31:04,733 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-12-11 21:31:04,741 DataCleaner.py:120 - Data cleanup started
INFO 2017-12-11 21:31:04,782 DataCleaner.py:122 - Data cleanup finished
INFO 2017-12-11 21:31:04,852 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-12-11 21:31:04,856 main.py:437 - Connecting to Ambari server at https://namenode01.localdomain:8440 (192.168.0.101)
INFO 2017-12-11 21:31:04,856 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:04,859 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:04,859 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:31:14,860 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:14,861 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:14,862 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:31:24,863 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:24,864 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:24,865 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:31:34,865 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:34,871 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:34,872 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:31:44,873 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:44,878 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:44,880 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:31:54,881 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:31:54,883 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:31:54,883 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:04,884 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:04,890 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:04,891 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:14,893 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:14,898 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:14,900 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:24,902 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:24,907 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:24,909 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:34,910 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:34,916 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:34,918 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:44,920 main.py:437 - Connecting to Ambari server at https://namenode01.localdomain:8440 (192.168.0.101)
INFO 2017-12-11 21:32:44,921 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:44,927 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:44,928 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:32:54,930 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:32:54,935 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:32:54,937 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:04,939 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:04,944 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:04,946 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:14,948 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:14,953 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:14,955 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:24,956 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:24,962 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:24,963 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:34,965 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:34,971 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:34,972 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:44,974 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:44,980 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:44,981 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:33:54,983 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:33:54,988 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:33:54,990 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:34:04,991 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
WARNING 2017-12-11 21:34:04,997 NetUtil.py:101 - Failed to connect to https://namenode01.localdomain:8440/ca due to [Errno 111] Connection refused  
WARNING 2017-12-11 21:34:04,998 NetUtil.py:124 - Server at https://namenode01.localdomain:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-12-11 21:34:15,000 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/ca
INFO 2017-12-11 21:34:15,384 main.py:447 - Connected to Ambari server namenode01.localdomain
INFO 2017-12-11 21:34:15,389 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'datanode03.localdomain' using socket.getfqdn().
WARNING 2017-12-11 21:34:15,959 ClusterConfiguration.py:71 - Unable to load configurations from /var/lib/ambari-agent/cache/cluster_configuration/configurations.json. This file will be regenerated on registration
INFO 2017-12-11 21:34:15,961 threadpool.py:58 - Started thread pool with 3 core threads and 20 maximum threads
INFO 2017-12-11 21:34:15,964 AlertSchedulerHandler.py:291 - [AlertScheduler] Caching cluster grid01 with alert hash 47d2a465d47cc4c37bcbd73b12900daf
INFO 2017-12-11 21:34:15,965 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2017-12-11 21:34:15,965 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling ambari_agent_disk_usage with UUID 53d8793a-8a68-4ac3-b0fd-4ebf5525e1ba
INFO 2017-12-11 21:34:15,966 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2017-12-11 21:34:15,966 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling ambari_agent_version_select with UUID 963d1332-b1c7-4414-be71-e9af7874dd8d
INFO 2017-12-11 21:34:15,966 AlertSchedulerHandler.py:175 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x1f2ce10>; currently running: False
INFO 2017-12-11 21:34:17,990 hostname.py:106 - Read public hostname 'datanode03.localdomain' using socket.getfqdn()
INFO 2017-12-11 21:34:17,994 Hardware.py:48 - Initializing host system information.
INFO 2017-12-11 21:34:18,029 Hardware.py:176 - Some mount points were ignored: /dev/shm, /run, /sys/fs/cgroup, /run/user/42
INFO 2017-12-11 21:34:18,110 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'datanode03.localdomain' using socket.getfqdn().
INFO 2017-12-11 21:34:18,128 Facter.py:202 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources.
INFO 2017-12-11 21:34:18,156 Hardware.py:54 - Host system information: {'kernel': 'Linux', 'domain': 'localdomain', 'physicalprocessorcount': 2, 'kernelrelease': '3.10.0-693.5.2.el7.x86_64', 'uptime_days': '0', 'memorytotal': 7927704, 'swapfree': '7.75 GB', 'memorysize': 7927704, 'osfamily': 'redhat', 'swapsize': '7.75 GB', 'processorcount': 2, 'netmask': '255.255.255.0', 'timezone': 'CST', 'hardwareisa': 'x86_64', 'memoryfree': 7310300, 'operatingsystem': 'centos', 'kernelmajversion': '3.10', 'kernelversion': '3.10.0', 'macaddress': 'F4:4D:30:69:03:B1', 'operatingsystemrelease': '7.4.1708', 'ipaddress': '192.168.0.103', 'hostname': 'datanode03', 'uptime_hours': '0', 'fqdn': 'datanode03.localdomain', 'id': 'root', 'architecture': 'x86_64', 'selinux': False, 'mounts': [{'available': '45948916', 'used': '6454284', 'percent': '13%', 'device': '/dev/mapper/cl-root', 'mountpoint': '/', 'type': 'xfs', 'size': '52403200'}, {'available': '3947644', 'used': '0', 'percent': '0%', 'device': 'devtmpfs', 'mountpoint': '/dev', 'type': 'devtmpfs', 'size': '3947644'}, {'available': '802304', 'used': '236032', 'percent': '23%', 'device': '/dev/sda1', 'mountpoint': '/boot', 'type': 'xfs', 'size': '1038336'}, {'available': '55488712', 'used': '91432', 'percent': '1%', 'device': '/dev/mapper/cl-home', 'mountpoint': '/home', 'type': 'xfs', 'size': '55580144'}], 'hardwaremodel': 'x86_64', 'uptime_seconds': '215', 'interfaces': 'enp3s0,lo,virbr0,wlp2s0'}
INFO 2017-12-11 21:34:18,420 Controller.py:170 - Registering with datanode03.localdomain (192.168.0.103) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "localdomain", "physicalprocessorcount": 2, "kernelrelease": "3.10.0-693.5.2.el7.x86_64", "uptime_days": "0", "memorytotal": 7927704, "swapfree": "7.75 GB", "memorysize": 7927704, "osfamily": "redhat", "swapsize": "7.75 GB", "processorcount": 2, "netmask": "255.255.255.0", "timezone": "CST", "hardwareisa": "x86_64", "memoryfree": 7310300, "operatingsystem": "centos", "kernelmajversion": "3.10", "kernelversion": "3.10.0", "macaddress": "F4:4D:30:69:03:B1", "operatingsystemrelease": "7.4.1708", "ipaddress": "192.168.0.103", "hostname": "datanode03", "uptime_hours": "0", "fqdn": "datanode03.localdomain", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "45948916", "used": "6454284", "percent": "13%", "device": "/dev/mapper/cl-root", "mountpoint": "/", "type": "xfs", "size": "52403200"}, {"available": "3947644", "used": "0", "percent": "0%", "device": "devtmpfs", "mountpoint": "/dev", "type": "devtmpfs", "size": "3947644"}, {"available": "802304", "used": "236032", "percent": "23%", "device": "/dev/sda1", "mountpoint": "/boot", "type": "xfs", "size": "1038336"}, {"available": "55488712", "used": "91432", "percent": "1%", "device": "/dev/mapper/cl-home", "mountpoint": "/home", "type": "xfs", "size": "55580144"}], "hardwaremodel": "x86_64", "uptime_seconds": "215", "interfaces": "enp3s0,lo,virbr0,wlp2s0"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.6.0.0", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1513049658398, "activeJavaProcs": [], "liveServices": [{"status": "Healthy", "name": "ntpd or chronyd", "desc": ""}]}, "reverseLookup": true, "alternatives": [], "hasUnlimitedJcePolicy": null, "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "accumulo", "homeDir": "/home/accumulo"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "mahout", "homeDir": "/home/mahout"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}], "firewallRunning": false}, "timestamp": 1513049658161, 
"hostname": "datanode03.localdomain", "responseId": -1, "publicHostname": "datanode03.localdomain"}')
INFO 2017-12-11 21:34:18,424 NetUtil.py:70 - Connecting to https://namenode01.localdomain:8440/connection_info
INFO 2017-12-11 21:34:18,738 security.py:93 - SSL Connect being called.. connecting to the server
INFO 2017-12-11 21:34:19,090 security.py:60 - SSL connection established. Two-way SSL authentication is turned off on the server.
INFO 2017-12-11 21:34:19,183 Controller.py:196 - Registration Successful (response id = 0)
INFO 2017-12-11 21:34:19,186 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false)
INFO 2017-12-11 21:34:19,187 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true)
INFO 2017-12-11 21:34:19,189 AmbariConfig.py:316 - Updating config property (java.home) with value (/usr/jdk64/jdk1.8.0_112)
INFO 2017-12-11 21:34:19,190 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0)
WARNING 2017-12-11 21:34:19,191 AlertSchedulerHandler.py:123 - There are no alert definition commands in the heartbeat; unable to update definitions
INFO 2017-12-11 21:34:19,192 Controller.py:516 - Registration response from namenode01.localdomain was OK
INFO 2017-12-11 21:34:19,193 Controller.py:521 - Resetting ActionQueue...
INFO 2017-12-11 21:34:29,202 Controller.py:304 - Heartbeat (response id = 0) with server is running...
INFO 2017-12-11 21:34:29,204 Controller.py:311 - Building heartbeat message
INFO 2017-12-11 21:34:29,216 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2017-12-11 21:34:29,475 logger.py:75 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
INFO 2017-12-11 21:34:30,595 Hardware.py:176 - Some mount points were ignored: /, /dev, /dev/shm, /run, /sys/fs/cgroup, /boot, /home, /run/user/42
INFO 2017-12-11 21:34:30,600 Controller.py:320 - Sending Heartbeat (id = 0)
INFO 2017-12-11 21:34:30,612 Controller.py:333 - Heartbeat response received (id = 1)
INFO 2017-12-11 21:34:30,613 Controller.py:342 - Heartbeat interval is 10 seconds
INFO 2017-12-11 21:34:30,614 Controller.py:380 - Updating configurations from heartbeat
INFO 2017-12-11 21:34:30,614 Controller.py:389 - Adding cancel/execution commands
INFO 2017-12-11 21:34:30,614 Controller.py:406 - Adding recovery commands
INFO 2017-12-11 21:34:30,615 Controller.py:475 - Waiting 9.9 for next heartbeat
INFO 2017-12-11 21:34:40,515 Controller.py:482 - Wait for next heartbeat over
[... the same INFO-level heartbeat cycle (building the heartbeat message, adding host info, testing the JVM's JCE policy, ignoring the same mount points, sending the heartbeat, receiving the response, 10-second interval) repeats for the rest of the capture; the pasted log shows one block roughly every minute with the response id stepping 6, 12, 18, ..., 246, from 21:35:30 through 22:16:08, with no warnings or errors ...]
[root@datanode03 ~]# cat /var/log/ambari-agent/ambari-agent.out^C
[root@datanode03 ~]# exit
logout
Connection to datanode03 closed.
[root@namenode01 ~]# pwd
/root
[root@namenode01 ~]# cd /var/log/ambari-server/
[root@namenode01 ambari-server]# ls -l
total 1980
-rw-r--r-- 1 root root  10383 Dec  5 20:51 ambari-alerts.log
-rw-r--r-- 1 root root 350690 Dec 11 22:58 ambari-audit.log
-rw-r--r-- 1 root root 221172 Dec  5 06:16 ambari-config-changes.log
-rw-r--r-- 1 root root  11374 Dec 11 21:32 ambari-eclipselink.log
-rw-r--r-- 1 root root  25807 Dec 11 21:32 ambari-server-check-database.log
-rw-r--r-- 1 root root  85532 Dec 11 21:32 ambari-server-command.log
-rw-r--r-- 1 root root 854112 Dec 12 10:05 ambari-server.log
-rw-r--r-- 1 root root    156 Dec 11 21:32 ambari-server.out
drwxr-xr-x 2 root root     30 Dec  3 01:06 capshed-view
drwxr-xr-x 2 root root     28 Dec  3 01:06 files-view
drwxr-xr-x 2 root root     29 Dec  3 01:07 hive20-view
drwxr-xr-x 2 root root     27 Dec  3 01:07 hive-next-view
drwxr-xr-x 2 root root     38 Dec  3 01:07 huetoambarimigration-view
drwxr-xr-x 2 root root     26 Dec  3 01:07 pig-view
drwxr-xr-x 2 root root     29 Dec  3 01:07 slider-view
drwxr-xr-x 2 root root     28 Dec  3 01:07 storm-view
drwxr-xr-x 2 root root     26 Dec  3 01:07 tez-view
drwxr-xr-x 2 root root     32 Dec  3 01:07 wfmanager-view
[root@namenode01 ambari-server]# tail -n 1000 ambari-server.log
12 Dec 2017 06:08:25,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:09:13,364  INFO [pool-18-thread-1] MetricsServiceImpl:65 - Attempting to initialize metrics sink
12 Dec 2017 06:09:13,365  INFO [pool-18-thread-1] MetricsServiceImpl:81 - ********* Configuring Metric Sink **********
12 Dec 2017 06:09:13,366  INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - No clusters configured.
12 Dec 2017 06:09:19,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_version_select for an invalid cluster named grid01
12 Dec 2017 06:09:19,979 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:09:20,978 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_version_select for an invalid cluster named grid01
12 Dec 2017 06:09:20,979 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:09:25,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_version_select for an invalid cluster named grid01
12 Dec 2017 06:09:25,979 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:10:19,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:10:21,978 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:10:26,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:11:20,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:11:22,978 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:11:27,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:12:21,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:12:22,978 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:12:27,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:13:21,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:13:23,978 ERROR [alert-event-bus-1] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:13:28,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named grid01
12 Dec 2017 06:14:13,367  INFO [pool-18-thread-1] MetricsServiceImpl:65 - Attempting to initialize metrics sink
12 Dec 2017 06:14:13,367  INFO [pool-18-thread-1] MetricsServiceImpl:81 - ********* Configuring Metric Sink **********
12 Dec 2017 06:14:13,368  INFO [pool-18-thread-1] AmbariMetricSinkImpl:95 - No clusters configured.
12 Dec 2017 06:14:14,978 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process 

The install has failed several times, and I'm not sure what to try next.

thanks,

Thomas

New Contributor

@shin matsuura, try manually running a yum remove for all hive* packages on the failing host, then retry the install step through Ambari. Regards, Chandra Sharma.
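A minimal sketch of that suggestion, assuming the hive* glob covers everything the failed install left behind on the host (check the output of the first command before removing anything):

yum list installed | grep -i hive    # see which hive packages are present and which repo they came from
yum remove 'hive*'                   # remove them all, including any half-installed versioned packages
# then retry the failed HCat Client install step from the Ambari UI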

Contributor

Try running the steps below on the node where you're installing hive2.

1) yum list installed | grep hive -- make sure the repo is listed as @HDP-2.6-repo-1. If any package's repo column shows only "installed" (no repo), do the steps below.

2) yum-complete-transaction -- this is the important step; it finishes any yum transaction that was interrupted.

3) yum remove hive2_ ... -- remove all components that show "installed" but without a proper repo.

4) Go back to Ambari and run the install again.

This issue can happen for almost any component when a yum run is broken or killed, leaving packages in a quasi-installed state.
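Consolidated into one pass, as a sketch (yum-complete-transaction is provided by the yum-utils package, and the hive2_* glob below only stands in for whichever versioned package names step 1 reports as "installed" without a proper repo):

yum list installed | grep hive       # each package should show a repo such as @HDP-2.6-repo-1
yum-complete-transaction             # finish any yum transaction that was interrupted or killed
yum remove 'hive2_*'                 # remove the quasi-installed versioned packages found above
# then go back to Ambari and re-run the install step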

Contributor

@Aditya Sirna I have a similar issue here, but with zookeeper. I've tried yum clean all and then tried to install zookeeper on the node again using yum, but it didn't work for me. Can someone please help? I can't install zookeeper-client on one of the nodes.

stderr:   /var/lib/ambari-agent/data/errors-631.txt Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py", line 79, in <module> ZookeeperClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py", line 59, in install self.install_packages(env) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 811, in install_packages name = self.format_package_name(package['name']) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 546, in format_package_name raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos)) resource_management.core.exceptions.Fail: Cannot match package for regexp name zookeeper_${stack_version}-server. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo-test', 'accumulo_2_5_3_0_37', 'accumulo_2_5_3_0_37-conf-standalone', 'accumulo_2_5_3_0_37-source', 'accumulo_2_5_3_0_37-test', 'atlas-metadata', 'atlas-metadata-hive-plugin', 'atlas-metadata_2_5_3_0_37', 'bigtop-tomcat', 'datafu', 'falcon', 'falcon-doc', 'falcon_2_5_3_0_37', 'falcon_2_5_3_0_37-doc', 'flume', 'flume-agent', 'flume_2_5_3_0_37', 'flume_2_5_3_0_37-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_5_3_0_37-conf-pseudo', 'hadoop_2_5_3_0_37-doc', 'hadoop_2_5_3_0_37-hdfs-datanode', 'hadoop_2_5_3_0_37-hdfs-fuse', 'hadoop_2_5_3_0_37-hdfs-journalnode', 'hadoop_2_5_3_0_37-hdfs-namenode', 'hadoop_2_5_3_0_37-hdfs-secondarynamenode', 'hadoop_2_5_3_0_37-hdfs-zkfc', 'hadoop_2_5_3_0_37-httpfs', 'hadoop_2_5_3_0_37-httpfs-server', 'hadoop_2_5_3_0_37-mapreduce-historyserver', 'hadoop_2_5_3_0_37-source', 'hadoop_2_5_3_0_37-yarn-nodemanager', 'hadoop_2_5_3_0_37-yarn-proxyserver', 'hadoop_2_5_3_0_37-yarn-resourcemanager', 'hadoop_2_5_3_0_37-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_5_3_0_37', 'hadooplzo_2_5_3_0_37-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_5_3_0_37', 'hbase_2_5_3_0_37-doc', 'hbase_2_5_3_0_37-master', 'hbase_2_5_3_0_37-regionserver', 'hbase_2_5_3_0_37-rest', 'hbase_2_5_3_0_37-thrift', 'hbase_2_5_3_0_37-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_5_3_0_37-hcatalog-server', 'hive_2_5_3_0_37-metastore', 'hive_2_5_3_0_37-server', 'hive_2_5_3_0_37-server2', 'hive_2_5_3_0_37-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_5_3_0_37', 'knox', 'knox_2_5_3_0_37', 'livy', 'livy_2_5_3_0_37', 'mahout', 'mahout-doc', 'mahout_2_5_3_0_37', 'mahout_2_5_3_0_37-doc', 'oozie', 'oozie-client', 'oozie_2_5_3_0_37', 
'oozie_2_5_3_0_37-client', 'phoenix', 'phoenix_2_5_3_0_37', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_5_3_0_37-admin', 'ranger_2_5_3_0_37-atlas-plugin', 'ranger_2_5_3_0_37-hbase-plugin', 'ranger_2_5_3_0_37-kafka-plugin', 'ranger_2_5_3_0_37-kms', 'ranger_2_5_3_0_37-knox-plugin', 'ranger_2_5_3_0_37-solr-plugin', 'ranger_2_5_3_0_37-storm-plugin', 'ranger_2_5_3_0_37-tagsync', 'ranger_2_5_3_0_37-usersync', 'slider', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_5_3_0_37', 'spark2_2_5_3_0_37-master', 'spark2_2_5_3_0_37-python', 'spark2_2_5_3_0_37-worker', 'spark_2_5_3_0_37', 'spark_2_5_3_0_37-master', 'spark_2_5_3_0_37-python', 'spark_2_5_3_0_37-worker', 'sqoop', 'sqoop-metastore', 'sqoop_2_5_3_0_37', 'sqoop_2_5_3_0_37-metastore', 'storm', 'storm-slider-client', 'storm_2_5_3_0_37', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_5_3_0_37', 'zookeeper', 'zookeeper-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'mysql-community-release', 'mysql-connector-java', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo-test', 'accumulo_2_5_3_0_37', 'accumulo_2_5_3_0_37-conf-standalone', 'accumulo_2_5_3_0_37-source', 'accumulo_2_5_3_0_37-test', 'atlas-metadata', 'atlas-metadata-hive-plugin', 'atlas-metadata_2_5_3_0_37', 'bigtop-tomcat', 'datafu', 'falcon', 'falcon-doc', 'falcon_2_5_3_0_37', 'falcon_2_5_3_0_37-doc', 'flume', 'flume-agent', 'flume_2_5_3_0_37', 'flume_2_5_3_0_37-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_5_3_0_37-conf-pseudo', 'hadoop_2_5_3_0_37-doc', 'hadoop_2_5_3_0_37-hdfs-datanode', 'hadoop_2_5_3_0_37-hdfs-fuse', 'hadoop_2_5_3_0_37-hdfs-journalnode', 'hadoop_2_5_3_0_37-hdfs-namenode', 'hadoop_2_5_3_0_37-hdfs-secondarynamenode', 'hadoop_2_5_3_0_37-hdfs-zkfc', 'hadoop_2_5_3_0_37-httpfs', 'hadoop_2_5_3_0_37-httpfs-server', 'hadoop_2_5_3_0_37-mapreduce-historyserver', 
'hadoop_2_5_3_0_37-source', 'hadoop_2_5_3_0_37-yarn-nodemanager', 'hadoop_2_5_3_0_37-yarn-proxyserver', 'hadoop_2_5_3_0_37-yarn-resourcemanager', 'hadoop_2_5_3_0_37-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_5_3_0_37', 'hadooplzo_2_5_3_0_37-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_5_3_0_37', 'hbase_2_5_3_0_37-doc', 'hbase_2_5_3_0_37-master', 'hbase_2_5_3_0_37-regionserver', 'hbase_2_5_3_0_37-rest', 'hbase_2_5_3_0_37-thrift', 'hbase_2_5_3_0_37-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_5_3_0_37-hcatalog-server', 'hive_2_5_3_0_37-metastore', 'hive_2_5_3_0_37-server', 'hive_2_5_3_0_37-server2', 'hive_2_5_3_0_37-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_5_3_0_37', 'knox', 'knox_2_5_3_0_37', 'livy', 'livy_2_5_3_0_37', 'mahout', 'mahout-doc', 'mahout_2_5_3_0_37', 'mahout_2_5_3_0_37-doc', 'oozie', 'oozie-client', 'oozie_2_5_3_0_37', 'oozie_2_5_3_0_37-client', 'phoenix', 'phoenix_2_5_3_0_37', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_5_3_0_37-admin', 'ranger_2_5_3_0_37-atlas-plugin', 'ranger_2_5_3_0_37-hbase-plugin', 'ranger_2_5_3_0_37-kafka-plugin', 'ranger_2_5_3_0_37-kms', 'ranger_2_5_3_0_37-knox-plugin', 'ranger_2_5_3_0_37-solr-plugin', 'ranger_2_5_3_0_37-storm-plugin', 'ranger_2_5_3_0_37-tagsync', 'ranger_2_5_3_0_37-usersync', 'slider', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_5_3_0_37', 'spark2_2_5_3_0_37-master', 'spark2_2_5_3_0_37-python', 'spark2_2_5_3_0_37-worker', 'spark_2_5_3_0_37', 'spark_2_5_3_0_37-master', 'spark_2_5_3_0_37-python', 'spark_2_5_3_0_37-worker', 'sqoop', 'sqoop-metastore', 'sqoop_2_5_3_0_37', 'sqoop_2_5_3_0_37-metastore', 'storm', 'storm-slider-client', 'storm_2_5_3_0_37', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_5_3_0_37', 'zookeeper', 'zookeeper-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'mysql-community-release', 'mysql-connector-java', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'atlas-metadata_2_5_3_0_37-hive-plugin', 'bigtop-jsvc', 'datafu_2_5_3_0_37', 'hadoop_2_5_3_0_37', 'hadoop_2_5_3_0_37-client', 
'hadoop_2_5_3_0_37-hdfs', 'hadoop_2_5_3_0_37-libhdfs', 'hadoop_2_5_3_0_37-mapreduce', 'hadoop_2_5_3_0_37-yarn', 'hdp-select', 'hive2_2_5_3_0_37', 'hive2_2_5_3_0_37-jdbc', 'hive_2_5_3_0_37', 'hive_2_5_3_0_37-hcatalog', 'hive_2_5_3_0_37-jdbc', 'hive_2_5_3_0_37-webhcat', 'pig_2_5_3_0_37', 'ranger_2_5_3_0_37-hdfs-plugin', 'ranger_2_5_3_0_37-hive-plugin', 'ranger_2_5_3_0_37-yarn-plugin', 'slider_2_5_3_0_37', 'spark2_2_5_3_0_37-yarn-shuffle', 'spark_2_5_3_0_37-yarn-shuffle', 'storm_2_5_3_0_37-slider-client', 'tez_2_5_3_0_37', 'tez_hive2_2_5_3_0_37', 'zookeeper_2_5_3_0_37', 'snappy-devel']
stdout:   /var/lib/ambari-agent/data/output-631.txt
2018-03-13 09:24:57,026 - Stack Feature Version Info: Cluster Stack=2.5, Command Stack=None, Command Version=None -> 2.5
2018-03-13 09:24:57,037 - Using hadoop conf dir: /usr/hdp/2.5.3.0-37/hadoop/conf
2018-03-13 09:24:57,040 - Group['hdfs'] {}
2018-03-13 09:24:57,042 - Group['hadoop'] {}
2018-03-13 09:24:57,043 - Group['users'] {}
2018-03-13 09:24:57,044 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,046 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,048 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,049 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-13 09:24:57,052 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-13 09:24:57,055 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-03-13 09:24:57,059 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,062 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,065 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-13 09:24:57,067 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-13 09:24:57,072 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-03-13 09:24:57,082 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-03-13 09:24:57,083 - Group['hdfs'] {}
2018-03-13 09:24:57,084 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-03-13 09:24:57,086 - FS Type:
2018-03-13 09:24:57,086 - Directory['/etc/hadoop'] {'mode': 0755}
2018-03-13 09:24:57,134 - File['/usr/hdp/2.5.3.0-37/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-03-13 09:24:57,136 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-03-13 09:24:57,168 - Repository['HDP-2.5-repo-2'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-03-13 09:24:57,190 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': '[HDP-2.5-repo-2]\nname=HDP-2.5-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-13 09:24:57,192 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because contents don't match
2018-03-13 09:24:57,193 - Repository['HDP-UTILS-1.1.0.21-repo-2'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-03-13 09:24:57,204 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': '[HDP-2.5-repo-2]\nname=HDP-2.5-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-2]\nname=HDP-UTILS-1.1.0.21-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-13 09:24:57,205 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because contents don't match
2018-03-13 09:24:57,217 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-13 09:24:57,494 - Skipping installation of existing package unzip
2018-03-13 09:24:57,495 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-13 09:24:57,527 - Skipping installation of existing package curl
2018-03-13 09:24:57,527 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-13 09:24:57,566 - Skipping installation of existing package hdp-select
2018-03-13 09:24:57,574 - The repository with version 2.5.3.0-37 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-03-13 09:24:57,899 - Command repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2
2018-03-13 09:24:57,899 - Applicable repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2
2018-03-13 09:24:57,903 - Looking for matching packages in the following repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2
2018-03-13 09:25:00,329 - Adding fallback repositories: HDP-2.5-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-03-13 09:25:03,636 - Package['zookeeper_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-13 09:25:03,783 - Skipping installation of existing package zookeeper_2_5_3_0_37
2018-03-13 09:25:03,787 - No package found for zookeeper_${stack_version}-server(zookeeper_(\d|_)+-server$)
2018-03-13 09:25:03,793 - The repository with version 2.5.3.0-37 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
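The traceback above comes down to a single check: install_packages() asks format_package_name() to resolve zookeeper_${stack_version}-server, the agent applies the regexp shown in parentheses in the log (zookeeper_(\d|_)+-server$) to the "Available packages" list, and nothing matches, so Fail is raised. The list does contain zookeeper, zookeeper-server and zookeeper_2_5_3_0_37, but not zookeeper_2_5_3_0_37-server. A minimal Python sketch of that check (not the actual Ambari code, just the same pattern applied to names copied from this log):

import re

# Pattern exactly as printed in the agent log above.
pattern = re.compile(r"zookeeper_(\d|_)+-server$")

# Representative names from the "Available packages" list in the error.
available = ["zookeeper", "zookeeper-server", "zookeeper_2_5_3_0_37"]

# The install fails when no available name satisfies the pattern.
matches = [pkg for pkg in available if pattern.search(pkg)]
print(matches)  # [] -> "No package found for zookeeper_${stack_version}-server"

In other words, the repositories this command searched (HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2 and the fallback repo-1 entries) expose the unversioned zookeeper-server package but not the versioned zookeeper_2_5_3_0_37-server name that the pattern requires.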