Member since 08-30-2018 · 7 Posts · 0 Kudos Received · 0 Solutions
10-02-2018
09:15 AM
Which file do you think I should check? I ran ls in the following directory and can see there are many configuration files, so I am not sure which one to look into:

/usr/hdp/2.6.4.0-91/hadoop/conf
capacity-scheduler.xml      kms-acls.xml                secure
configuration.xsl           kms-env.sh                  slaves
container-executor.cfg      kms-log4j.properties        ssl-client.xml
core-site.xml               kms-site.xml                ssl-client.xml.example
hadoop-env.cmd              log4j.properties            ssl-server.xml
hadoop-env.sh               mapred-env.cmd              ssl-server.xml.example
hadoop-metrics2.properties  mapred-env.sh               taskcontroller.cfg
hadoop-metrics.properties   mapred-queues.xml.template  yarn-env.cmd
hadoop-policy.xml           mapred-site.xml             yarn-env.sh
hdfs-site.xml               mapred-site.xml.template    yarn-site.xml

We do not have /usr/hdp/2.6.4.0-91/spark2/conf.

Thank you,
Jessica
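In case it is useful, here is roughly how I have been narrowing it down on our side (just a sketch; the property name in the grep is only an example, not necessarily the one that matters):

# Ask hdp-select which Spark client packages/versions are actually installed
hdp-select status | grep -i spark

# Search the whole Hadoop conf dir for a property instead of guessing the file
grep -ril "yarn.resourcemanager" /usr/hdp/2.6.4.0-91/hadoop/conf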
09-28-2018
03:14 AM
We encountered the following problem during HDP 2.6.4 installation using Ambari. It stopped at the Livy Server install step. I attached the error message below. Please help!

stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/livy_server.py", line 141, in <module>
    LivyServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/livy_server.py", line 45, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 821, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 53, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 264, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 266, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 283, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install spark_2_6_4_0_91' returned 1.
Error: Package: glibc-2.17-196.el7_4.2.i686 (rhel-7-server-rpms) Requires: glibc-common = 2.17-196.el7_4.2 Installed: glibc-common-2.17-222.el7.x86_64 (@repo/$releasever) glibc-common = 2.17-222.el7 Available: glibc-common-2.17-55.el7.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-55.el7 Available: glibc-common-2.17-55.el7_0.1.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-55.el7_0.1 Available: glibc-common-2.17-55.el7_0.3.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-55.el7_0.3 Available: glibc-common-2.17-55.el7_0.5.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-55.el7_0.5 Available: glibc-common-2.17-78.el7.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-78.el7 Available: glibc-common-2.17-105.el7.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-105.el7 Available: glibc-common-2.17-106.el7_2.1.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-106.el7_2.1 Available: glibc-common-2.17-106.el7_2.4.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-106.el7_2.4 Available: glibc-common-2.17-106.el7_2.6.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-106.el7_2.6 Available: glibc-common-2.17-106.el7_2.8.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-106.el7_2.8 Available: glibc-common-2.17-157.el7.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-157.el7 Available: glibc-common-2.17-157.el7_3.1.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-157.el7_3.1 Available: glibc-common-2.17-157.el7_3.2.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-157.el7_3.2 Available: glibc-common-2.17-157.el7_3.4.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-157.el7_3.4 Available: glibc-common-2.17-157.el7_3.5.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-157.el7_3.5 Available: glibc-common-2.17-196.el7.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-196.el7 Available: glibc-common-2.17-196.el7_4.2.x86_64 (rhel-7-server-rpms) glibc-common = 2.17-196.el7_4.2 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Loaded plugins: langpacks, product-id, subscription-manager stdout: 2018-09-27 11:57:39,013 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6 2018-09-27 11:57:39,018 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf 2018-09-27 11:57:39,020 - Group['livy'] {} 2018-09-27 11:57:39,021 - Group['spark'] {} 2018-09-27 11:57:39,021 - Group['hdfs'] {} 2018-09-27 11:57:39,022 - Group['hadoop'] {} 2018-09-27 11:57:39,022 - Group['users'] {} 2018-09-27 11:57:39,022 - Group['knox'] {} 2018-09-27 11:57:39,023 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,025 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,026 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,027 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2018-09-27 11:57:39,031 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,033 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,035 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2018-09-27 11:57:39,036 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,040 - User['kafka'] 
{'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,041 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None} 2018-09-27 11:57:39,043 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,044 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,045 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,046 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,048 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-09-27 11:57:39,049 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-27 11:57:39,051 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2018-09-27 11:57:39,082 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2018-09-27 11:57:39,083 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2018-09-27 11:57:39,084 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-27 11:57:39,086 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-09-27 11:57:39,087 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2018-09-27 11:57:39,127 - call returned (0, '1002') 2018-09-27 11:57:39,127 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2018-09-27 11:57:39,159 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] due to not_if 2018-09-27 11:57:39,160 - Group['hdfs'] {} 2018-09-27 11:57:39,160 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']} 2018-09-27 11:57:39,161 - FS Type: 2018-09-27 11:57:39,161 - Directory['/etc/hadoop'] {'mode': 0755} 2018-09-27 11:57:39,178 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2018-09-27 11:57:39,179 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2018-09-27 11:57:39,197 - Repository['HDP-2.6-repo-51'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-51', 'mirror_list': None} 2018-09-27 11:57:39,206 - File['/etc/yum.repos.d/ambari-hdp-51.repo'] {'content': 
'[HDP-2.6-repo-51]\nname=HDP-2.6-repo-51\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-09-27 11:57:39,207 - Writing File['/etc/yum.repos.d/ambari-hdp-51.repo'] because contents don't match 2018-09-27 11:57:39,207 - Repository with url http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0 is not created due to its tags: set([u'GPL']) 2018-09-27 11:57:39,208 - Repository['HDP-UTILS-1.1.0.22-repo-51'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-51', 'mirror_list': None} 2018-09-27 11:57:39,212 - File['/etc/yum.repos.d/ambari-hdp-51.repo'] {'content': '[HDP-2.6-repo-51]\nname=HDP-2.6-repo-51\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-51]\nname=HDP-UTILS-1.1.0.22-repo-51\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-09-27 11:57:39,212 - Writing File['/etc/yum.repos.d/ambari-hdp-51.repo'] because contents don't match 2018-09-27 11:57:39,217 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-09-27 11:57:40,130 - Skipping installation of existing package unzip 2018-09-27 11:57:40,130 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-09-27 11:57:40,804 - Skipping installation of existing package curl 2018-09-27 11:57:40,805 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-09-27 11:57:41,517 - Skipping installation of existing package hdp-select 2018-09-27 11:57:41,522 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed 2018-09-27 11:57:41,879 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6 2018-09-27 11:57:41,881 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf 2018-09-27 11:57:41,890 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20} 2018-09-27 11:57:41,945 - call returned (0, 'spark-client - 2.6.4.0-91') 2018-09-27 11:57:41,952 - Command repositories: HDP-2.6-repo-51, HDP-2.6-GPL-repo-51, HDP-UTILS-1.1.0.22-repo-51 2018-09-27 11:57:41,953 - Applicable repositories: HDP-2.6-repo-51, HDP-2.6-GPL-repo-51, HDP-UTILS-1.1.0.22-repo-51 2018-09-27 11:57:41,955 - Looking for matching packages in the following repositories: HDP-2.6-repo-51, HDP-2.6-GPL-repo-51, HDP-UTILS-1.1.0.22-repo-51 2018-09-27 11:57:52,609 - Package['spark_2_6_4_0_91'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-09-27 11:57:53,504 - Installing package spark_2_6_4_0_91 ('/usr/bin/yum -d 0 -e 0 -y install spark_2_6_4_0_91') 2018-09-27 11:58:21,646 - Execution of '/usr/bin/yum -d 0 -e 0 -y install spark_2_6_4_0_91' returned 1. 
Error: Package: glibc-2.17-196.el7_4.2.i686 (rhel-7-server-rpms)
           Requires: glibc-common = 2.17-196.el7_4.2
           Installed: glibc-common-2.17-222.el7.x86_64 (@repo/$releasever)
               glibc-common = 2.17-222.el7
           Available: glibc-common-2.17-55.el7.x86_64 (rhel-7-server-rpms)
               glibc-common = 2.17-55.el7
           Available: glibc-common-2.17-55.el7_0.1.x86_64 (rhel-7-server-rpms)
               glibc-common = 2.17-55.el7_0.1
           Available: glibc-common-2.17-55.el7_0.3.x86_64 (rhel-7-server-rpms)
               glibc-common = 2.17-55.el7_0.3
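The part of the error that seems to matter is the glibc mismatch: the 32-bit glibc-2.17-196.el7_4.2.i686 that yum wants to pull in requires glibc-common = 2.17-196.el7_4.2, while the host already has the newer glibc-common-2.17-222.el7 from a different repository. Here is a rough sketch of what I am planning to check and try next (the exact yum invocation is my own assumption, not something suggested by the installer, so please correct me if there is a better way):

# Show which glibc/glibc-common versions and architectures are installed, and from which repo
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' glibc glibc-common
yum list installed glibc glibc-common

# One possible way to clear the conflict: let yum bring the 32-bit glibc up to the
# same release as the already-installed 64-bit glibc-common (verify before running)
yum update glibc glibc.i686 glibc-common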
09-12-2018
06:10 PM
Below is the error message we saw in the Solr web UI:
SolrCore Initialization Failures
test: org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException: Could not find configName for collection test found:[]
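From what I can tell, this means ZooKeeper has no configset linked to the collection name "test". Here is a sketch of how one might confirm that and upload/link a config (the zkcli.sh path, the ZooKeeper address zk1:2181, and the configset name test_config are placeholders for our environment, not values taken from the error):

# List the configsets that ZooKeeper actually knows about
/path/to/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd list

# If the config is missing, upload one and link it to the collection
/path/to/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd upconfig -confdir /path/to/solr/server/solr/configsets/data_driven_schema_configs/conf -confname test_config
/path/to/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd linkconfig -collection test -confname test_config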
09-12-2018
04:00 PM
We used solr create -c test to create the collection and it errored out. So we tried to delete it with the following command, which also failed:

# solr delete -c test
ERROR: Error loading config name for collection test
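Since bin/solr delete fails while looking up the config name, here is a rough sketch of what one could try instead (the host, port, and ZooKeeper address are placeholders for our environment, and this may hit the same config lookup problem):

# Delete the collection through the Collections API instead of the bin/solr wrapper
curl "http://localhost:8983/solr/admin/collections?action=DELETE&name=test"

# If that also complains, look at what is left for the collection and configs in ZooKeeper
/path/to/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd list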
- Tags:
- Data Processing
- solr
Labels:
- Apache Solr
09-12-2018
01:50 AM
Here is what I got from tail -f /var/log/ambari-server/ambari-server.log:

INFO:root:Executing parallel bootstrap
INFO:root:Finished parallel bootstrap
11 Sep 2018 13:17:32,406 INFO [pool-18-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/1 11 Sep 2018 13:17:32,407INFO [pool-18-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***] 11 Sep 2018 13:17:33,389INFO [pool-18-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/1 11 Sep 2018 13:17:33,389INFO [pool-18-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***] 11 Sep 2018 13:17:33,407INFO [pool-18-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/1 11 Sep 2018 13:17:33,407INFO [pool-18-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***] 11 Sep 2018 13:22:14,962INFO [pool-17-thread-1] AmbariMetricSinkImpl:278 - No live collector to
send metrics to. Metrics to be sent will be discarded. This message will be
skipped for the next 20 times. 11 Sep 2018 13:37:18,613INFO [ambari-client-thread-37] BootStrapImpl:108 - BootStrapping hosts ***01.**.***:***02.**.***:***03.**.***: 11 Sep 2018 13:37:18,615INFO [Thread-32] BSRunner:189 - Kicking off the scheduler for polling on
logs in /var/run/ambari-server/bootstrap/2 11 Sep 2018 13:37:18,616INFO [pool-19-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/2 11 Sep 2018 13:37:18,617INFO [Thread-32] BSRunner:258 - Host= ***01.**.***,***02.**.***,***03.**.***
bs=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py
requestDir=/var/run/ambari-server/bootstrap/2 user=root sshPort=22
keyfile=/var/run/ambari-server/bootstrap/2/sshKey passwordFile null server=***01.**.***
version=2.6.1.5 serverPort=8080 userRunAs=root timeout=300 11 Sep 2018 13:37:18,617INFO [pool-19-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***] 11 Sep 2018 13:37:18,623INFO [Thread-32] BSRunner:286 - Bootstrap output,
log=/var/run/ambari-server/bootstrap/2/bootstrap.err
/var/run/ambari-server/bootstrap/2/bootstrap.out at ***01.**.*** 11 Sep 2018 13:37:28,618INFO [pool-19-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/2 11 Sep 2018 13:37:28,618INFO [pool-19-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***] 11 Sep 2018 13:37:30,902WARN [qtp-ambari-agent-47] SecurityFilter:103 - Request https://***01.**.***:8440/ca
doesn't match any pattern. 11 Sep 2018 13:37:30,903WARN [qtp-ambari-agent-47] SecurityFilter:62 - This request is not
allowed on this port: https://***01.**.***:8440/ca 11 Sep 2018 13:37:31,308INFO [qtp-ambari-agent-48] HeartBeatHandler:385 - agentOsType = redhat7 11 Sep 2018 13:37:31,392INFO [qtp-ambari-agent-48] HostImpl:334 - Received host registration,
host=[hostname=***01,fqdn=***01.**.***,domain=***.***,architecture=x86_64,processorcount=4,physicalprocessorcount=4,osname=redhat,osversion=7.4,osfamily=redhat,memory=28801816,uptime_hours=2,mounts=(available=13212668,mountpoint=/,used=3554308,percent=22%,size=16766976,device=/dev/mapper/vg00-root,type=xfs)(available=14389996,mountpoint=/dev,used=0,percent=0%,size=14389996,device=devtmpfs,type=devtmpfs)(available=842240,mountpoint=/boot,used=196096,percent=19%,size=1038336,device=/dev/sda1,type=xfs)(available=3299920,mountpoint=/opt,used=359856,percent=10%,size=3659776,device=/dev/mapper/vg00-opt,type=xfs)(available=2629132,mountpoint=/var,used=3652084,percent=59%,size=6281216,device=/dev/mapper/vg00-var,type=xfs)(available=1493104,mountpoint=/opt/patrol,used=593808,percent=29%,size=2086912,device=/dev/mapper/vg00-patrol,type=xfs)(available=3984360,mountpoint=/opt/Tanium,used=199704,percent=5%,size=4184064,device=/dev/mapper/vg00-tanium,type=xfs)(available=4151136,mountpoint=/opt/maestro,used=32928,percent=1%,size=4184064,device=/dev/mapper/vg00-maestro,type=xfs)(available=1005408,mountpoint=/opt/ibm-ucd,used=32928,percent=4%,size=1038336,device=/dev/mapper/vg00-ibmucd,type=xfs)(available=4150944,mountpoint=/tmp,used=33120,percent=1%,size=4184064,device=/dev/mapper/vg00-tmp,type=xfs)(available=1808296,mountpoint=/opt/bmc/bladelogic,used=278616,percent=14%,size=2086912,device=/dev/mapper/vg00-bladelogic,type=xfs)(available=3101972,mountpoint=/home,used=33516,percent=2%,size=3135488,device=/dev/mapper/vg00-home,type=xfs)(available=47129936,mountpoint=/comp/archive,used=32944,percent=1%,size=47162880,device=/dev/mapper/vg01-comparchive,type=xfs)(available=36649296,mountpoint=/comp/logs,used=32944,percent=1%,size=36682240,device=/dev/mapper/vg01-complogs,type=xfs)(available=62850896,mountpoint=/comp/runtime,used=32944,percent=1%,size=62883840,device=/dev/mapper/vg01-compruntime,type=xfs)(available=178284900,mountpoint=/***/resource,used=17503980,percent=9%,size=206290920,device=/dev/sdb1,type=ext4)] , registrationTime=1536687451308, agentVersion=2.6.1.5 11 Sep 2018 13:37:31,393INFO [qtp-ambari-agent-48] TopologyManager:637 -
TopologyManager.onHostRegistered: Entering 11 Sep 2018 13:37:31,393INFO [qtp-ambari-agent-48] TopologyManager:639 -
TopologyManager.onHostRegistered: host = ***01.**.*** is already associated
with the cluster or is currently being processed 11 Sep 2018 13:37:31,403INFO [qtp-ambari-agent-48] HeartBeatHandler:464 - Recovery configuration
set to RecoveryConfig{, type=AUTO_START, maxCount=6, windowInMinutes=60,
retryGap=5, maxLifetimeCount=1024, components=,
recoveryTimestamp=1536687451402} 11 Sep 2018 13:37:31,565WARN [qtp-ambari-agent-48] SecurityFilter:103 - Request https://***01.**.***:8440/ca
doesn't match any pattern. 11 Sep 2018 13:37:31,565WARN [qtp-ambari-agent-48] SecurityFilter:62 - This request is not
allowed on this port: https://***01.**.***:8440/ca 11 Sep 2018 13:37:31,575WARN [qtp-ambari-agent-48] SecurityFilter:103 - Request https://***01.**.***:8440/ca
doesn't match any pattern. 11 Sep 2018 13:37:31,575WARN [qtp-ambari-agent-48] SecurityFilter:62 - This request is not allowed
on this port: https://***01.**.***:8440/ca 11 Sep 2018 13:37:31,977INFO [qtp-ambari-agent-48] HeartBeatHandler:385 - agentOsType = redhat7 11 Sep 2018 13:37:31,978INFO [qtp-ambari-agent-46] HeartBeatHandler:385 - agentOsType = redhat7 11 Sep 2018 13:37:32,030INFO [qtp-ambari-agent-48] HostImpl:334 - Received host registration,
host=[hostname=***03,fqdn=***03.**.***,domain=***.***,architecture=x86_64,processorcount=4,physicalprocessorcount=4,osname=redhat,osversion=7.4,osfamily=redhat,memory=28801816,uptime_hours=2,mounts=(available=13502536,mountpoint=/,used=3264440,percent=20%,size=16766976,device=/dev/mapper/vg00-root,type=xfs)(available=14389944,mountpoint=/dev,used=0,percent=0%,size=14389944,device=devtmpfs,type=devtmpfs)(available=842140,mountpoint=/boot,used=196196,percent=19%,size=1038336,device=/dev/sda1,type=xfs)(available=4151000,mountpoint=/tmp,used=33064,percent=1%,size=4184064,device=/dev/mapper/vg00-tmp,type=xfs)(available=3299936,mountpoint=/opt,used=359840,percent=10%,size=3659776,device=/dev/mapper/vg00-opt,type=xfs)(available=1493100,mountpoint=/opt/patrol,used=593812,percent=29%,size=2086912,device=/dev/mapper/vg00-patrol,type=xfs)(available=4151136,mountpoint=/opt/maestro,used=32928,percent=1%,size=4184064,device=/dev/mapper/vg00-maestro,type=xfs)(available=1005408,mountpoint=/opt/ibm-ucd,used=32928,percent=4%,size=1038336,device=/dev/mapper/vg00-ibmucd,type=xfs)(available=3469776,mountpoint=/var,used=2811440,percent=45%,size=6281216,device=/dev/mapper/vg00-var,type=xfs)(available=3984212,mountpoint=/opt/Tanium,used=199852,percent=5%,size=4184064,device=/dev/mapper/vg00-tanium,type=xfs)(available=1808284,mountpoint=/opt/bmc/bladelogic,used=278628,percent=14%,size=2086912,device=/dev/mapper/vg00-bladelogic,type=xfs)(available=3101892,mountpoint=/home,used=33596,percent=2%,size=3135488,device=/dev/mapper/vg00-home,type=xfs)(available=62850896,mountpoint=/comp/runtime,used=32944,percent=1%,size=62883840,device=/dev/mapper/vg01-compruntime,type=xfs)(available=36649296,mountpoint=/comp/logs,used=32944,percent=1%,size=36682240,device=/dev/mapper/vg01-complogs,type=xfs)(available=47129936,mountpoint=/comp/archive,used=32944,percent=1%,size=47162880,device=/dev/mapper/vg01-comparchive,type=xfs)(available=192109732,mountpoint=/***/resource,used=3679148,percent=2%,size=206290920,device=/dev/sdb1,type=ext4)] , registrationTime=1536687451977, agentVersion=2.6.1.5 11 Sep 2018 13:37:32,030INFO [qtp-ambari-agent-48] TopologyManager:637 -
TopologyManager.onHostRegistered: Entering 11 Sep 2018 13:37:32,030INFO [qtp-ambari-agent-48] TopologyManager:639 -
TopologyManager.onHostRegistered: host = ***03.**.*** is already associated
with the cluster or is currently being processed 11 Sep 2018 13:37:32,035INFO [qtp-ambari-agent-46] HostImpl:334 - Received host registration,
host=[hostname=***02,fqdn=***02.**.***,domain=***.***,architecture=x86_64,processorcount=4,physicalprocessorcount=4,osname=redhat,osversion=7.4,osfamily=redhat,memory=28801816,uptime_hours=2,mounts=(available=13350860,mountpoint=/,used=3416116,percent=21%,size=16766976,device=/dev/mapper/vg00-root,type=xfs)(available=14389944,mountpoint=/dev,used=0,percent=0%,size=14389944,device=devtmpfs,type=devtmpfs)(available=842140,mountpoint=/boot,used=196196,percent=19%,size=1038336,device=/dev/sda1,type=xfs)(available=4335448,mountpoint=/var,used=1945768,percent=31%,size=6281216,device=/dev/mapper/vg00-var,type=xfs)(available=3299920,mountpoint=/opt,used=359856,percent=10%,size=3659776,device=/dev/mapper/vg00-opt,type=xfs)(available=3983916,mountpoint=/opt/Tanium,used=200148,percent=5%,size=4184064,device=/dev/mapper/vg00-tanium,type=xfs)(available=4151136,mountpoint=/opt/maestro,used=32928,percent=1%,size=4184064,device=/dev/mapper/vg00-maestro,type=xfs)(available=4151000,mountpoint=/tmp,used=33064,percent=1%,size=4184064,device=/dev/mapper/vg00-tmp,type=xfs)(available=1005408,mountpoint=/opt/ibm-ucd,used=32928,percent=4%,size=1038336,device=/dev/mapper/vg00-ibmucd,type=xfs)(available=3101984,mountpoint=/home,used=33504,percent=2%,size=3135488,device=/dev/mapper/vg00-home,type=xfs)(available=1808268,mountpoint=/opt/bmc/bladelogic,used=278644,percent=14%,size=2086912,device=/dev/mapper/vg00-bladelogic,type=xfs)(available=36649296,mountpoint=/comp/logs,used=32944,percent=1%,size=36682240,device=/dev/mapper/vg01-complogs,type=xfs)(available=62850896,mountpoint=/comp/runtime,used=32944,percent=1%,size=62883840,device=/dev/mapper/vg01-compruntime,type=xfs)(available=47129936,mountpoint=/comp/archive,used=32944,percent=1%,size=47162880,device=/dev/mapper/vg01-comparchive,type=xfs)(available=1496204,mountpoint=/opt/patrol,used=590708,percent=29%,size=2086912,device=/dev/mapper/vg00-patrol,type=xfs)(available=192198208,mountpoint=/***/resource,used=3590672,percent=2%,size=206290920,device=/dev/sdb1,type=ext4)] , registrationTime=1536687451978, agentVersion=2.6.1.5 11 Sep 2018 13:37:32,035INFO [qtp-ambari-agent-46] TopologyManager:637 -
TopologyManager.onHostRegistered: Entering 11 Sep 2018 13:37:32,035INFO [qtp-ambari-agent-46] TopologyManager:639 -
TopologyManager.onHostRegistered: host = ***02.**.*** is already associated
with the cluster or is currently being processed 11 Sep 2018 13:37:32,038INFO [qtp-ambari-agent-48] HeartBeatHandler:464 - Recovery configuration
set to RecoveryConfig{, type=AUTO_START, maxCount=6, windowInMinutes=60,
retryGap=5, maxLifetimeCount=1024, components=METRICS_COLLECTOR,
recoveryTimestamp=1536687452037} 11 Sep 2018 13:37:32,041INFO [qtp-ambari-agent-46] HeartBeatHandler:464 - Recovery configuration
set to RecoveryConfig{, type=AUTO_START, maxCount=6, windowInMinutes=60,
retryGap=5, maxLifetimeCount=1024, components=,
recoveryTimestamp=1536687452041} 11 Sep 2018 13:37:36,627INFO [Thread-32] BSRunner:310 - Script log Mesg INFO:root:BootStrapping hosts ['***01.**.***', '***02.**.***', '***03.**.***'] using
/usr/lib/python2.6/site-packages/ambari_server cluster primary OS: redhat7 with
user 'root'with ssh Port '22' sshKey File
/var/run/ambari-server/bootstrap/2/sshKey password File null using tmp dir
/var/run/ambari-server/bootstrap/2 ambari: ***01.**.***; server_port: 8080;
ambari version: 2.6.1.5; user_run_as: root INFO:root:Executing parallel bootstrap INFO:root:Finished parallel bootstrap 11 Sep 2018 13:37:36,627INFO [pool-19-thread-1] BSHostStatusCollector:55 - Request directory
/var/run/ambari-server/bootstrap/2 11 Sep 2018 13:37:36,627INFO [pool-19-thread-1] BSHostStatusCollector:62 - HostList for polling
on [***01.**.***, ***02.**.***, ***03.**.***]
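The lines that stand out to me are the SecurityFilter warnings about https://***01.**.***:8440/ca not matching any pattern and the "No live collector to send metrics to" message; the host registrations themselves look like they complete. Here is a sketch of how I have been trying to narrow the logs down (the paths are the standard Ambari log locations, so adjust if your layout differs):

# Only the warnings/errors from the server log around the bootstrap window
grep -E "WARN|ERROR" /var/log/ambari-server/ambari-server.log | tail -n 50

# The matching story from the agent side on each host
tail -n 100 /var/log/ambari-agent/ambari-agent.log
grep -E "WARN|ERROR" /var/log/ambari-agent/ambari-agent.log | tail -n 30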
09-11-2018
02:14 PM
Hi Akhil and Vincius, thanks a lot for the help! Yes, after I applied the solution indicated here, the error message disappeared. However, the registration still failed. I attached the log below. ========================== Creating target directory... ========================== Command start time 2018-09-10 17:15:36 Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:36 ========================== Copying ambari sudo script... ========================== Command start time 2018-09-10 17:15:36 scp /var/lib/ambari-server/ambari-sudo.sh host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:36 ========================== Copying common functions script... ========================== Command start time 2018-09-10 17:15:36 scp
/usr/lib/python2.6/site-packages/ambari_commons host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:37 ========================== Copying create-python-wrap script... ========================== Command start time 2018-09-10 17:15:37 scp /var/lib/ambari-server/create-python-wrap.sh host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:37 ========================== Copying OS type check script... ========================== Command start time 2018-09-10 17:15:37 scp
/usr/lib/python2.6/site-packages/ambari_server/os_check_type.py host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:38 ========================== Running create-python-wrap script... ========================== Command start time 2018-09-10 17:15:38 Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:38 ========================== Running OS type check... ========================== Command start time 2018-09-10 17:15:38 Cluster primary/cluster OS family is redhat7 and
local/current OS family is redhat7 ******************************************************************************* Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:38 ========================== Checking 'sudo' package on remote host... ========================== Command start time 2018-09-10 17:15:38 Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:39 ========================== Copying repo file to 'tmp' folder... ========================== Command start time 2018-09-10 17:15:39 scp /etc/yum.repos.d/ambari.repo host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:39 ========================== Moving file to repo dir... ========================== Command start time 2018-09-10 17:15:39 Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:39 ========================== Changing permissions for ambari.repo... ========================== Command start time 2018-09-10 17:15:39 Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:40 ========================== Copying setup script file... ========================== Command start time 2018-09-10 17:15:40 scp
/usr/lib/python2.6/site-packages/ambari_server/setupAgent.py host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:40 ========================== Running setup agent script... ========================== Command start time 2018-09-10 17:15:40 ("INFO 2018-09-10 17:15:48,906 security.py:93
- SSL Connect being called.. connecting to the server INFO 2018-09-10 17:15:48,962 security.py:60 - SSL
connection established. Two-way SSL authentication is turned off on the server. INFO 2018-09-10 17:15:49,065 Controller.py:196 -
Registration Successful (response id = 0) INFO 2018-09-10 17:15:49,066
ClusterConfiguration.py:119 - Updating cached configurations for cluster ***_*** INFO 2018-09-10 17:15:49,079
RecoveryManager.py:577 - RecoverConfig = {u'components': u'', u'maxCount': u'6', u'maxLifetimeCount': u'1024', u'recoveryTimestamp': 1536614149004, u'retryGap': u'5', u'type':
u'AUTO_START', u'windowInMinutes': u'60'} INFO 2018-09-10 17:15:49,079
RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60
minutes with gap of 5 minutes between and lifetime max being 1024. Enabled
components - INFO 2018-09-10 17:15:49,079 AmbariConfig.py:316 -
Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (agent.auto.cache.update) with value (true) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (java.home) with value (/usr/jdk64/jdk1.8.0_112) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-09-10 17:15:49,085
AlertSchedulerHandler.py:291 - [AlertScheduler] Caching cluster ***_*** with
alert hash c5ef37fa8872c86e4ca91019b1b160e3 INFO 2018-09-10 17:15:49,089
AlertSchedulerHandler.py:230 - [AlertScheduler] Reschedule Summary: 0
rescheduled, 0 unscheduled INFO 2018-09-10 17:15:49,090 Controller.py:516 -
Registration response from ***.***.net was OK INFO 2018-09-10 17:15:49,090 Controller.py:521 -
Resetting ActionQueue... ", None) ("INFO 2018-09-10 17:15:48,906 security.py:93
- SSL Connect being called.. connecting to the server INFO 2018-09-10 17:15:48,962 security.py:60 - SSL
connection established. Two-way SSL authentication is turned off on the server. INFO 2018-09-10 17:15:49,065 Controller.py:196 -
Registration Successful (response id = 0) INFO 2018-09-10 17:15:49,066
ClusterConfiguration.py:119 - Updating cached configurations for cluster ***_*** INFO 2018-09-10 17:15:49,079
RecoveryManager.py:577 - RecoverConfig = {u'components': u'', u'maxCount': u'6', u'maxLifetimeCount': u'1024', u'recoveryTimestamp': 1536614149004, u'retryGap': u'5', u'type':
u'AUTO_START', u'windowInMinutes': u'60'} INFO 2018-09-10 17:15:49,079 RecoveryManager.py:677
- ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5
minutes between and lifetime max being 1024. Enabled components - INFO 2018-09-10 17:15:49,079 AmbariConfig.py:316 -
Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (agent.auto.cache.update) with value (true) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (java.home) with value (/usr/jdk64/jdk1.8.0_112) INFO 2018-09-10 17:15:49,080 AmbariConfig.py:316 -
Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-09-10 17:15:49,085
AlertSchedulerHandler.py:291 - [AlertScheduler] Caching cluster ***_*** with
alert hash c5ef37fa8872c86e4ca91019b1b160e3 INFO 2018-09-10 17:15:49,089
AlertSchedulerHandler.py:230 - [AlertScheduler] Reschedule Summary: 0
rescheduled, 0 unscheduled INFO 2018-09-10 17:15:49,090 Controller.py:516 -
Registration response from ***.***.net was OK INFO 2018-09-10 17:15:49,090 Controller.py:521 -
Resetting ActionQueue... ", None) Connection to ***.***.net closed. SSH command execution finished host=***.***.net, exitcode=0 Command end time 2018-09-10 17:15:51 Registering with the server... Registration with the server failed.
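What puzzles me is that the agent output above says "Registration Successful (response id = 0)" and "Registration response from ***.***.net was OK", yet the wizard still reports "Registration with the server failed." Here is a sketch of what I am checking next on the target hosts (the config and log paths are the standard Ambari agent locations; the grep pattern is just an example):

# Is the agent process up, and is it pointed at the right Ambari server?
ambari-agent status
grep -A2 "^\[server\]" /etc/ambari-agent/conf/ambari-agent.ini

# Recent agent-side warnings/errors after the registration attempt
grep -E "WARN|ERROR" /var/log/ambari-agent/ambari-agent.log | tail -n 30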
09-07-2018
07:52 PM
We were trying to install HDP-2.6.4.0 using Ambari 2.6.1.5. We got the following error at the Confirm Hosts step.
INFO 2018-09-07 12:51:45,461 DataCleaner.py:122 - Data cleanup finished
INFO 2018-09-07 12:51:45,461 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname '***.net' using socket.getfqdn().
INFO 2018-09-07 12:51:45,595 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2018-09-07 12:51:45,598 main.py:437 - Connecting to Ambari server at https://***.net:8440 (10.215.102.157)
INFO 2018-09-07 12:51:45,598 NetUtil.py:70 - Connecting to https://***:8440/ca
ERROR 2018-09-07 12:51:45,601 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-09-07 12:51:45,601 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-09-07 12:51:45,601 NetUtil.py:124 - Server at https://***:8440 is not reachable, sleeping for 10 seconds...
", None)
*** is our domain; sorry, due to company policy we have to mask it with wildcards.
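The SSLError and "EOF occurred in violation of protocol" message point at an SSL negotiation problem between the agent's Python/openssl stack and the Ambari server on port 8440. Here is a sketch of what one might check, plus the workaround that is often suggested for this combination (treat the config change as an assumption to verify for your Ambari version, not an official fix):

# Versions of the SSL stack the agent is actually using
openssl version
python -c "import ssl; print(ssl.OPENSSL_VERSION)"

# Commonly suggested workaround: force TLSv1.2 for the agent, then restart it.
# Add the line below under the [security] section of /etc/ambari-agent/conf/ambari-agent.ini
#   force_https_protocol=PROTOCOL_TLSv1_2
ambari-agent restart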
- Tags:
- Hadoop Core
- hdp-2.6
Labels:
- Hortonworks Data Platform (HDP)