
Ambari Registration with the server failed.

Explorer

These are my error logs:

 

==========================Creating target directory...==========================
Command start time 2019-09-02 17:01:05
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:05
==========================Copying ambari sudo script...==========================
Command start time 2019-09-02 17:01:05
scp /var/lib/ambari-server/ambari-sudo.sh
host=master, exitcode=0
Command end time 2019-09-02 17:01:05
==========================Copying common functions script...==========================
Command start time 2019-09-02 17:01:05
scp /usr/lib/python2.6/site-packages/ambari_commons
host=master, exitcode=0
Command end time 2019-09-02 17:01:06
==========================Copying OS type check script...==========================
Command start time 2019-09-02 17:01:06
scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=master, exitcode=0
Command end time 2019-09-02 17:01:06
==========================Running OS type check...==========================
Command start time 2019-09-02 17:01:06
Cluster primary/cluster OS family is ubuntu16 and local/current OS family is ubuntu16
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:06
==========================Checking 'sudo' package on remote host...==========================
Command start time 2019-09-02 17:01:06
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:06
==========================Copying repo file to 'tmp' folder...==========================
Command start time 2019-09-02 17:01:06
scp /etc/apt/sources.list.d/ambari.list
host=master, exitcode=0
Command end time 2019-09-02 17:01:07
==========================Moving file to repo dir...==========================
Command start time 2019-09-02 17:01:07
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:07
==========================Changing permissions for ambari.repo...==========================
Command start time 2019-09-02 17:01:07
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:07
==========================Update apt cache of repository...==========================
Command start time 2019-09-02 17:01:07
Hit:1 http://ppa.launchpad.net/webupd8team/java/ubuntu xenial InRelease
Hit:2 http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.4.2.0 Ambari InRelease
Reading package lists... Done
W: http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.4.2.0/dists/Ambari/InRelease: Signature by key DF52ED4F7A3A5882C0994C66B9733A7A07513CAD uses weak digest algorithm (SHA1)
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:13
==========================Copying setup script file...==========================
Command start time 2019-09-02 17:01:13
scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=master, exitcode=0
Command end time 2019-09-02 17:01:13
==========================Running setup agent script...==========================
Command start time 2019-09-02 17:01:13
INFO 2019-09-02 17:01:47,369 logger.py:71 - call returned (0, '')
INFO 2019-09-02 17:01:47,369 logger.py:71 - call[['test', '-w', '/run/user/0']] {'sudo': True, 'timeout': 5}
INFO 2019-09-02 17:01:47,372 logger.py:71 - call returned (0, '')
INFO 2019-09-02 17:01:47,378 Facter.py:194 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources.
INFO 2019-09-02 17:01:47,449 Controller.py:160 - Registering with master (127.0.1.1) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "", "physicalprocessorcount": 4, "kernelrelease": "4.15.0-45-generic", "uptime_days": "3", "memorytotal": 16390376, "swapfree": "0.95 GB", "memorysize": 16390376, "osfamily": "ubuntu", "swapsize": "0.95 GB", "processorcount": 4, "netmask": null, "timezone": "KST", "hardwareisa": "x86_64", "memoryfree": 12196540, "operatingsystem": "ubuntu", "kernelmajversion": "4.15", "kernelversion": "4.15.0", "macaddress": "30:9C:23:43:F8:99", "operatingsystemrelease": "16.04", "ipaddress": "127.0.1.1", "hostname": "master", "uptime_hours": "73", "fqdn": "master", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "8165416", "used": "0", "percent": "0%", "device": "udev", "mountpoint": "/dev", "type": "devtmpfs", "size": "8165416"}, {"available": "1627856", "used": "11184", "percent": "1%", "device": "tmpfs", "mountpoint": "/run", "type": "tmpfs", "size": "1639040"}, {"available": "903763276", "used": "7750156", "percent": "1%", "device": "/dev/sda1", "mountpoint": "/", "type": "ext4", "size": "960317832"}, {"available": "8195008", "used": "180", "percent": "1%", "device": "tmpfs", "mountpoint": "/dev/shm", "type": "tmpfs", "size": "8195188"}, {"available": "5116", "used": "4", "percent": "1%", "device": "tmpfs", "mountpoint": "/run/lock", "type": "tmpfs", "size": "5120"}, {"available": "1638984", "used": "56", "percent": "1%", "device": "tmpfs", "mountpoint": "/run/user/1000", "type": "tmpfs", "size": "1639040"}, {"available": "1639040", "used": "0", "percent": "0%", "device": "tmpfs", "mountpoint": "/run/user/0", "type": "tmpfs", "size": "1639040"}], "hardwaremodel": "x86_64", "uptime_seconds": "264311", "interfaces": "enp0s31f6,lo,wlx909f330d4aff"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.4.2.0", "agentEnv": {"transparentHugePage": "madvise", "hostHealth": {"agentTimeStampAtReporting": 1567411307448, "activeJavaProcs": [], "liveServices": [{"status": "Healthy", "name": "ntp", "desc": ""}]}, "reverseLookup": true, "alternatives": [], "umask": "18", "firewallName": "ufw", "stackFoldersAndFiles": [], "existingUsers": [], "firewallRunning": false}, "timestamp": 1567411307380, "hostname": "master", "responseId": -1, "publicHostname": "master"}')
INFO 2019-09-02 17:01:47,449 NetUtil.py:62 - Connecting to https://192.168.56.101 master master.hadoop.com:8440/connection_info
WARNING 2019-09-02 17:01:47,460 NetUtil.py:85 - GET https://192.168.56.101 master master.hadoop.com:8440/connection_info -> 400, body:
INFO 2019-09-02 17:01:47,461 security.py:100 - SSL Connect being called.. connecting to the server
INFO 2019-09-02 17:01:47,461 security.py:67 - Insecure connection to https://192.168.56.101 master master.hadoop.com:8441/ failed. Reconnecting using two-way SSL authentication..
INFO 2019-09-02 17:01:47,461 security.py:186 - Server certicate not exists, downloading
INFO 2019-09-02 17:01:47,461 security.py:209 - Downloading server cert from https://192.168.56.101 master master.hadoop.com:8440/cert/ca/
ERROR 2019-09-02 17:01:47,471 Controller.py:212 - Unable to connect to: https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 165, in registerWithServer
    ret = self.sendRequest(self.registerUrl, data)
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 496, in sendRequest
    raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
ERROR 2019-09-02 17:01:47,471 Controller.py:213 - Error:Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
WARNING 2019-09-02 17:01:47,471 Controller.py:214 - Sleeping for 19 seconds and then trying again
Connection to master closed.
SSH command execution finished
host=master, exitcode=0
Command end time 2019-09-02 17:01:48
Registering with the server...
Registration with the server failed.


And this is my ambari-agent.ini:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific

[server]
hostname=192.168.56.101 master master.hadoop.com
url_port=8440
secured_url_port=8441

[agent]
logdir=/var/log/ambari-agent
piddir=/var/run/ambari-agent
prefix=/var/lib/ambari-agent/data
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
data_cleanup_max_size_MB = 100
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true
run_as_user=root
parallel_execution=0
alert_grace_period=5
alert_kinit_timeout=14400000
system_resource_overrides=/etc/resource_overrides
; memory_threshold_soft_mb=400
; memory_threshold_hard_mb=1000

[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
ssl_verify_cert=0

[services]
pidLookupPath=/var/run/

[heartbeat]
state_interval_seconds=60
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
; 0 - unlimited
log_lines_count=300
idle_interval_min=1
idle_interval_max=10

[logging]
syslog_enabled=0

And this is my /etc/hosts:

127.0.0.1 localhost
127.0.1.1 master

192.168.56.101 master master.hadoop.com
192.168.56.102 slave1 slave1.hadoop.com
192.168.56.103 slave2 slave2.hadoop.com
192.168.56.104 slave3 slave3.hadoop.com
192.168.56.105 slave4 slave4.hadoop.com
192.168.56.106 slave5 slave5.hadoop.com

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

root@master:~# hostname
master
root@master:~# hostname -f
master

 

How can I fix it?

 


4 REPLIES

Master Mentor

@CoPen 

Your ambari-agent.ini file is pointing to an incorrect Ambari server: the hostname field should not contain an IP address and hostnames together.

You have the following entry:

[server]
hostname=192.168.56.101 master master.hadoop.com

 

Ideally it should be the following:

[server]
hostname=master.hadoop.com
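
If it helps, here is a minimal sketch of applying that change on the agent host; it assumes the default config location /etc/ambari-agent/conf/ambari-agent.ini (adjust the path if your install differs):

# point the agent at the server's single FQDN (no IP or alias mixed in)
sudo sed -i 's|^hostname=.*|hostname=master.hadoop.com|' /etc/ambari-agent/conf/ambari-agent.ini

# confirm the value took effect
grep '^hostname=' /etc/ambari-agent/conf/ambari-agent.ini

# restart the agent so it re-reads the config
sudo ambari-agent restart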

 

 

Explorer

Thanks for the advice. I changed my ambari-agent.ini file but received the same error:

 

Registering with the server...
Registration with the server failed.

 

I tried to change it:

First:

hostname=master

Second:

hostname=master.hadoop.com

and restarted the ambari-agent, but I receive the same error.

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific

[server]
hostname=master.hadoop.com
url_port=8440
secured_url_port=8441

[agent]
logdir=/var/log/ambari-agent
piddir=/var/run/ambari-agent
prefix=/var/lib/ambari-agent/data
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
data_cleanup_max_size_MB = 100
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true
run_as_user=root
parallel_execution=0
alert_grace_period=5
alert_kinit_timeout=14400000
system_resource_overrides=/etc/resource_overrides
; memory_threshold_soft_mb=400
; memory_threshold_hard_mb=1000

[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
ssl_verify_cert=0

[services]
pidLookupPath=/var/run/

[heartbeat]
state_interval_seconds=60
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
; 0 - unlimited
log_lines_count=300
idle_interval_min=1
idle_interval_max=10


[logging]
syslog_enabled=0

Master Mentor

@CoPen 

 

If you are getting exactly the same error, then you might be making the same mistake again. Can you please share the complete log again with the new error?

Please look again at what we noticed earlier:

 

IOError: Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>

ERROR 2019-09-02 17:01:47,471 Controller.py:213 - Error:Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>

 


Notice the URL "https://192.168.56.101 master master.hadoop.com:8441": it contains the IP address (192.168.56.101), the Ambari server hostname alias (master), and the hostname (master.hadoop.com) all together.

So the URL is definitely incorrect, which is why you see the mentioned error.
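
To double-check what name each host will use, you can run something like the following on the agent and server hosts (a rough sketch; it assumes the agent derives its name from the system FQDN, which is the usual behavior):

# the FQDN the host reports; it should match the /etc/hosts entry
hostname -f

# the name Python (which the agent runs on) resolves
python -c 'import socket; print(socket.getfqdn())'

# the Ambari server FQDN should resolve cleanly from every agent host
getent hosts master.hadoop.com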


Also, since we see the "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed" error, you should also read this article:
https://community.cloudera.com/t5/Community-Articles/Java-Python-Updates-and-Ambari-Agent-TLS-Settin...
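
For reference, the agent-side TLS workaround that article describes usually amounts to the following; treat this as a sketch, since exact file locations can vary by OS and Ambari version:

# in /etc/ambari-agent/conf/ambari-agent.ini, under the [security] section,
# force the agent to use TLSv1.2 when talking to the server:
#   force_https_protocol=PROTOCOL_TLSv1_2
#
# on systems that ship /etc/python/cert-verification.cfg, Python certificate
# verification can be relaxed with:
#   [https]
#   verify=disable
#
# then restart the agent:
sudo ambari-agent restart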

However, the more serious issue is that you are using an Ambari server that is three major releases old (2.4.2).
Is there any specific reason you are running such an old Ambari version? The latest version of Ambari is 2.7.4.
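
If you do decide to upgrade, the rough Ubuntu 16 steps look like this (a sketch based on the standard upgrade procedure; substitute the repo URL for the release you pick, and back up the Ambari database first):

# fetch the repo definition for the newer Ambari release
sudo wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu16/2.x/updates/2.7.4.0/ambari.list

# refresh apt and upgrade the packages
sudo apt-get update
sudo apt-get install ambari-server ambari-agent

# migrate the server schema/config, then restart
sudo ambari-server upgrade
sudo ambari-server start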

 

Explorer

Oh, thanks! I upgraded my Ambari version and replaced my hostname master with master.hadoop.com together!