Server registration failed even with local repositories

New Contributor

One of my servers failed to register. I even tried installing from local repositories hosted on my web server, but the error persists.

This happened on my second try: the installation had almost finished when it failed at the last stage because the Ambari agent was not running on this server.

Here is my entire registration log:

==========================
Creating target directory...
==========================

Command start time 2015-12-30 09:46:55

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:55

==========================
Copying common functions script...
==========================

Command start time 2015-12-30 09:46:55

scp /usr/lib/python2.6/site-packages/ambari_commons
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:56

==========================
Copying OS type check script...
==========================

Command start time 2015-12-30 09:46:56

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:56

==========================
Running OS type check...
==========================

Command start time 2015-12-30 09:46:56
Cluster primary/cluster OS family is ubuntu14 and local/current OS family is ubuntu14

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:56

==========================
Checking 'sudo' package on remote host...
==========================

Command start time 2015-12-30 09:46:56
sudo						install

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:57

==========================
Copying repo file to 'tmp' folder...
==========================

Command start time 2015-12-30 09:46:57

scp /etc/apt/sources.list.d/ambari.list
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:57

==========================
Moving file to repo dir...
==========================

Command start time 2015-12-30 09:46:57

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:57

==========================
Changing permissions for ambari.repo...
==========================

Command start time 2015-12-30 09:46:57

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:58

==========================
Update apt cache of repository...
==========================

Command start time 2015-12-30 09:46:58
0% [Working]
            
Err http://public-repo-1.hortonworks.com Ambari InRelease
  

0% [Working]
            
Err http://public-repo-1.hortonworks.com HDP-UTILS InRelease
  

            
Err http://public-repo-1.hortonworks.com HDP InRelease
  

0% [Working]
            
Err http://public-repo-1.hortonworks.com Ambari Release.gpg
  Could not resolve 'public-repo-1.hortonworks.com'

0% [Working]
            
Err http://public-repo-1.hortonworks.com HDP-UTILS Release.gpg
  Could not resolve 'public-repo-1.hortonworks.com'

0% [Working]
            
Err http://public-repo-1.hortonworks.com HDP Release.gpg
  Could not resolve 'public-repo-1.hortonworks.com'

0% [Working]
            

Reading package lists... 0%

Reading package lists... 0%

Reading package lists... 0%

Reading package lists... 0%

Reading package lists... 0%

Reading package lists... 0%

Reading package lists... 6%

Reading package lists... Done

W: Failed to fetch http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.1.2/dists/Ambari/InRelease  

W: Failed to fetch http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14/dists/HDP-UTILS/InRelease  

W: Failed to fetch http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.3.4.0/dists/HDP/InRelease  

W: Failed to fetch http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.1.2/dists/Ambari/Release.gpg  Could not resolve 'public-repo-1.hortonworks.com'

W: Failed to fetch http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14/dists/HDP-UTILS/Release.gpg  Could not resolve 'public-repo-1.hortonworks.com'

W: Failed to fetch http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.3.4.0/dists/HDP/Release.gpg  Could not resolve 'public-repo-1.hortonworks.com'

W: Some index files failed to download. They have been ignored, or old ones used instead.
W: Duplicate sources.list entry http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.1.2/ Ambari/main amd64 Packages (/var/lib/apt/lists/public-repo-1.hortonworks.com_ambari_ubuntu14_2.x_updates_2.1.2_dists_Ambari_main_binary-amd64_Packages)
W: Duplicate sources.list entry http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.1.2/ Ambari/main i386 Packages (/var/lib/apt/lists/public-repo-1.hortonworks.com_ambari_ubuntu14_2.x_updates_2.1.2_dists_Ambari_main_binary-i386_Packages)

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:59

==========================
Copying setup script file...
==========================

Command start time 2015-12-30 09:46:59

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:46:59

==========================
Running setup agent script...
==========================

Command start time 2015-12-30 09:46:59
('INFO 2015-12-30 09:47:13,707 AlertSchedulerHandler.py:323 - [AlertScheduler] Scheduling datanode_process with UUID bb05c01d-a891-4ea2-aca6-282b919dd96b
INFO 2015-12-30 09:47:13,707 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2015-12-30 09:47:13,707 AlertSchedulerHandler.py:323 - [AlertScheduler] Scheduling ambari_agent_disk_usage with UUID dedb798d-2ed3-44e3-b8df-97d55eb2d84d
INFO 2015-12-30 09:47:13,708 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2015-12-30 09:47:13,708 AlertSchedulerHandler.py:323 - [AlertScheduler] Scheduling zookeeper_server_process with UUID fd4ad98d-35d6-432c-98e3-3a4f5f03a246
INFO 2015-12-30 09:47:13,708 AlertSchedulerHandler.py:134 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x7f6305f02cd0>; currently running: False
INFO 2015-12-30 09:47:13,714 hostname.py:87 - Read public hostname \'slave.dev.local\' using socket.getfqdn()
WARNING 2015-12-30 09:47:13,865 Facter.py:354 - Could not run /usr/sbin/sestatus: OK
INFO 2015-12-30 09:47:14,100 Controller.py:140 - Registering with slave.dev.local (192.168.0.2) (agent=\'{"hardwareProfile": {"kernel": "Linux", "domain": "dev.local", "physicalprocessorcount": 2, "kernelrelease": "3.19.0-42-generic", "uptime_days": "0", "memorytotal": 7918076, "swapfree": "7.75 GB", "memorysize": 7918076, "osfamily": "ubuntu", "swapsize": "7.75 GB", "processorcount": 2, "netmask": "255.255.255.0", "timezone": "CET", "hardwareisa": "x86_64", "memoryfree": 6802196, "operatingsystem": "ubuntu", "kernelmajversion": "3.19", "kernelversion": "3.19.0", "macaddress": "00:19:66:88:A0:DE", "operatingsystemrelease": "14.04", "ipaddress": "192.168.0.2", "hostname": "slave", "uptime_hours": "0", "fqdn": "slave.dev.local", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "3915384", "used": "4", "percent": "1%", "device": "udev", "mountpoint": "/dev", "type": "devtmpfs", "size": "3915388"}, {"available": "790656", "used": "1152", "percent": "1%", "device": "tmpfs", "mountpoint": "/run", "type": "tmpfs", "size": "791808"}, {"available": "900199676", "used": "4656112", "percent": "1%", "device": "/dev/sda1", "mountpoint": "/", "type": "ext4", "size": "953303940"}, {"available": "4", "used": "0", "percent": "0%", "device": "none", "mountpoint": "/sys/fs/cgroup", "type": "tmpfs", "size": "4"}, {"available": "5120", "used": "0", "percent": "0%", "device": "none", "mountpoint": "/run/lock", "type": "tmpfs", "size": "5120"}, {"available": "3958884", "used": "152", "percent": "1%", "device": "none", "mountpoint": "/run/shm", "type": "tmpfs", "size": "3959036"}, {"available": "102368", "used": "32", "percent": "1%", "device": "none", "mountpoint": "/run/user", "type": "tmpfs", "size": "102400"}], "hardwaremodel": "x86_64", "uptime_seconds": "1651", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.1.2", "agentEnv": {"transparentHugePage": "", "hostHealth": 
{"agentTimeStampAtReporting": 1451465234096, "activeJavaProcs": [], "liveServices": [{"status": "Healthy", "name": "ntp", "desc": ""}]}, "reverseLookup": true, "alternatives": [], "umask": "18", "firewallName": "ufw", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}], "existingUsers": [{"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "mahout", "homeDir": "/home/mahout"}], "firewallRunning": false}, "timestamp": 1451465233867, "hostname": "slave.dev.local", "responseId": -1, "publicHostname": "slave.dev.local"}\')
INFO 2015-12-30 09:47:14,101 NetUtil.py:59 - Connecting to https://master.dev.local:8440/connection_info
INFO 2015-12-30 09:47:14,226 security.py:99 - SSL Connect being called.. connecting to the server
INFO 2015-12-30 09:47:14,350 security.py:60 - SSL connection established. Two-way SSL authentication is turned off on the server.
INFO 2015-12-30 09:47:14,711 Controller.py:166 - Registration Successful (response id = 0)
INFO 2015-12-30 09:47:14,711 ClusterConfiguration.py:123 - Updating cached configurations for cluster clusterone
INFO 2015-12-30 09:47:14,740 AmbariConfig.py:260 - Updating config property (agent.auto.cache.update) with value (true)
INFO 2015-12-30 09:47:14,740 AmbariConfig.py:260 - Updating config property (agent.check.remote.mounts) with value (false)
INFO 2015-12-30 09:47:14,741 AmbariConfig.py:260 - Updating config property (agent.check.mounts.timeout) with value (0)
INFO 2015-12-30 09:47:14,787 AlertSchedulerHandler.py:189 - [AlertScheduler] Reschedule Summary: 0 rescheduled, 0 unscheduled
INFO 2015-12-30 09:47:14,787 Controller.py:387 - Registration response from master.dev.local was OK
INFO 2015-12-30 09:47:14,787 Controller.py:392 - Resetting ActionQueue...
', None)

Connection to slave.dev.local closed.
SSH command execution finished
host=slave.dev.local, exitcode=0
Command end time 2015-12-30 09:47:14

Registering with the server...
Registration with the server failed.

Re: Server registration failed even with local repositories

Expert Contributor

Try updating openssl on the server.

Re: Server registration failed even with local repositories

New Contributor

I have the latest version.

Re: Server registration failed even with local repositories

New Contributor

Apparently I am having the same cache problem I had before, but this time apt-get update is not picking up the new repo.

This is the error I am getting:

Reading package lists... Done

W: Failed to fetch http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.2.0.0/dists/Ambari/InRelease  

W: Failed to fetch http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.2.0.0/dists/Ambari/Release.gpg  Could not resolve 'public-repo-1.hortonworks.com'

W: Some index files failed to download. They have been ignored, or old ones used instead.

But I am using the correct repos. Even with local repositories I get the same errors, so something is going on in the Ambari cache. Can anybody help with how to update the Ambari cache so that the repos I enter in the repository options are the ones the server actually uses?
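For anyone hitting the same symptom: apt keeps downloaded index files under /var/lib/apt/lists, keyed by repo URL, and the "Duplicate sources.list entry" warnings above mention exactly those cached files. A hedged sketch of clearing the stale public-repo entries so the next apt-get update rebuilds from whatever ambari.list now points at — demonstrated here against a mock directory; on a real host you would target /var/lib/apt/lists with sudo:

```shell
# Stand-in for /var/lib/apt/lists so the commands can be tried safely.
LISTS_DIR=$(mktemp -d)

# Simulate one stale public-repo index and one local-repo index.
touch "$LISTS_DIR/public-repo-1.hortonworks.com_ambari_ubuntu14_2.x_updates_2.1.2_dists_Ambari_main_binary-amd64_Packages"
touch "$LISTS_DIR/my-local-repo_ambari_dists_Ambari_main_binary-amd64_Packages"

# Remove only the stale public-repo entries, leaving local-repo indexes alone.
rm -f "$LISTS_DIR"/public-repo-1.hortonworks.com_*

ls "$LISTS_DIR"
# On the real host, follow up with:
#   sudo apt-get clean && sudo apt-get update
```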

Re: Server registration failed even with local repositories

Guru

Hi @Jose Antonio Munoz,

Do you have several repo files under /etc/apt/sources.list.d/ containing Ambari repository URLs, or even entries in /etc/apt/sources.list directly?

Try e.g. "grep -R -i hortonworks /etc/apt/sources*" to check for URLs pointing to the official HW repo.

Re: Server registration failed even with local repositories

New Contributor

The grep command returns nothing, so no URLs exist that point to the HW repo. I looked directly into sources.list too.

Re: Server registration failed even with local repositories

@Jose Antonio Munoz

You mentioned that you are using a local repo. You have to update your repo files to point at your local repo server:

http://docs.hortonworks.com/HDPDocuments/Ambari-2....
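As a sketch of what that edit might look like on Ubuntu (where the file is a .list under /etc/apt/sources.list.d/ rather than a yum-style .repo file): the host repo.dev.local and the path below are placeholders for your own mirror, and the example writes to a temp file so it can be tried without touching the real system.

```shell
# Stand-in for /etc/apt/sources.list.d/ambari.list.
AMBARI_LIST=$(mktemp)

# Point the Ambari repo line at your local web server instead of
# public-repo-1.hortonworks.com. "repo.dev.local" is a placeholder.
cat > "$AMBARI_LIST" <<'EOF'
deb http://repo.dev.local/ambari/ubuntu14/2.x/updates/2.1.2 Ambari main
EOF

cat "$AMBARI_LIST"
# On the real host, follow up with:
#   sudo apt-get update
```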

Re: Server registration failed even with local repositories

Mentor

@Jose Antonio Munoz can you accept the best answer or provide your own solution?

Re: Server registration failed even with local repositories

New Contributor

@Jose Antonio Munoz: Stop the Ambari server, edit repoinfo.xml (the location could be /var/lib/ambari-server/resources/..../repos/repoinfo.xml), modify the repository URLs in this file, and start the Ambari server again.
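A hedged sketch of that procedure (the exact repoinfo.xml path depends on your stack version, so it is left elided; repo.dev.local is a placeholder for your local mirror). Demonstrated on a minimal stand-in file so the substitution can be verified safely:

```shell
# Stand-in for the real repoinfo.xml under /var/lib/ambari-server/resources/.
REPOINFO=$(mktemp)
cat > "$REPOINFO" <<'EOF'
<baseurl>http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.3.4.0</baseurl>
EOF

# On the real server: ambari-server stop
# Rewrite the public base URLs to point at the local mirror.
sed -i 's|http://public-repo-1.hortonworks.com|http://repo.dev.local|' "$REPOINFO"
# On the real server: ambari-server start

cat "$REPOINFO"
```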
