OpenSSL error when registering host during installation of Ambari HDP

New Contributor

Hello, I'm new to HDP.
While installing HDP 2.6 using Ambari, an error occurred while registering the hosts (the step that confirms the hostnames and the SSH private key).
I have attached the error message and screenshots.

If the screenshots are not enough to resolve the problem, or if my question needs more detail, please let me know.

Please help. Thank you.

(attachments: 92639-ambari1.jpg, 92640-ambari2.jpg)

Command start time 2018-10-04 09:53:02

INFO 2018-10-04 09:53:10,602 HeartbeatHandlers.py:116 - Stop event received
INFO 2018-10-04 09:53:10,602 NetUtil.py:130 - Stop event received
INFO 2018-10-04 09:53:10,603 ExitHelper.py:56 - Performing cleanup before exiting...
INFO 2018-10-04 09:53:10,603 ExitHelper.py:70 - Cleanup finished, exiting with code:0
INFO 2018-10-04 09:53:11,301 main.py:283 - Agent died gracefully, exiting.
INFO 2018-10-04 09:53:11,302 ExitHelper.py:56 - Performing cleanup before exiting...
INFO 2018-10-04 09:53:12,002 main.py:145 - loglevel=logging.INFO
INFO 2018-10-04 09:53:12,002 main.py:145 - loglevel=logging.INFO
INFO 2018-10-04 09:53:12,002 main.py:145 - loglevel=logging.INFO
INFO 2018-10-04 09:53:12,005 DataCleaner.py:39 - Data cleanup thread started
INFO 2018-10-04 09:53:12,007 DataCleaner.py:120 - Data cleanup started
INFO 2018-10-04 09:53:12,008 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'master.hadoop.com' using socket.getfqdn().
INFO 2018-10-04 09:53:12,031 DataCleaner.py:122 - Data cleanup finished
INFO 2018-10-04 09:53:12,101 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2018-10-04 09:53:12,107 main.py:437 - Connecting to Ambari server at https://master:8440 (10.253.9.66)
INFO 2018-10-04 09:53:12,107 NetUtil.py:70 - Connecting to https://master:8440/ca
ERROR 2018-10-04 09:53:12,111 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-10-04 09:53:12,112 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions. 
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-10-04 09:53:12,112 NetUtil.py:124 - Server at https://master:8440 is not reachable, sleeping for 10 seconds...

Connection to master.hadoop.com closed.
SSH command execution finished
host=master.hadoop.com, exitcode=0
Command end time 2018-10-04 09:53:14

Registering with the server...
Registration with the server failed.

(attachment: ambari3.jpg)

Re: OpenSSL error when registering host during installation of Ambari HDP

Super Mentor

@SeonYeong Oh

You are getting the following error:

ERROR 2018-10-04 09:53:12,111 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)


This is explained here: https://community.hortonworks.com/articles/188269/javapython-updates-and-ambari-agent-tls-settings.h...
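Before changing anything, you can quickly confirm which OpenSSL build is installed on the agent host and whether a forced TLSv1.2 handshake to the Ambari server port succeeds (a diagnostic sketch; the host and port are taken from your log):

# rpm -qa | grep -i openssl
# openssl s_client -connect master:8440 -tls1_2 < /dev/null

If the forced TLSv1.2 handshake succeeds, the agent's Python is most likely just defaulting to an older protocol, which matches the scenario in the linked article and the fix below.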

Following the recommendation in that link, configure the Ambari Agent to use TLSv1.2 when communicating with the Ambari Server: edit each Ambari Agent's /etc/ambari-agent/conf/ambari-agent.ini file, add the following property to the [security] section, and then restart the agent.

[security]
force_https_protocol=PROTOCOL_TLSv1_2
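For example, on each agent host (a minimal sketch, assuming the [security] section already exists in the file and does not yet define this key):

# sed -i '/^\[security\]/a force_https_protocol=PROTOCOL_TLSv1_2' /etc/ambari-agent/conf/ambari-agent.ini
# ambari-agent restart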


Re: OpenSSL error when registering host during installation of Ambari HDP

New Contributor

Sorry, I replied below. Could you check it out?

Re: OpenSSL error when registering host during installation of Ambari HDP

New Contributor

@Jay Kumar SenSharma

Thanks for your reply.

I added force_https_protocol=PROTOCOL_TLSv1_2 to the [security] section as described in the link, but another error occurred. The details are as follows (screenshot and error message attached).

(attachment: 92645-ambari4.jpg)

==========================
Running setup agent script...
==========================

Command start time 2018-10-04 18:43:46
INFO 2018-10-04 18:43:54,535 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2018-10-04 18:43:54,535 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling zookeeper_server_process with UUID fcc22914-a542-4ec3-b5c9-13cc6c49d5e2
INFO 2018-10-04 18:43:54,535 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2018-10-04 18:43:54,535 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling knox_gateway_process with UUID 48fe1bbb-7f19-4df4-b1f0-87c18178184f
INFO 2018-10-04 18:43:54,536 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2018-10-04 18:43:54,536 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling SPARK_JOBHISTORYSERVER_PROCESS with UUID a0533d0a-3de1-4219-bcd7-6fc8a764952b
INFO 2018-10-04 18:43:54,536 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2018-10-04 18:43:54,536 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling ambari_agent_version_select with UUID 86c502ce-7ace-40d6-a36c-8ce7f829ad40
INFO 2018-10-04 18:43:54,536 scheduler.py:287 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
INFO 2018-10-04 18:43:54,536 AlertSchedulerHandler.py:377 - [AlertScheduler] Scheduling ambari_agent_disk_usage with UUID 9a2d038f-7708-4195-8b98-8a742ba6a4e6
INFO 2018-10-04 18:43:54,536 AlertSchedulerHandler.py:175 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x7fc61c1b6950>; currently running: False
INFO 2018-10-04 18:43:56,547 hostname.py:106 - Read public hostname \'master.hadoop.com\' using socket.getfqdn()
INFO 2018-10-04 18:43:56,548 Hardware.py:68 - Initializing host system information.
INFO 2018-10-04 18:43:56,566 Hardware.py:188 - Some mount points were ignored: /dev/shm, /run, /sys/fs/cgroup, /run/user/42, /run/user/0
INFO 2018-10-04 18:43:56,597 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname \'master.hadoop.com\' using socket.getfqdn().
INFO 2018-10-04 18:43:56,603 Facter.py:202 - Directory: \'/etc/resource_overrides\' does not exist - it won\'t be used for gathering system resources.
INFO 2018-10-04 18:43:56,612 Hardware.py:73 - Host system information: {\'kernel\': \'Linux\', \'domain\': \'hadoop.com\', \'physicalprocessorcount\': 8, \'kernelrelease\': \'3.10.0-514.el7.x86_64\', \'uptime_days\': \'0\', \'memorytotal\': 5892408, \'swapfree\': \'5.88 GB\', \'memorysize\': 5892408, \'osfamily\': \'redhat\', \'swapsize\': \'5.88 GB\', \'processorcount\': 8, \'netmask\': \'255.255.255.0\', \'timezone\': \'KST\', \'hardwareisa\': \'x86_64\', \'memoryfree\': 1643992, \'operatingsystem\': \'centos\', \'kernelmajversion\': \'3.10\', \'kernelversion\': \'3.10.0\', \'macaddress\': \'E8:9A:8F:15:EE:21\', \'operatingsystemrelease\': \'7.3.1611\', \'ipaddress\': \'10.253.9.66\', \'hostname\': \'master\', \'uptime_hours\': \'0\', \'fqdn\': \'master.hadoop.com\', \'id\': \'root\', \'architecture\': \'x86_64\', \'selinux\': False, \'mounts\': [{\'available\': \'36467996\', \'used\': \'15935204\', \'percent\': \'31%\', \'device\': \'/dev/sda3\', \'mountpoint\': \'/\', \'type\': \'xfs\', \'size\': \'52403200\'}, {\'available\': \'2930948\', \'used\': \'0\', \'percent\': \'0%\', \'device\': \'devtmpfs\', \'mountpoint\': \'/dev\', \'type\': \'devtmpfs\', \'size\': \'2930948\'}, {\'available\': \'863864\', \'used\': \'174472\', \'percent\': \'17%\', \'device\': \'/dev/sda1\', \'mountpoint\': \'/boot\', \'type\': \'xfs\', \'size\': \'1038336\'}, {\'available\': \'672565756\', \'used\': \'38528\', \'percent\': \'1%\', \'device\': \'/dev/sda5\', \'mountpoint\': \'/home\', \'type\': \'xfs\', \'size\': \'672604284\'}], \'hardwaremodel\': \'x86_64\', \'uptime_seconds\': \'760\', \'interfaces\': \'enp13s0,lo,virbr0\'}
INFO 2018-10-04 18:43:56,728 Controller.py:170 - Registering with master.hadoop.com (10.253.9.66) (agent=\'{"hardwareProfile": {"kernel": "Linux", "domain": "hadoop.com", "physicalprocessorcount": 8, "kernelrelease": "3.10.0-514.el7.x86_64", "uptime_days": "0", "memorytotal": 5892408, "swapfree": "5.88 GB", "memorysize": 5892408, "osfamily": "redhat", "swapsize": "5.88 GB", "processorcount": 8, "netmask": "255.255.255.0", "timezone": "KST", "hardwareisa": "x86_64", "memoryfree": 1643992, "operatingsystem": "centos", "kernelmajversion": "3.10", "kernelversion": "3.10.0", "macaddress": "E8:9A:8F:15:EE:21", "operatingsystemrelease": "7.3.1611", "ipaddress": "10.253.9.66", "hostname": "master", "uptime_hours": "0", "fqdn": "master.hadoop.com", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "36467996", "used": "15935204", "percent": "31%", "device": "/dev/sda3", "mountpoint": "/", "type": "xfs", "size": "52403200"}, {"available": "2930948", "used": "0", "percent": "0%", "device": "devtmpfs", "mountpoint": "/dev", "type": "devtmpfs", "size": "2930948"}, {"available": "863864", "used": "174472", "percent": "17%", "device": "/dev/sda1", "mountpoint": "/boot", "type": "xfs", "size": "1038336"}, {"available": "672565756", "used": "38528", "percent": "1%", "device": "/dev/sda5", "mountpoint": "/home", "type": "xfs", "size": "672604284"}], "hardwaremodel": "x86_64", "uptime_seconds": "760", "interfaces": "enp13s0,lo,virbr0"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.6.1.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1538646236720, "activeJavaProcs": [{"command": "/usr/jdk64/jdk1.8.0_112/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Djava.io.tmpdir=/var/lib/smartsense/hst-server/tmp -Dlog.file.name=hst-server.log -Xms1024m -Xmx2048m -cp /etc/hst/conf:/usr/hdp/share/hst/hst-common/lib/* com.hortonworks.support.tools.server.SupportToolServer", "pid": 3053, "hadoop": false, "user": "root"}], "liveServices": [{"status": "Healthy", "name": "ntpd or chronyd", "desc": ""}]}, "reverseLookup": true, "alternatives": [], "hasUnlimitedJcePolicy": null, "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/mahout"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/accumulo"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/zookeeper"}, {"type": "directory", 
"name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/hadoop-hdfs"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/accumulo"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/zookeeper"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "accumulo", "homeDir": "/home/accumulo"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "mahout", "homeDir": "/home/mahout"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}], "firewallRunning": false}, "timestamp": 1538646236614, "hostname": "master.hadoop.com", "responseId": -1, "publicHostname": "master.hadoop.com"}\')
INFO 2018-10-04 18:43:56,729 NetUtil.py:70 - Connecting to https://master:8440/connection_info
INFO 2018-10-04 18:43:56,826 security.py:93 - SSL Connect being called.. connecting to the server

Connection to master.hadoop.com closed.
SSH command execution finished
host=master.hadoop.com, exitcode=0
Command end time 2018-10-04 18:43:56

Registering with the server...
Registration with the server failed.

Re: OpenSSL error when registering host during installation of Ambari HDP

Super Mentor

@SeonYeong Oh

In your latest log snippet I do not see any error like the "EOF occurred in violation of protocol" one we noticed earlier.

I only see INFO messages in the current logs, which looks normal:

INFO 2018-10-04 18:43:56,729 NetUtil.py:70 - Connecting to https://master:8440/connection_info
INFO 2018-10-04 18:43:56,826 security.py:93 - SSL Connect being called.. connecting to the server

Connection to master.hadoop.com closed.
SSH command execution finished
host=master.hadoop.com, exitcode=0


Can you please let us know what issue/error you are currently facing?

Can you please share the complete "/var/log/ambari-agent/ambari-agent.log" and also the output of the following command:

# hostname -f


Re: OpenSSL error when registering host during installation of Ambari HDP

New Contributor

(attachments: ambari-agent.txt, 92692-ambari6.jpg)

@Jay Kumar SenSharma

I attached a screenshot of the "hostname -f" output and the "/var/log/ambari-agent/ambari-agent.log" file.

Re: OpenSSL error when registering host during installation of Ambari HDP

Super Mentor

@SeonYeong Oh

It looks like the Ambari Agent is running fine and communicating with the Ambari Server, as we can see heartbeat request/response activity in the logs:

INFO 2018-10-05 09:25:53,688 Controller.py:304 - Heartbeat (response id = 1188) with server is running.
INFO 2018-10-05 09:25:53,689 Controller.py:311 - Building heartbeat message
INFO 2018-10-05 09:25:53,693 Heartbeat.py:87 - Adding host info/state to heartbeat message.


However, an Ambari-managed cluster strictly relies on a proper FQDN setup, so the "hostname -f" command must return the fully qualified hostname. If it does not, set the hostname and restart the agent:

# sysctl kernel.hostname=master.hadoop.com
# hostname -f
# ambari-agent restart
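Note that "hostname -f" can only return the FQDN if the name is resolvable, typically via an /etc/hosts entry such as the following (a sketch using the IP address and hostnames from your logs):

# vi /etc/hosts

10.253.9.66   master.hadoop.com   master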


Please refer to the following doc to know more about FQDN setup.

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation-ppc/content/set_the_...

# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=master.hadoop.com

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation-ppc/content/edit_the...
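Also note that your logs show CentOS 7 (operatingsystemrelease 7.3.1611); on RHEL/CentOS 7 the hostname is normally persisted with hostnamectl rather than /etc/sysconfig/network, so setting it as follows may be needed in addition to the steps above:

# hostnamectl set-hostname master.hadoop.com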
