
Installing and configuring a hadoop cluster with Ambari

New Contributor

(attachment: masterambariserver.png)

Hello,

I need some clarification about how to install a Hadoop cluster with Ambari. I have tried several methods, but unfortunately none of them worked properly (I usually end up with all the services failing to start, such as the HDFS NameNode, the Spark server, etc., and lost heartbeats for all the nodes of the cluster).

Let's assume that I want to install a 3-node cluster.

Usually I install the Ambari Server on one node and Ambari Agents on all 3 nodes, so the node hosting the Ambari Server also includes an Ambari Agent as well as master/slave services from the HDP stack (NameNode, DataNode, Spark server, etc.). With this method, the resulting cluster fails completely: the services cannot even start, and heartbeats from all the Ambari Agents are lost.

My questions are as follows :

Do you have any explanation for the failed service starts and lost heartbeats I get with the method described above?

Shall I instead install the Ambari Server on one node fully dedicated to it, so that this node has no Ambari Agent and no HDP services? In the host confirmation step of the Ambari install wizard, I intend not to list this node, but I will include its private SSH key in the host registration table (see ambariagentslavenode.png). The distribution of master/slave services will not take this node into consideration either.

Do you think that with this method I will get better results?

I have included the Ambari Agent and Ambari Server log files from my current installation.

Thanks a lot in advance for your feedback.


Re: Installing and configuring a hadoop cluster with Ambari

Super Mentor

@Haifa Ben Aouicha

From the attached screenshot, we can see that the main problem is that your ambari-agent is not able to connect to the Ambari Server on port 8440.

Every host of an Ambari-managed cluster must have ambari-agent installed on it, and those agents must be able to communicate with the Ambari Server on the default HTTPS ports 8440 & 8441:

8440 => (HTTPS) Handshake port for Ambari Agents to connect to the Ambari Server.
8441 => (HTTPS) Registration and heartbeat port for Ambari Agents to the Ambari Server.

.

So please make sure that each ambari-agent has the correct entries in its "/etc/hosts" file and can resolve the Ambari Server properly by the name "master.hadoop.com" (in your case).
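For reference, a minimal sketch of what consistent "/etc/hosts" entries could look like on all three nodes (the IP addresses and node names below are placeholders for illustration, not taken from this thread):

```shell
# /etc/hosts -- identical on every node of the cluster
10.0.0.10   master.hadoop.com   master    # Ambari Server host
10.0.0.11   node1.hadoop.com    node1
10.0.0.12   node2.hadoop.com    node2
```

Every host, including the server itself, should resolve "master.hadoop.com" to the same reachable address.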

.

Also, please check whether you can connect from the Ambari Agent hosts to the Ambari Server host and ports. If not, please run "service iptables stop" to make sure iptables is stopped on the Ambari Server and that there is no network issue.

From Agent hosts please check the ambari server Port access:

# telnet master.hadoop.com 8440
# telnet master.hadoop.com 8441
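If telnet is not installed on the agent hosts, a bash-only port probe can be sketched like this (the hostname and ports follow this thread's example; adjust as needed):

```shell
# check_port HOST PORT -> exit 0 if a TCP connection succeeds within 2 seconds
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Example usage against the Ambari Server handshake/registration ports:
for port in 8440 8441; do
  if check_port master.hadoop.com "$port"; then
    echo "port $port reachable"
  else
    echo "port $port NOT reachable"
  fi
done
```

This relies on bash's built-in /dev/tcp redirection, so it works even on minimal hosts without telnet or nc.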

.

Also, please check whether the IP address of the Ambari Server shown on line 8 of the image "ambariagentslavenode.png" is correct. In a multi-homed network, one host can have multiple IP addresses, and some of them may not be accessible from remote hosts due to network policies.
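To check for a multi-homed setup, you can compare the addresses configured on the server with the address the agents actually resolve; a quick sketch ("master.hadoop.com" is this thread's example hostname):

```shell
# On the Ambari Server: list every IPv4 address configured on this host.
hostname -I

# On an agent host: check which address the Ambari Server name resolves to,
# and confirm it is one of the addresses listed above.
getent hosts master.hadoop.com || echo "master.hadoop.com does not resolve"
```

If the agents resolve the name to an address that is not reachable from their network, the 8440/8441 connections will fail even though the server is up.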

.

Ambari Agent is a lightweight process, so it can be installed on the Ambari Server host as well. The ambari-agent package also contains some libraries that may be needed by the ambari-server binaries. So even if you do not want to register the Ambari Server host to your HDP cluster, it is better to at least have the "ambari-agent" binaries installed on the Ambari Server host.

Re: Installing and configuring a hadoop cluster with Ambari

Super Mentor

@Haifa Ben Aouicha

Another quick way to check whether the Ambari Agent to Ambari Server communication over HTTPS is working is to use the following simple commands:

# openssl s_client -connect master.hadoop.com:8440
# openssl s_client -connect master.hadoop.com:8441

.


Re: Installing and configuring a hadoop cluster with Ambari

New Contributor

@Jay Kumar SenSharma Hello,

Currently, I have a 3-node cluster with one node including both the Ambari Server and an Ambari Agent, and the other 2 nodes including just an Ambari Agent. Do you think this is a proper choice, or is it better to have a node dedicated only to the Ambari Server (without any Ambari Agent or master/slave services)?

The /etc/hosts file is the same on all 3 nodes of the cluster, and I am aware that port 8441 is not working properly for the Ambari Agents, but I don't know how to fix this issue.

Any suggestions for opening port 8441 properly on the nodes? (I guess this port should be opened on all 3 nodes of the cluster, since I have Ambari Agents on all 3 nodes.)

Thanks a lot in advance for your feedback.

Re: Installing and configuring a hadoop cluster with Ambari

Super Mentor

@Haifa Ben Aouicha

You can install the Ambari Server on a dedicated host and Ambari Agents on the other hosts of the HDP cluster. The Ambari Server host can be outside of the HDP cluster; the HDP cluster nodes need to have Ambari Agents installed on them. You can install your desired HDP services/components on any host that is part of your cluster. So it is completely valid for your Ambari Server to act purely as a management server, with no HDP components installed on it.

.

Regarding the Ambari Server ports: the Ambari Server opens default ports such as 8080 (the client API port), 8440, and 8441. So please check on the Ambari Server host whether these ports are open, and whether iptables is disabled:

On Ambari Server Host:

# netstat -tnlpa | grep 8440
# netstat -tnlpa | grep 8441


# service iptables status
# service iptables stop
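On RHEL/CentOS 7, the firewall is usually firewalld rather than the iptables service; the equivalent checks, sketched with standard firewalld commands (these are not taken from this thread):

```shell
# Check whether firewalld is running, and either stop it (lab setups only)...
systemctl status firewalld
systemctl stop firewalld

# ...or, preferably, keep the firewall up and open just the Ambari ports:
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=8440/tcp
firewall-cmd --permanent --add-port=8441/tcp
firewall-cmd --reload
```

These commands must be run as root on the Ambari Server host.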

.

From the Ambari Agents, try to connect to the Ambari Server on ports 8440 & 8441 using the openssl client to verify whether the connectivity is working:

From Ambari Agent hosts:

# openssl s_client -connect master.hadoop.com:8440
# openssl s_client -connect master.hadoop.com:8441

.

Re: Installing and configuring a hadoop cluster with Ambari

New Contributor
@Jay Kumar SenSharma

I will try this option (Ambari Server host as a management-only server) for my next installation.

Otherwise, for my current installation, below are the different results I got:

[root@node2 xinetd.d]# openssl s_client -connect master.hadoop.com:8440

CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1
---
Certificate chain
 0 s:/C=XX/L=Default City/O=Default Company Ltd
   i:/C=XX/L=Default City/O=Default Company Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
[certificate body omitted]
-----END CERTIFICATE-----
subject=/C=XX/L=Default City/O=Default Company Ltd
issuer=/C=XX/L=Default City/O=Default Company Ltd
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2180 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 5A181F111CDAF77B72E024D4A8ED984010E06D3C4768057738028E565718A5CC
    Session-ID-ctx:
    Master-Key: 92BB6A68AF40DC456930AD2D5E46C617DC1858E021760867190D81543D56C42A06562DF3179FF978E66538D005D19985
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1511530257
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
closed

[root@node2 ~]# openssl s_client -connect master.hadoop.com:8441

CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1
---
Certificate chain
 0 s:/C=XX/L=Default City/O=Default Company Ltd
   i:/C=XX/L=Default City/O=Default Company Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
[certificate body omitted]
-----END CERTIFICATE-----
subject=/C=XX/L=Default City/O=Default Company Ltd
issuer=/C=XX/L=Default City/O=Default Company Ltd
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2180 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 5A18239CC54B7C4C271CE1DDCECF0EDC876B3D67A6F5549D4A649C15AD99A13B
    Session-ID-ctx:
    Master-Key: 816039314AF3AAD93599470C537827C1B313BA3AF0DE5502E98834A4C94DA58ED2A4140D1DB2B5CD3022FB6CAA9A773A
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1511531420
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---

For the Ambari Server host:

[root@master ~]# netstat -tnlpa | grep 8440
tcp        0      0 10.4.98.11:35424        51.15.134.161:8440      TIME_WAIT   -
tcp6       0      0 :::8440                 :::*                    LISTEN      4960/java
tcp6       0      0 10.4.98.11:8440         51.15.134.161:35426     TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.134.161:35420     TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.134.161:35430     TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.216.73:49462      TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.134.161:35434     TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.216.73:49456      TIME_WAIT   -
tcp6       0      0 10.4.98.11:8440         51.15.216.73:49464      TIME_WAIT   -

[root@master ~]# netstat -tnlpa | grep 8441
tcp6       0      0 :::8441                 :::*                    LISTEN      4960/java
tcp6       0      0 10.4.98.11:8441         51.15.216.73:58998      TIME_WAIT   -

Re: Installing and configuring a hadoop cluster with Ambari

New Contributor

Ambari installation and cluster setup. Assume we have 4 nodes: Node1, Node2, Node3, and Node4, and we pick Node1 as our Ambari Server.

These are the installation steps on an RHEL-based system; for Debian and other systems the steps will vary slightly.

  1. Installation of Ambari:

From the Ambari Server node (Node1, as we decided):
