Member since 09-26-2017
16 Posts · 0 Kudos Received · 0 Solutions
06-13-2018
08:07 AM
Hello Community, I have a daily ingestion of data into HDFS. From the data in HDFS I generate Hive external tables partitioned by date. My question is as follows: should I run MSCK REPAIR TABLE tablename after each data ingestion, in which case I would have to run the command every day? Or is running it just once, at table creation, enough? Thanks a lot for your answers. Best regards
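For reference, the two usual approaches look like the sketch below (table and partition names are made up for illustration). MSCK REPAIR only registers partitions that exist in HDFS at the moment it runs, so a single run at table creation will not pick up partitions ingested later:

```sql
-- Option 1: after each daily ingestion, let Hive discover any partition
-- directories it does not yet know about (idempotent; re-running is safe).
MSCK REPAIR TABLE mydb.events;

-- Option 2: register just the newly ingested partition explicitly, which
-- avoids a full directory scan on tables with many partitions.
ALTER TABLE mydb.events ADD IF NOT EXISTS PARTITION (ds='2018-06-13');
```

In short: the metastore has to learn about each new date partition, so either run the repair after every load or make adding the new partition part of the ingestion job itself.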
Tags: Hive
Labels: Apache Hive
11-24-2017
03:44 PM
Hello, I am trying to register a new host for my HDP cluster, but the registration with the Ambari server fails. Below is the log file related to the registration failure. I understand that I have an SSL certificate problem. Could you please confirm that this is the only issue mentioned in this log file, and suggest any ideas for coping with this problem? ==========================
Creating target directory...
==========================
Command start time 2017-11-24 15:26:16
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:17
==========================
Copying ambari sudo script...
==========================
Command start time 2017-11-24 15:26:17
scp /var/lib/ambari-server/ambari-sudo.sh
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:17
==========================
Copying common functions script...
==========================
Command start time 2017-11-24 15:26:17
scp /usr/lib/python2.6/site-packages/ambari_commons
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:17
==========================
Copying OS type check script...
==========================
Command start time 2017-11-24 15:26:17
scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:17
==========================
Running OS type check...
==========================
Command start time 2017-11-24 15:26:17
Cluster primary/cluster OS family is redhat7 and local/current OS family is redhat7
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:18
==========================
Checking 'sudo' package on remote host...
==========================
Command start time 2017-11-24 15:26:18
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:18
==========================
Copying repo file to 'tmp' folder...
==========================
Command start time 2017-11-24 15:26:18
scp /etc/yum.repos.d/ambari.repo
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:18
==========================
Moving file to repo dir...
==========================
Command start time 2017-11-24 15:26:18
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:19
==========================
Changing permissions for ambari.repo...
==========================
Command start time 2017-11-24 15:26:19
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:19
==========================
Copying setup script file...
==========================
Command start time 2017-11-24 15:26:19
scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:19
==========================
Running setup agent script...
==========================
Command start time 2017-11-24 15:26:19
('WARNING 2017-11-24 15:20:50,334 NetUtil.py:116 - Server at https://master:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-11-24 15:20:50,334 HeartbeatHandlers.py:115 - Stop event received
INFO 2017-11-24 15:20:50,335 NetUtil.py:122 - Stop event received
INFO 2017-11-24 15:20:50,335 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-11-24 15:20:50,335 ExitHelper.py:67 - Cleanup finished, exiting with code:0
INFO 2017-11-24 15:20:51,256 main.py:223 - Agent died gracefully, exiting.
INFO 2017-11-24 15:20:51,257 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-11-24 15:26:22,904 main.py:90 - loglevel=logging.INFO
INFO 2017-11-24 15:26:22,905 main.py:90 - loglevel=logging.INFO
INFO 2017-11-24 15:26:22,905 main.py:90 - loglevel=logging.INFO
INFO 2017-11-24 15:26:22,909 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-11-24 15:26:22,914 DataCleaner.py:120 - Data cleanup started
INFO 2017-11-24 15:26:22,916 DataCleaner.py:122 - Data cleanup finished
INFO 2017-11-24 15:26:22,964 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-11-24 15:26:22,969 main.py:349 - Connecting to Ambari server at https://master.hadoop.com:8440 (51.15.134.161)
INFO 2017-11-24 15:26:22,969 NetUtil.py:62 - Connecting to https://master.hadoop.com:8440/ca
ERROR 2017-11-24 15:26:23,141 NetUtil.py:88 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-11-24 15:26:23,142 NetUtil.py:89 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2017-11-24 15:26:23,145 NetUtil.py:116 - Server at https://master.hadoop.com:8440 is not reachable, sleeping for 10 seconds...
', None)
Connection to node2.hadoop.com closed.
SSH command execution finished
host=node2.hadoop.com, exitcode=0
Command end time 2017-11-24 15:26:25
Registering with the server...
Registration with the server failed. Thanks a lot in advance for your help. Regards
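For what it's worth, this CERTIFICATE_VERIFY_FAILED pattern is commonly reported when the agent's Python (2.7.9+) starts verifying the Ambari server's self-signed certificate. Two frequently cited workarounds, to be treated as suggestions to verify against your Ambari version rather than a confirmed fix, are the following edits on the agent host, followed by `ambari-agent restart`:

```ini
; File 1 (agent host): /etc/ambari-agent/conf/ambari-agent.ini
; ensure this line is present under the [security] section:
[security]
force_https_protocol=PROTOCOL_TLSv1_2

; File 2 (RHEL/CentOS 7 agent host): /etc/python/cert-verification.cfg
; disables Python's HTTPS certificate verification system-wide -- a blunt
; instrument, acceptable for a lab cluster, questionable in production.
[https]
verify=disable
```

Either change should let the agent reach `https://master.hadoop.com:8440/ca` again; the second one is broader in effect, so prefer the first if it works for you.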
Labels: Apache Ambari, Apache Hadoop
11-24-2017
01:59 PM
@Jay Kumar SenSharma I will try this option (Ambari server host as a management server only) for my next installation. Otherwise, for my current installation, below are the different results I got:

[root@node2 xinetd.d]# openssl s_client -connect master.hadoop.com:8440
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1
---
Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBATANBgkqhkiG9w0BAQsFADBCMQswCQYDVQQGEwJYWDEV
MBMGA1UEBwwMRGVmYXVsdCBDaXR5MRwwGgYDVQQKDBNEZWZhdWx0IENvbXBhbnkg
THRkMB4XDTE3MTEyMTE0MTk1NloXDTE4MTEyMTE0MTk1NlowQjELMAkGA1UEBhMC
WFgxFTATBgNVBAcMDERlZmF1bHQgQ2l0eTEcMBoGA1UECgwTRGVmYXVsdCBDb21w
YW55IEx0ZDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMEOXHcQHo/X
PLQCkSQxg5d/SO62E7wHeB/7m+lYG8SNflnmrZ23f+SGIPVX2cMKt/kzzPfr/vrs
se6/cgCn6ep2sJk1woXlyyuj1A2QCzU1Lp9kklX1/1SfbMOxlUUVHhYRtMaCepCr
R6sOyNP9TU/o7IcG3Kl648tte1ToeLETep6gDOKYdTCCyPZbH31RsZKjU29SUT5F
fm6NMmJ2MD8wNP7x9XM9v5YtjXNPFVf5J67iTy4y/YBM9UTJtswmimqrS6SZ/Itx
sgDI6l+NMS9Un2p0yej0NVSziaU4W0sBBlnZQw6JumW1gLFSNjYXUxHc2M2zEhku
xUWsodw3OA0q3ay5EG8ixJjE27aby4JGEE7cE0XpacHiZKr/NWLNZvzJ98I47ZoB
ZspoT734Oz4wofwhY2eWP5Xah+cl95GwfmG2oc/t4eOOGPPv6lfxBdjxv0uSf4HE
CgHHsVD5hBYBULsOYbWuKZBsWMfJTTH+8XasUqEBnMpgyUqRwpGOaNmymdMgfdOR
aCQcNm7p22N/JAize8sjSZIA2NS1E6VZv39NRjZNUsd0BeWIjOz4l1tj4v3MtDoS
ZwbjR7xfEC2SboI3VLKcu/VHt9Mii8o+Z1RW8E06jSa9iNcQubzzpnC0li28t2+7
rl8LNnQLOlbziX8mQq18O4cbPYCv8oiRAgMBAAGjgZwwgZkwHQYDVR0OBBYEFKLx
HWO2dMviNEUkrBnXXVo8czp/MGoGA1UdIwRjMGGAFKLxHWO2dMviNEUkrBnXXVo8
czp/oUakRDBCMQswCQYDVQQGEwJYWDEVMBMGA1UEBwwMRGVmYXVsdCBDaXR5MRww
GgYDVQQKDBNEZWZhdWx0IENvbXBhbnkgTHRkggEBMAwGA1UdEwQFMAMBAf8wDQYJ
KoZIhvcNAQELBQADggIBAKx3g0LHqG3KtTSP+l1GxDYkfB/ONVi8KNTA8GKcbQuP
IifFaM1Q6UbDYJ2RguLgveWT5Tv9yD5Qclvh8BGha0mpBDyt2iumOaITMD7YLvlq
/sXc/oaJYBCKc7HNl/+98iV+8gTs/Kbvadq+SzjqDNQi8eXDf2YHC+DGCK4agTd0
L7WmrRDAkKnJyG1OeI413PYifO4a2rxUNvdv81imlUCvBoeot8j7/lee1fWnqlw7
FJdlXWBNwvv1NNerFEOQbnqipaZ7+WYwuXSThcJ6lB5mSspU9U/CF/GwHSpIo9G6
p9Ka/hXlDZ3u2qwFdnAiPK+fg8yoL/Y4YK5PLHkKbI8w/1/0AU6M57njKKPueCyn
5WxoLF1ovNYm/R4jB8uDiktzUqXhFPQ2yrTe/pglSIf5+iSMxib5fr1yE3EeSJd3
dZ+PcNeFNMjQUKvLabXG6xJD/VLzGsrheAFUdpvGopvRLKgIcbpd+VqS/04Cc0Ll
vggS2a0u392+LW4aJ7dymZ3Fep9ucoq/ixGwvsOW0XgwUnTszOAKZOAcLtsZL4xY
TvMZvJUe4rgK444rBcg6Lv5ANrI7O4WZpu5bc/DDtCS34O3XrcP3fFATNte9WQiY
6g5+8JAiwAgHzeLsaSkdILHH+rcjf5VbgQxk6uxDCnRWSlbKYjez/oKUp7E96tVs
-----END CERTIFICATE-----
subject=/C=XX/L=Default City/O=Default Company Ltd
issuer=/C=XX/L=Default City/O=Default Company Ltd
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2180 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 5A181F111CDAF77B72E024D4A8ED984010E06D3C4768057738028E565718A5CC
Session-ID-ctx:
Master-Key: 92BB6A68AF40DC456930AD2D5E46C617DC1858E021760867190D81543D56C42A06562DF3179FF978E66538D005D19985
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1511530257
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
---
closed

[root@node2 ~]# openssl s_client -connect master.hadoop.com:8441
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1
---
Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBATANBgkqhkiG9w0BAQsFADBCMQswCQYDVQQGEwJYWDEV
MBMGA1UEBwwMRGVmYXVsdCBDaXR5MRwwGgYDVQQKDBNEZWZhdWx0IENvbXBhbnkg
THRkMB4XDTE3MTEyMTE0MTk1NloXDTE4MTEyMTE0MTk1NlowQjELMAkGA1UEBhMC
WFgxFTATBgNVBAcMDERlZmF1bHQgQ2l0eTEcMBoGA1UECgwTRGVmYXVsdCBDb21w
YW55IEx0ZDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMEOXHcQHo/X
PLQCkSQxg5d/SO62E7wHeB/7m+lYG8SNflnmrZ23f+SGIPVX2cMKt/kzzPfr/vrs
se6/cgCn6ep2sJk1woXlyyuj1A2QCzU1Lp9kklX1/1SfbMOxlUUVHhYRtMaCepCr
R6sOyNP9TU/o7IcG3Kl648tte1ToeLETep6gDOKYdTCCyPZbH31RsZKjU29SUT5F
fm6NMmJ2MD8wNP7x9XM9v5YtjXNPFVf5J67iTy4y/YBM9UTJtswmimqrS6SZ/Itx
sgDI6l+NMS9Un2p0yej0NVSziaU4W0sBBlnZQw6JumW1gLFSNjYXUxHc2M2zEhku
xUWsodw3OA0q3ay5EG8ixJjE27aby4JGEE7cE0XpacHiZKr/NWLNZvzJ98I47ZoB
ZspoT734Oz4wofwhY2eWP5Xah+cl95GwfmG2oc/t4eOOGPPv6lfxBdjxv0uSf4HE
CgHHsVD5hBYBULsOYbWuKZBsWMfJTTH+8XasUqEBnMpgyUqRwpGOaNmymdMgfdOR
aCQcNm7p22N/JAize8sjSZIA2NS1E6VZv39NRjZNUsd0BeWIjOz4l1tj4v3MtDoS
ZwbjR7xfEC2SboI3VLKcu/VHt9Mii8o+Z1RW8E06jSa9iNcQubzzpnC0li28t2+7
rl8LNnQLOlbziX8mQq18O4cbPYCv8oiRAgMBAAGjgZwwgZkwHQYDVR0OBBYEFKLx
HWO2dMviNEUkrBnXXVo8czp/MGoGA1UdIwRjMGGAFKLxHWO2dMviNEUkrBnXXVo8
czp/oUakRDBCMQswCQYDVQQGEwJYWDEVMBMGA1UEBwwMRGVmYXVsdCBDaXR5MRww
GgYDVQQKDBNEZWZhdWx0IENvbXBhbnkgTHRkggEBMAwGA1UdEwQFMAMBAf8wDQYJ
KoZIhvcNAQELBQADggIBAKx3g0LHqG3KtTSP+l1GxDYkfB/ONVi8KNTA8GKcbQuP
IifFaM1Q6UbDYJ2RguLgveWT5Tv9yD5Qclvh8BGha0mpBDyt2iumOaITMD7YLvlq
/sXc/oaJYBCKc7HNl/+98iV+8gTs/Kbvadq+SzjqDNQi8eXDf2YHC+DGCK4agTd0
L7WmrRDAkKnJyG1OeI413PYifO4a2rxUNvdv81imlUCvBoeot8j7/lee1fWnqlw7
FJdlXWBNwvv1NNerFEOQbnqipaZ7+WYwuXSThcJ6lB5mSspU9U/CF/GwHSpIo9G6
p9Ka/hXlDZ3u2qwFdnAiPK+fg8yoL/Y4YK5PLHkKbI8w/1/0AU6M57njKKPueCyn
5WxoLF1ovNYm/R4jB8uDiktzUqXhFPQ2yrTe/pglSIf5+iSMxib5fr1yE3EeSJd3
dZ+PcNeFNMjQUKvLabXG6xJD/VLzGsrheAFUdpvGopvRLKgIcbpd+VqS/04Cc0Ll
vggS2a0u392+LW4aJ7dymZ3Fep9ucoq/ixGwvsOW0XgwUnTszOAKZOAcLtsZL4xY
TvMZvJUe4rgK444rBcg6Lv5ANrI7O4WZpu5bc/DDtCS34O3XrcP3fFATNte9WQiY
6g5+8JAiwAgHzeLsaSkdILHH+rcjf5VbgQxk6uxDCnRWSlbKYjez/oKUp7E96tVs
-----END CERTIFICATE-----
subject=/C=XX/L=Default City/O=Default Company Ltd
issuer=/C=XX/L=Default City/O=Default Company Ltd
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2180 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 5A18239CC54B7C4C271CE1DDCECF0EDC876B3D67A6F5549D4A649C15AD99A13B
Session-ID-ctx:
Master-Key: 816039314AF3AAD93599470C537827C1B313BA3AF0DE5502E98834A4C94DA58ED2A4140D1DB2B5CD3022FB6CAA9A773A
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1511531420
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
---

For the Ambari server host:
[root@master ~]# netstat -tnlpa | grep 8440
tcp 0 0 10.4.98.11:35424 51.15.134.161:8440 TIME_WAIT -
tcp6 0 0 :::8440 :::* LISTEN 4960/java
tcp6 0 0 10.4.98.11:8440 51.15.134.161:35426 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.134.161:35420 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.134.161:35430 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.216.73:49462 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.134.161:35434 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.216.73:49456 TIME_WAIT -
tcp6 0 0 10.4.98.11:8440 51.15.216.73:49464 TIME_WAIT -
[root@master ~]# netstat -tnlpa | grep 8441
tcp6 0 0 :::8441 :::* LISTEN 4960/java
tcp6 0 0 10.4.98.11:8441 51.15.216.73:58998 TIME_WAIT -
[root@master ~]#
11-24-2017
10:50 AM
@Jay Kumar SenSharma Hello, currently I have a 3-node cluster with one node running both the Ambari server and an Ambari agent, while the other 2 nodes run just an Ambari agent. Do you think this is a proper choice, or is it better to have a node dedicated only to the Ambari server (without any Ambari agent or master/slave services)? The /etc/hosts file is the same on all 3 nodes of the cluster, and I am aware that port 8441 is not working properly for the Ambari agents, but I don't know how to fix this issue. Any suggestions about opening port 8441 properly on the nodes? (I guess this port should be open on all 3 nodes of the cluster, since I have Ambari agents on all 3 nodes.) Thanks a lot in advance for your feedback
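As a quick sanity check before touching any firewall, a small probe like the one below (a sketch; `master.hadoop.com` and the port list are taken from this thread, substitute your own server name) shows whether each agent node can actually reach the server's registration ports. Note that agents make outbound connections to these ports, so they only need to be open on the Ambari server host, not on every node:

```shell
# Probe the Ambari two-way-SSL ports (8440 = CA service, 8441 = agent
# registration/heartbeat) from an agent node, using bash's /dev/tcp device.
for port in 8440 8441; do
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/master.hadoop.com/$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port NOT reachable"
  fi
done
```

If a port is unreachable only from the remote nodes, the server-side firewall is the usual suspect; on RHEL/CentOS 7 that would be something like `firewall-cmd --permanent --add-port=8440-8441/tcp && firewall-cmd --reload` on the Ambari server host (verify against your setup before relying on it).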
11-24-2017
10:24 AM
masterambariserver.png Hello, I need some clarification about the way to install a Hadoop cluster with Ambari. I tried several methods, but unfortunately none of them worked properly (I usually end up with all the services failing to start, such as the HDFS NameNode, Spark server, etc., and heartbeats lost for all the nodes of the cluster). Let's take the hypothesis that I want to install a 3-node cluster. Usually I install the Ambari server on one node and Ambari agents on all 3 nodes, so the node hosting the Ambari server also includes an Ambari agent as well as master/slave services from the HDP stack, such as NameNode, DataNode, Spark server, etc. With this method, the resulting cluster fails completely, since the services can't even start and the heartbeats from all the Ambari agents are lost. My questions are as follows: Do you have any explanation for the service-start/heartbeat failures I got with the method described above? Should I instead install the Ambari server on one node totally dedicated to it, so that this node carries no Ambari agent or HDP services? In the host-confirmation step of the Ambari install wizard I intend not to list this node, but I will include its private SSH key in the table below the host registration. ambariagentslavenode.png The distribution of the master/slave services would not take this node into consideration either. Do you think that with this method I will get better results? I have included the Ambari agent/Ambari server log files for my current installation. Thanks a lot in advance for your feedback
Labels: Apache Ambari
11-22-2017
06:58 PM
Hello @Jay Kumar SenSharma, thanks a lot for your answer. Actually, I installed a 3-node cluster with an Ambari server running on one node (the master node) and Ambari agents on all 3 hosts of the cluster. On the master node (the host where the Ambari server runs), netstat -tnlpa | grep 8441 gives me: tcp6 0 0 :::8441 :::* LISTEN 10983/java. On the 3 hosts, nc -v 51.15.134.161 8441 gives me: Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 51.15.134.161:8441. Finally, find enclosed the 3 log files for the 3 Ambari agents. Otherwise, my whole cluster is not working, since I cannot even start or stop services (these options are not provided). agentmaster.png agentnode2.png agentnode1.png ambarifailing.png Any help with this please; I have to resolve this issue and I feel it will be very difficult, as nothing is working properly. Thanks a lot in advance for your advice
11-21-2017
05:47 PM
Hello, I managed to set up a 3-node cluster using the local repositories for Ambari 2.4.2 and HDP 2.5. The installation of the different services succeeded during step 9 of the Ambari install wizard, but almost all the services failed to start. Once connected to the cluster dashboard via the Ambari web UI on port 8080, I got 31 alerts, with heartbeats lost for all the services and the master processes (such as NameNode, Secondary NameNode, HiveServer, etc.) failing to start. I also got a failure message from the NameNode, which fails to connect to the master node on port 8020. Finally, the start/stop options for the different services are disabled in the Ambari web UI (see the enclosed screenshots). I already stopped the Ambari agents and the Ambari server and restarted them, but this didn't resolve the issues. I also tried to start the NameNode from the CLI, and it didn't work either. Any help with this please. Best regards
11-19-2017
11:32 PM
I am trying to install an HDP cluster using Ambari (public repository). I went through the different steps, but at the last step the DataNode installation fails and I get the following message: stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 374, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 244, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select
stdout:
2017-11-20 00:11:25,437 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-11-20 00:11:25,438 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-20 00:11:25,439 - Group['hdfs'] {}
2017-11-20 00:11:25,441 - Group['hadoop'] {}
2017-11-20 00:11:25,441 - Group['users'] {}
2017-11-20 00:11:25,441 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-11-20 00:11:25,442 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-11-20 00:11:25,443 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-11-20 00:11:25,444 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-11-20 00:11:25,444 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-20 00:11:25,446 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-20 00:11:25,451 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-20 00:11:25,452 - Group['hdfs'] {}
2017-11-20 00:11:25,452 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2017-11-20 00:11:25,453 - FS Type:
2017-11-20 00:11:25,453 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-20 00:11:25,454 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-20 00:11:25,470 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-11-20 00:11:25,494 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-20 00:11:25,495 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-11-20 00:11:25,496 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-11-20 00:11:25,499 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-20 00:11:25,499 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-11-20 00:11:25,507 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-20 00:11:25,594 - Skipping installation of existing package unzip
2017-11-20 00:11:25,594 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-20 00:11:25,606 - Skipping installation of existing package curl
2017-11-20 00:11:25,606 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-20 00:11:25,618 - Skipping installation of existing package hdp-select
Command failed after 1 tries
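The final Fail above ("Unable to query for supported packages using /usr/bin/hdp-select") usually indicates that the hdp-select helper the Ambari scripts call is missing or broken on that node. A small diagnostic sketch, to be run on the failing DataNode host:

```shell
# Check whether the hdp-select helper that Ambari's install scripts rely on
# is present and answering; if not, the stack scripts cannot resolve
# package names and fail exactly as in the traceback above.
if command -v hdp-select >/dev/null 2>&1; then
  hdp-select versions        # should list the 2.6.3.0 build being installed
  hdp-select status | head -n 5
else
  echo "hdp-select not found"
fi
```

If it is missing or errors out, reinstalling the package (e.g. `yum reinstall hdp-select`) and retrying the failed install step is the commonly suggested next move; treat that as a suggestion to verify rather than a guaranteed fix.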
10-27-2017
06:42 PM
Hello, I am facing a special situation. I had some problems with the Ambari server, which failed to start, so I ran: ambari-server reset, then ambari-server setup. Now when I access 127.0.0.1:8080 I get a welcome page asking for the creation of a cluster by launching the installation wizard. I don't get the same view that I had the first time I used the sandbox; please see the enclosed picture for the welcome page I now get in Ambari. When I try to add hosts to the cluster via Ambari (see the pictures below), I get a failure message. ambari2.png ambari3.png Any help with adding hosts? And do you have an explanation for the change in the Ambari web UI after the ambari-server reset/setup actions? Thanks a lot. Regards ambari-problem.png
Labels: Apache Ambari
10-23-2017
05:43 PM
Hello, even though I edited the pg_hba.conf file as mentioned above, I am still asked to enter the PostgreSQL database password while copying the PostgreSQL JDBC driver into the Sqoop client library. I am hesitant about replacing the 0.0.0.0/0 md5 entry with the current sandbox IP. Could you please guide me on resolving this issue? Otherwise, I am using HDP 2.5. Regards
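For reference, pg_hba.conf entries of the shape discussed here look like the following (a generic sketch, not the sandbox's exact file). The ADDRESS column only controls which client addresses a rule matches; the METHOD column controls whether a password is asked for:

```
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   0.0.0.0/0      md5      # any client, password required
host    all       all   127.0.0.1/32   trust    # loopback only, no password
```

So replacing 0.0.0.0/0 with the sandbox IP merely narrows which clients the rule matches; it does not by itself remove the password prompt. Changing md5 to trust on the relevant line (and reloading PostgreSQL) does, at an obvious security cost.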
10-22-2017
12:57 AM
Hello, I installed the latest version of the HDP Sandbox, but I am facing issues while trying to SSH into it. Currently, after the VirtualBox message saying that I can access the web browser on port 8888, I SSH with the command ssh root@127.0.0.1 -p 2222. The problem is that the connection lands me at [root@sandbox-host ~]#; I cannot reach root@sandbox ~. Has anyone faced the same issue? Thanks
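One pattern worth checking, assuming this is the Docker-based sandbox (the sandbox-host prompt suggests it is): the port you connected to lands on the host OS inside the VM, and the sandbox itself runs as a Docker container on that host. From the [root@sandbox-host ~]# prompt, something like the sketch below gets you a shell inside it (the container name varies by sandbox release, so list the names first rather than trusting the one assumed here):

```shell
# From the sandbox-host prompt: list running containers, then open a shell
# in the sandbox container. The name "sandbox" is an assumption -- use
# whatever the first command actually prints.
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}}'
  # docker exec -it sandbox /bin/bash   # substitute the printed name
else
  echo "docker not installed on this machine"
fi
```

If the container is running, `docker exec -it <name> /bin/bash` should give you the root@sandbox prompt you were expecting from SSH.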
Labels: Docker
09-26-2017
04:44 PM
Hello, I spent the whole day trying to launch the Hortonworks Sandbox following several methods, but I didn't succeed. The issue I am facing is that I cannot access the sandbox via SSH, because I get a connection-refused error, and in the web browser I get a connection-error notification. While using the VMware platform I get to this stage, but I cannot go further, as I cannot launch the browser for the different services and I cannot access the sandbox over SSH. Can someone help, please? I already deactivated pop-up blocking, the firewall, etc., but nothing changes