Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2722 | 04-27-2020 03:48 AM |
| | 5283 | 04-26-2020 06:18 PM |
| | 4448 | 04-26-2020 06:05 PM |
| | 3573 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
07-12-2018
11:39 AM
@Raju It looks like you might not have set up the FQDN properly for all your hosts (or the hostname might have changed). Ambari uses the FQDN (hostname) in the principal name, so if your host FQDNs are not set up properly the keytabs might be generated with incorrect principals. Please check whether any of your hosts have recently changed their hostname. Verify the output of the following commands on the different hosts of your cluster, including the problematic host:

# hostname -f
# cat /etc/hosts

Once you fix the hostname, please try to regenerate the keytabs from Ambari UI --> Kerberos --> Regenerate Keytabs. NOTE: Regenerating keytabs will require a whole-cluster restart, hence please find a maintenance window to do that. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-operations/content/how_to_regenerate_keytabs.html

Hadoop relies heavily on DNS, and as such performs many DNS lookups during normal operation. All hosts in your system must be configured for both forward and reverse DNS. If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and fully qualified domain name of each of your hosts. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-installation-ppc/content/check_dns.html
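For reference, a quick way to sanity-check forward and reverse DNS resolution for a host is something like the following; the hostname and IP here are placeholders, so substitute your own values:

# host yourhost.example.com      (forward lookup: the FQDN should resolve to the host's IP)
# host 192.168.0.10              (reverse lookup: the IP should resolve back to the same FQDN)
# hostname -f                    (should print exactly that FQDN)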
07-12-2018
06:57 AM
@Junfeng Chen The error indicates that the "libcurl" version you are using in your Anaconda environment is too old and does not support the "--negotiate" flag of the curl command. Can you please let us know how you checked the "libcurl" and curl versions on your hosts? What version of "libcurl" do you see when you run the following command? By any chance do you have multiple curl versions installed on the host? Example:

# curl -V
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz unix-sockets

Can you try upgrading curl / libcurl on your host? Similar thread: https://community.hortonworks.com/questions/70994/hiveserver2-failing-with-tmptmp23xnha-2tmptmpyf2fo.html
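One way to spot a second, older curl shipped alongside the OS one (for example inside Anaconda) is something like the following; the Anaconda path is only an assumption, adjust it to your installation:

# which -a curl                  (lists every curl found on the PATH)
# rpm -q curl libcurl            (versions installed by the OS package manager on RHEL/CentOS)
# ~/anaconda3/bin/curl -V        (assumed Anaconda location; check its libcurl version and feature list)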
07-12-2018
03:12 AM
@Zyann Please check your DB config, make the changes according to the following, and then restart postgresql:

# grep 'ambari' /var/lib/pgsql/data/pg_hba.conf
local all ambari,mapred md5
host all ambari,mapred 0.0.0.0/0 md5
host all ambari,mapred ::/0 md5

If you still face any issue then please share the following details:
1. /var/lib/pgsql/data/pg_hba.conf
2. Output of the following command:
# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
3. Are you able to log in to the DB using the following command?
# psql -U ambari ambari -h yourdatabase.example.com
Password for user ambari: bigdata
ambari=>
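Note that edits to pg_hba.conf only take effect after PostgreSQL re-reads its configuration; a minimal sketch, assuming a systemd-managed PostgreSQL with Ambari Server on the same host:

# systemctl reload postgresql     (re-reads pg_hba.conf; use "restart" if a reload is not enough)
# ambari-server restart           (restart Ambari once the database is accepting connections again)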
07-12-2018
03:02 AM
@Zyann Can you please check the following files to understand what password Ambari is assuming for the database?

# grep 'server.jdbc.user.passwd' /etc/ambari-server/conf/ambari.properties
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
# cat /etc/ambari-server/conf/password.dat
bigdata

Also please check whether the user is able to access the database from the command line with the mentioned credentials:

# psql -U ambari ambari -h yourdatabase.example.com
Password for user ambari: bigdata

You should also check your Postgres settings to verify that access has been granted to 'ambari':

# grep 'ambari' /var/lib/pgsql/data/pg_hba.conf
local all ambari,mapred md5
host all ambari,mapred 0.0.0.0/0 md5
host all ambari,mapred ::/0 md5
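If the password stored in password.dat no longer matches what PostgreSQL expects, one option is to reset the database user's password so the two agree again; a hedged sketch (the 'bigdata' value is only the example used above, use your own):

# psql -U postgres
postgres=# ALTER USER ambari WITH PASSWORD 'bigdata';
postgres=# \q
# ambari-server restart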
07-12-2018
01:44 AM
1 Kudo
@Steve Kiaie Do you have enough free memory available at the OS level (during the crash)? And are there any additional JVM options set globally?

# free -m
# echo $JAVA_OPTIONS
# echo $HADOOP_OPTS
# echo $HADOOP_CLIENT_OPTS

Also, this error / crash indicates that the OS memory overcommit setting might have been set to 2. Please check the file "/proc/sys/vm/overcommit_memory". This switch knows 3 different settings (0 / 1 / 2):
0: The Linux kernel is free to overcommit memory (this is the default); a heuristic algorithm is applied to figure out if enough memory is available.
1: The Linux kernel will always overcommit memory and never check if enough memory is available. This increases the risk of out-of-memory situations, but also improves memory-intensive workloads.
2: The Linux kernel will not overcommit memory, and will only allocate as much memory as defined in overcommit_ratio.

Please try to change this setting as a superuser:

# echo 0 > /proc/sys/vm/overcommit_memory

Also, for a more detailed analysis, can you please attach the JVM crash file "/home/centos/hs_err_pid4314.log"?
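Keep in mind the echo above only changes the value until the next reboot; a minimal sketch for making it persistent via sysctl, assuming the standard /etc/sysctl.conf location:

# echo "vm.overcommit_memory = 0" >> /etc/sysctl.conf
# sysctl -p                               (reload sysctl settings from /etc/sysctl.conf)
# cat /proc/sys/vm/overcommit_memory      (verify the value is now 0)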
07-11-2018
09:31 AM
@Anurag Mishra Can you please check the "fs.permissions.umask-mode" property value defined inside your cluster? Is this value set differently in your different clusters? Ambari --> HDFS --> Configs --> Advanced --> "Advanced hdfs-site" https://community.hortonworks.com/content/supportkb/48780/how-to-set-a-global-umask-in-hdfs.html https://community.hortonworks.com/articles/62613/umask-vs-hdfs-default-acls.html
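A quick way to compare the effective value across clusters is to query it from a client of each cluster; a minimal sketch (022 is only an example output, not necessarily your value):

# hdfs getconf -confKey fs.permissions.umask-mode
022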
07-11-2018
01:44 AM
@Prakash Punj Please refer to the following article for the cause and exact resolution of this error, which is preventing the ambari-agent from coming up properly:

ERROR 2018-07-11 01:20:43,863 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)

https://community.hortonworks.com/articles/188269/javapython-updates-and-ambari-agent-tls-settings.html
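In many reports of this error the workaround described in that article is to force the agent to use TLSv1.2; a hedged sketch of that change (please verify the exact setting against the article before applying it):

# vi /etc/ambari-agent/conf/ambari-agent.ini
[security]
force_https_protocol=PROTOCOL_TLSv1_2

# ambari-agent restart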
07-10-2018
11:31 PM
@Michael Bronson As per the support matrix of compatible components, yes, you can use it. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_support-matrices/content/ch_matrices-ambari.html
07-10-2018
01:47 PM
@tauqeer khan In HDP 2.5.x, Hive LLAP is in Tech Preview mode. This feature is a technical preview and considered under development; do not use this feature in your production systems. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.6/bk_command-line-installation/content/intro_new_book.html Please see if you can use HDP 2.6.x instead: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_release-notes/content/new_features.html
07-10-2018
10:26 AM
@Anurag Mishra Do you have a valid Kerberos ticket? Please check the output of the following command:

# klist

If you do not have a valid Kerberos ticket, please check whether you have the correct keytab. If yes, then please try to do a kinit, something like this. First check the principal from the keytab:

# klist -kte /etc/security/keytabs/hive.service.keytab

Then do the kinit using that principal and keytab. The following is just an example; you might have your own keytab with your own custom principal name, so you will need to do the kinit accordingly. Example:

# kinit -kt /etc/security/keytabs/hive.service.keytab hive/kamb25101.example.com@EXAMPLE.COM
# klist