Posted 04-02-2020 05:54 PM
The problem was regarding multi-homed host configurations: in our cluster, the hostname and the host FQDN were different. In such environments, it is important to make sure that _HOST in the Hadoop configurations translates to the correct name. This page covers the issue in more detail, but briefly: unless hadoop.security.dns.interface is set, _HOST is by default substituted with InetAddress.getLocalHost().getCanonicalHostName().toLowerCase():

import java.net.InetAddress;

public class CheckHostResolution {
    public static void main(String[] args) {
        try {
            // Print the canonical host name that _HOST resolves to by default
            String s = InetAddress.getLocalHost().getCanonicalHostName();
            System.out.println(s);
        } catch (Exception ex) {
            System.err.println(ex);
        }
    }
}

Using this snippet, you can double-check what _HOST resolves to on a machine; it should match the principal names in the keytabs. In our case, since no DNS interface was set in the configurations, _HOST resolved to the value of /etc/hostname, which was the short form (say, plaza instead of plaza.localdomain.com). However, in the keytabs generated by Ambari, the principals used the FQDN form plaza.localdomain.com. Hence, the fix was simply to update the order of those names in the /etc/hosts file, which is used for resolution. It used to be:

192.168.100.101 plaza plaza.localdomain.com

and the problem was solved by changing it to:

192.168.100.101 plaza.localdomain.com plaza

Cheers.
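To make the effect of that reordering concrete, here is a small self-contained sketch of the behavior. HostSubstitutionDemo and its helper methods are hypothetical illustrations, not Hadoop's actual code; they rely on the standard /etc/hosts convention that the first name after the IP address is the canonical name, and on _HOST being replaced by the lowercased canonical name:

```java
// Hypothetical sketch (not Hadoop internals): shows why the order of
// names on an /etc/hosts line changes what _HOST substitutes to.
public class HostSubstitutionDemo {

    // By /etc/hosts convention, the first name after the IP is canonical.
    static String canonicalNameFromHostsLine(String hostsLine) {
        String[] fields = hostsLine.trim().split("\\s+");
        return fields[1]; // fields[0] is the IP address
    }

    // Mimics the default behavior described above:
    // _HOST is replaced by the lowercased canonical host name.
    static String substituteHost(String principalTemplate, String canonicalName) {
        return principalTemplate.replace("_HOST", canonicalName.toLowerCase());
    }

    public static void main(String[] args) {
        String before = "192.168.100.101 plaza plaza.localdomain.com";
        String after  = "192.168.100.101 plaza.localdomain.com plaza";
        String template = "nn/_HOST@LOCALDOMAIN.COM";

        // prints nn/plaza@LOCALDOMAIN.COM (short name, mismatching the keytab)
        System.out.println(substituteHost(template, canonicalNameFromHostsLine(before)));
        // prints nn/plaza.localdomain.com@LOCALDOMAIN.COM (matches the keytab)
        System.out.println(substituteHost(template, canonicalNameFromHostsLine(after)));
    }
}
```

With the original line ordering the principal is built from the short name and does not match the FQDN-based principal in the Ambari-generated keytab, which is exactly the mismatch the reordering fixes.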