Member since: 05-31-2016
Posts: 12
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2411 | 07-11-2016 06:29 AM
07-14-2016 09:02 AM
@Robert Levas, that explains it just fine. Thanks for all the help, really appreciated.
07-12-2016 09:05 AM
@Robert Levas, thanks for the hint. This actually works! I was afraid that the following kinit (used a lot internally) would go towards the HADOOP.COM realm, based on the domain_realm settings:

[root@hadoop-poc2-01:/etc] kinit host/hadoop-poc2-01.my.hadoop.domain.com

But it actually goes towards the PROD.COM realm:

[root@hadoop-poc2-01:/etc] kinit host/hadoop-poc2-01.my.hadoop.domain.com
Password for host/hadoop-poc2-01.my.hadoop.domain.com@PROD.COM:

Which is good, but I do not understand why it works...
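My understanding of why this works (standard MIT Kerberos behavior, not confirmed in this thread): kinit parses its argument as a principal name and, with no explicit @REALM, simply appends default_realm, which is why the prompt shows @PROD.COM. The [domain_realm] section is consulted when a client maps a service hostname to a realm (e.g. Hadoop building nn/<host>@REALM), not when kinit parses its command line. A sketch of the krb5.conf settings implied by the thread; the actual file will differ:

[libdefaults]
  default_realm = PROD.COM            # appended by kinit when no @REALM is given

[domain_realm]
  # Used for hostname-to-realm mapping when building *service*
  # principals, not for the principal string passed to kinit.
  .my.hadoop.domain.com = HADOOP.COM
  my.hadoop.domain.com = HADOOP.COM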
07-11-2016 11:50 AM
We have several realms in our company and we plan to dedicate one of them to our Hadoop cluster managed by Ambari. Let's say we have:

PROD.COM    # default realm, used by production services
HADOOP.COM  # dedicated to the Hadoop cluster

It is mandatory for us to keep PROD.COM as the default realm in krb5.conf. However, with PROD.COM as the default realm I always get this error after a successful kinit as hdfs:

[root@hadoop-poc2-01:/etc] kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-poc2@HADOOP.COM
[root@hadoop-poc2-01:/etc] hadoop fs -ls /
16/07/11 13:28:39 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/07/11 13:28:40 WARN ipc.Client: Couldn't setup connection for hdfs-poc2@HADOOP.COM to hadoop-poc2-01.int.na.prod.com/172.30.52.136:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]

With the default realm set to HADOOP.COM it just works. Any hints? Thanks
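The usual way to keep PROD.COM as default_realm and still reach services in HADOOP.COM is a [domain_realm] mapping for the cluster's DNS domain, so that clients build service principals (nn/..., dn/...) in the dedicated realm. A hedged sketch using the domain from the log above; the KDC host is a placeholder and the real domains may differ:

[libdefaults]
  default_realm = PROD.COM

[realms]
  HADOOP.COM = {
    kdc = <hadoop-kdc-host>           # placeholder: the HADOOP.COM KDC
    admin_server = <hadoop-kdc-host>  # placeholder
  }

[domain_realm]
  # Map the cluster hosts to the dedicated realm so that service
  # principals for them resolve to HADOOP.COM:
  .int.na.prod.com = HADOOP.COM
  int.na.prod.com = HADOOP.COM

If other hosts in that domain belong to PROD.COM, per-host entries for the cluster nodes may be safer than a whole-domain mapping.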
Labels:
- Apache Ambari
- Apache Hadoop
07-11-2016 06:29 AM
1 Kudo
So the issue was very likely caused by the fact that a reverse lookup of the IP address is performed. We do not have PTR records, and /etc/hosts contained info about the current host only. I have added records for all hosts of the cluster to /etc/hosts and it works now. Please note that I have dfs.namenode.datanode.registration.ip-hostname-check set to false in the custom hdfs-site.xml.
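A minimal sketch of the two pieces described above; the IPs and hostnames are the ones visible elsewhere in this thread, and the remaining cluster nodes would follow the same pattern:

# /etc/hosts on every node of the cluster
172.30.52.136  hadoop-poc2-01.int.na.prod.com  hadoop-poc2-01
172.30.52.137  hadoop-poc2-02.int.na.prod.com  hadoop-poc2-02

<!-- custom hdfs-site.xml -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>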
07-11-2016 06:22 AM
The hostname resolution works fine. However, the issue is very likely in the reverse lookups of IP addresses.
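A quick way to check both directions, assuming the standard bind-utils tools are available on the hosts:

# Forward lookup (works in this case):
host hadoop-poc2-02.int.na.prodxxx.com

# Reverse lookup (fails without PTR records or /etc/hosts entries):
host 172.30.52.137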
07-10-2016 03:22 PM
I have successfully enabled Kerberos on the Ambari managed cluster. I used the wizard to generate the principals and everything. However, the datanodes do not connect to the namenode. The reason is the following:

2016-07-08 16:10:54,753 INFO ipc.Server (Server.java:doRead(891)) - Socket Reader #1 for port 8020: readAndProcess from client 172.30.52.137 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected client Kerberos principal is dn/172.30.52.137@HADOOPXXX.COM]

The namenode expects principals containing IP addresses instead of hostnames... I have checked the keytab and it is generated properly:

Keytab name: FILE:dn.service.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
   1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
   1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
   1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM
   1 dn/hadoop-poc2-02.int.na.prodxxx.com@HADOOPXXX.COM

Any hints?
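For reference, a listing like the one above comes from klist; the five identical principal lines are normal, one per encryption type in the keytab (visible with -e). The keytab path here is an assumption based on the paths used elsewhere in this thread:

[root@hadoop-poc2-02:/etc] klist -kte /etc/security/keytabs/dn.service.keytab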
Labels:
- Apache Ambari
06-14-2016 12:07 PM
This is what I have actually configured, and it is working. Still, I do not have the distributed HBase service deployed on my Ambari managed cluster. So my question is whether running the distributed HBase service on my Ambari managed cluster would bring any benefit to Ambari Metrics in distributed mode.
06-14-2016 11:55 AM
Hi, I have switched Ambari Metrics from "embedded" mode to "distributed" mode and it seems to work well. However, I do not have the HBase service deployed in the cluster, so I assume that Ambari Metrics is using HBase in standalone mode. The question is whether Ambari Metrics can benefit somehow from the HBase service running on my cluster.
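For context, a sketch of what the embedded-to-distributed switch usually involves; the property names follow the Ambari Metrics configuration sections (ams-site and ams-hbase-site), and the HDFS URI is a placeholder:

# ams-site
timeline.metrics.service.operation.mode = distributed

# ams-hbase-site
hbase.cluster.distributed = true
hbase.rootdir = hdfs://<namenode-host>:8020/user/ams/hbase   # placeholder URI

As far as I understand, even in distributed mode Ambari Metrics keeps using its own HBase instance (now backed by HDFS), separate from any HBase service deployed through Ambari.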
Labels:
- Apache HBase
05-31-2016 01:44 PM
Thanks for the pointer, @Jitendra Yadav, I was able to get it working via the config files. However, which one of these options in "ambari-server setup-security" will achieve the same result?

[root@poc3:/etc] ambari-server setup-security
Using python /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
[1] Enable HTTPS for Ambari server.
[2] Encrypt passwords stored in ambari.properties file.
[3] Setup Ambari kerberos JAAS configuration.
[4] Setup truststore.
[5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5):
05-31-2016 01:39 PM
2 Kudos
I was able to get it working.

On the Ambari Server, set these parameters in /etc/ambari-server/conf/ambari.properties:

security.server.two_way_ssl.port=5222
security.server.one_way_ssl.port=5223

On the Ambari Agent, set the "url_port" and "secured_url_port" parameters in the [server] section of /etc/ambari-agent/conf/ambari-agent.ini:

[server]
url_port=5223
secured_url_port=5222

Then restart both the server and the agent(s).
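A quick sanity check after the restart, assuming netstat is available on the server host:

# Verify the Ambari Server is listening on the new ports:
[root@poc3:/etc] netstat -tlnp | grep -E '5222|5223'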