Member since
09-29-2015
362
Posts
242
Kudos Received
63
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1767 | 03-14-2019 01:00 PM
 | 2049 | 01-23-2019 04:19 PM
 | 8456 | 01-15-2019 01:59 PM
 | 6113 | 01-15-2019 01:57 PM
 | 14689 | 12-06-2018 02:01 PM
05-31-2016
02:17 PM
4 Kudos
@Blanca Sanz LDAPS is required when creating principals in an Active Directory. This is because Active Directory will not allow passwords to be set or changed over an insecure channel, and Ambari needs to set or update passwords for the accounts it manages while enabling Kerberos, regenerating keytab files, or disabling Kerberos. If you are not using an Active Directory as both your KDC and your LDAP server - for example, if your KDC is an MIT KDC and your LDAP server is OpenLDAP - then this should not be an issue, and you are welcome to use either LDAP or LDAPS when syncing users and authenticating with Ambari. Also, if you are manually managing your Kerberos identities, then you can still sync Ambari with your Active Directory using LDAP or LDAPS; however, you will be responsible for creating the needed accounts (that is, the principals) and distributing the keytab files.
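One quick way to sanity-check the secure channel before enabling Kerberos is to confirm that the URL you gave Ambari really is LDAPS and that the AD host will complete a TLS handshake. Here is a minimal sketch, assuming a hypothetical host name `ad.example.com`; it only uses the Python standard library and is not part of Ambari itself:

```python
import ssl
import socket
from urllib.parse import urlparse

def is_secure_ldap_url(url: str) -> bool:
    """Return True only for ldaps:// URLs. AD refuses password set/change
    operations over plain ldap://, so Ambari needs the secure scheme."""
    return urlparse(url).scheme == "ldaps"

def check_ldaps(host: str, port: int = 636, timeout: float = 5.0) -> str:
    """Open a TLS connection to the AD LDAPS port and return the
    negotiated protocol version (raises if the handshake fails)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(is_secure_ldap_url("ldaps://ad.example.com:636"))  # True
print(is_secure_ldap_url("ldap://ad.example.com:389"))   # False
# check_ldaps("ad.example.com")  # requires network access to the AD host
```

If `check_ldaps` raises a certificate error, the AD certificate (or its CA) likely needs to be imported into the JVM trust store that Ambari uses.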
05-25-2016
01:53 AM
1 Kudo
@Predrag Minovic If the LDAP server is an Active Directory, you should make sure that the sync settings are similar to what is presented in this example: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Ambari_Security_Guide/content/_example_active_directory_configuration.html
I think the reason you are not getting all of the users you expect is that in an Active Directory, the CN is typically auto-generated from the user's first and last name, while the sAMAccountName is explicitly set to the userid (or username). However, it is possible to manually set the CN to the username, which is probably why you are getting some, but not all, of the expected results.
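The effect is easy to see with a toy example: if the sync filter matches on CN, only users whose CN happens to have been set to the userid are found, while matching on sAMAccountName finds everyone. The attribute values below are hypothetical:

```python
# Toy illustration: why matching usernames against CN misses users whose
# CN was auto-generated as "First Last", while sAMAccountName always
# holds the actual userid.
users = [
    {"cn": "Jane Doe", "sAMAccountName": "jdoe"},
    {"cn": "bsmith",   "sAMAccountName": "bsmith"},  # CN manually set to the userid
]

def match(attr, wanted):
    """Return the users whose given attribute is one of the wanted names."""
    return [u for u in users if u[attr] in wanted]

wanted = {"jdoe", "bsmith"}
print(len(match("cn", wanted)))              # 1 -- only the manually set CN matches
print(len(match("sAMAccountName", wanted)))  # 2 -- both users match
```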
05-10-2016
04:59 PM
I only expect a keytab file to work on the particular host it was distributed to, because each service principal has the hostname where the service is running embedded in its name. So it is not recommended to copy keytab files around. That said, you might want to make sure that the host's name is represented the same way by the different mechanisms for getting it. For example, hostname -f should return the fully qualified domain name (FQDN) of the host, and that should be the same FQDN that was used to register the host with Ambari.
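A small sketch of that consistency check, assuming you know the name the host was registered with in Ambari (the `registered_name` argument is a placeholder you would fill in):

```python
import socket

def fqdn_matches(registered_name: str) -> bool:
    """Compare the OS-reported fully qualified domain name against the
    name the host was registered with in Ambari (case-insensitively,
    since DNS names are case-insensitive)."""
    return socket.getfqdn().lower() == registered_name.lower()

fqdn = socket.getfqdn()
print(fqdn)  # should equal the FQDN used to register this host with Ambari
```

If the two names differ, the principal embedded in the keytab will not match what the service asks for, and authentication will fail.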
05-10-2016
04:54 PM
@Nicola Marangoni The keytabs are only distributed to the hosts on which they are needed, so I do not expect all keytab files to be distributed to all hosts.
05-10-2016
02:01 PM
Technically, if the realm matches in the KDC, the /etc/krb5.conf file, and Ambari, all should work. But I have seen that the MIT Kerberos libraries tend to assume the realm is all uppercase - or maybe it is the internal Hadoop Kerberos logic. You can check the MIT library case by attempting a manual kinit and seeing if it works:
kinit -kt /etc/security/keytabs/rm.service.keytab rm/<res_mgr_host>@hdp23cluster
In any case, I would disable Kerberos in Ambari, rebuild the KDC using the uppercase form of the realm, and then re-enable Kerberos. If it doesn't work after this, we can at least rule out the case-sensitivity issue.
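If you want to scan a list of principals for the lowercase-realm problem before rebuilding anything, a few lines of Python are enough. This is just a convenience check, not anything Kerberos itself provides:

```python
def realm_of(principal: str) -> str:
    """Extract the realm portion of a Kerberos principal
    (everything after the last '@')."""
    return principal.rsplit("@", 1)[1]

def realm_case_ok(principal: str) -> bool:
    """MIT Kerberos conventionally expects an all-uppercase realm."""
    return realm_of(principal).isupper()

print(realm_case_ok("rm/host.example.com@hdp23cluster"))  # False
print(realm_case_ok("rm/host.example.com@HDP23CLUSTER"))  # True
```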
05-10-2016
01:00 PM
@Nicola Marangoni
Unless you changed this when editing the log entry for your post, your realm is incorrect. You have "hdp23cluster" as your realm when it should be all uppercase - "HDP23CLUSTER". To change this, your best bet is to disable Kerberos and then re-enable it with the correct realm.
04-29-2016
01:52 PM
4 Kudos
Here are some additional details to go with what @jramakrishnan posted:
"It will use Active Directory as KDC." If you enabled Kerberos specifying an Active Directory as the KDC, Ambari will configure the underlying Kerberos infrastructure (via the /etc/krb5.conf file) to use that Active Directory for authentication. This includes when a user manually executes kinit from the command line and when services use kinit or some internal mechanism to authenticate. There are options for not directly using the Active Directory for services while still using it for users, if that is desired.
"As soon as the user logs into the system, the AS will generate a TGT and the TGS will issue a ticket with that TGT. (The AS and TGS lie in AD.)" The Active Directory is both the AS (Authentication Service) and the TGS (Ticket Granting Service); together these are also known as the KDC (Key Distribution Center). When a user logs into a non-Windows host, typically no automated routine is performed to authenticate with the KDC (or Active Directory, or AS). This is outside my scope of knowledge (I hope to get some experience with this soon), but I believe there are facilities that do, or can be configured to do, this - SSSD, NSS, PAM. If such a facility is not installed, then users must manually authenticate with the KDC using the kinit command-line utility to set up their credential cache and get their TGT. On a Windows host, if a user logs into a machine connected to a domain, I believe a TGT is granted immediately and is available to applications that know how to use it. For example, I assume that Internet Explorer knows how to use it and will provide a Kerberos ticket if needed by web-based applications.
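The two exchanges described above (AS exchange at login or kinit, then TGS exchange per service) can be sketched as a toy model. All names here are hypothetical, and real Kerberos encrypts every message with session and long-term keys, which this sketch deliberately omits:

```python
# Toy model of the Kerberos ticket flow: the AS exchange trades a
# password proof for a TGT; the TGS exchange trades the TGT for a
# service ticket, with no password involved.
class ToyKDC:
    def __init__(self, accounts):
        self.accounts = accounts  # principal -> password

    def authenticate(self, principal, password):
        """AS exchange (what kinit performs): password proof -> TGT."""
        if self.accounts.get(principal) != password:
            raise PermissionError("pre-authentication failed")
        return ("TGT", principal)

    def grant_service_ticket(self, tgt, service):
        """TGS exchange: a valid TGT -> a ticket for one service."""
        kind, principal = tgt
        if kind != "TGT":
            raise ValueError("not a TGT")
        return ("SERVICE_TICKET", principal, service)

kdc = ToyKDC({"jdoe@EXAMPLE.COM": "secret"})
tgt = kdc.authenticate("jdoe@EXAMPLE.COM", "secret")          # like kinit
tkt = kdc.grant_service_ticket(tgt, "hive/host@EXAMPLE.COM")  # per-service
print(tkt[2])  # hive/host@EXAMPLE.COM
```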
"It will have to create principals and keytabs for all the service users, services, local users, and AD users in this Active Directory. But it is only creating principals and keytabs for service users like hdfs and hive." Ambari will only create the principals and keytab files for the users and services it knows about. These are the internal users like the HDFS administrator (hdfs-xxxx) and the Ambari test user (ambari-qa-xxxx), and service users like hive (hive/_HOST), the DataNode (dn/_HOST), and the ResourceManager (rm/_HOST). However, if only one KDC is in use (many can be used if configured properly), then all users who need access to the Kerberized services will need to be able to authenticate with that KDC and thus will need an account. "Local users" and "AD users" should be the same thing in this scenario, since all users need to be in the Active Directory. Typically, interactive users do not use keytab files: since they are authenticating interactively, they are able to provide their password. A keytab file is essentially a file containing a table of encrypted keys that the Kerberos infrastructure can be instructed to use when attempting to authenticate a user. This is useful in a non-interactive scenario, like when a service is requesting tickets from the KDC.
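Conceptually, a keytab is just that table of keys, indexed by principal, key version number, and encryption type. A toy representation (with fabricated, all-zero key material purely for illustration) makes the idea concrete:

```python
# A keytab is conceptually a table of (principal, kvno, enctype, key)
# entries, letting a process authenticate without typing a password.
# The key bytes below are placeholders, not real key material.
keytab = [
    {"principal": "dn/host-1.example.com@EXAMPLE.COM", "kvno": 1,
     "enctype": "aes256-cts-hmac-sha1-96", "key": b"\x00" * 32},
    {"principal": "dn/host-1.example.com@EXAMPLE.COM", "kvno": 1,
     "enctype": "aes128-cts-hmac-sha1-96", "key": b"\x00" * 16},
]

def keys_for(principal):
    """Entries a client library could use to authenticate as this principal."""
    return [e for e in keytab if e["principal"] == principal]

print(len(keys_for("dn/host-1.example.com@EXAMPLE.COM")))  # 2
```

On a real host, `klist -kt /etc/security/keytabs/dn.service.keytab` lists the same kind of table.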
"The problem which we are facing is that it is not generating keytabs for local Linux users, which is preventing them from using the services even though they have access to those services (we created policies in Ranger)."
As mentioned above, "local users" and "AD users" are the same. So all users need to have accounts in the Active Directory so they can kinit (or otherwise authenticate) with it to set up their ticket cache and get their TGT. Also, interactive users should not get keytabs; keytabs are for non-interactive facilities like the Hadoop services. Interactive users should be forced to present their password when authenticating with the KDC (or Active Directory). This can happen either automatically when logging into the host (if the proper software is installed) or manually using kinit. If you wanted to separate "local users" from "AD users", you could set up a local KDC (an MIT KDC) and establish a trust relationship between that KDC and the Active Directory. Then all of the Hadoop users and services, as well as the local users, can go into the MIT KDC, while the "AD users" remain separated in a clean Active Directory.
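For the split-KDC setup, the client-side /etc/krb5.conf would need to know about both realms. The fragment below is an illustrative sketch with hypothetical realm and host names; establishing the trust additionally requires creating matching krbtgt cross-realm principals (with identical passwords) in both KDCs, which is not shown here:

```
[realms]
  HADOOP.EXAMPLE.COM = {
    kdc = mit-kdc.example.com
    admin_server = mit-kdc.example.com
  }
  AD.EXAMPLE.COM = {
    kdc = ad.example.com
  }

[capaths]
  # Direct path: AD users obtaining tickets for services in the MIT realm
  AD.EXAMPLE.COM = {
    HADOOP.EXAMPLE.COM = .
  }
```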
04-26-2016
03:44 PM
1 Kudo
@ARUNKUMAR RAMASAMY I think I lost track of this issue... sorry about that. Are you still having issues? The version of Ambari shouldn't make a difference here. You should make sure that you can manually connect to the KDC from the command line of the host where Ambari is running. Maybe there is a DNS issue? Make sure the /etc/krb5.conf file points to your KDC, then issue a command like:
kadmin -p <ADMIN PRINCIPAL> -q "get_principal <ADMIN PRINCIPAL>"
For example:
# kadmin -p admin/admin@EXAMPLE.COM -q "get_principal admin/admin@EXAMPLE.COM"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
Password for admin/admin@EXAMPLE.COM:
Principal: admin/admin@EXAMPLE.COM
Expiration date: [never]
Last password change: Mon Apr 25 16:11:27 UTC 2016
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Mon Apr 25 16:11:27 UTC 2016 (root/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 6
Key: vno 1, aes256-cts-hmac-sha1-96, no salt
Key: vno 1, aes128-cts-hmac-sha1-96, no salt
Key: vno 1, des3-cbc-sha1, no salt
Key: vno 1, arcfour-hmac, no salt
Key: vno 1, des-hmac-sha1, no salt
Key: vno 1, des-cbc-md5, no salt
MKey: vno 1
Attributes:
Policy: [none]
If it fails, you might get something like:
# kadmin -p admin/admin@EXAMPLE.COM -q "get_principal admin/admin@EXAMPLE.COM"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
kadmin: Cannot contact any KDC for requested realm while initializing kadmin interface
04-25-2016
01:48 PM
5 Kudos
Just to clarify the answers from @Benjamin Leonhardi...
"How does Kerberos work?" This is a relatively open-ended question. If the answers provided do not answer this, please clarify what about Kerberos you are looking for - as related to Ambari, in general, etc.
"Do we have to integrate it with our existing AD?" You do not need to use your existing Active Directory. However, if you have accounts in there that you want to utilize in the Ambari cluster, then you will want Ambari to integrate with that AD either directly or indirectly (via a trust relationship with an MIT KDC).
"Is this how it is able to identify users?" Ambari itself does not use Kerberos as an authentication mechanism - it uses usernames and passwords. However, most of the services can be configured to use Kerberos to identify users.
"Can we have both AD/LDAP and Kerberos authentication separately?" I believe that only a few services can use LDAP for authentication, where most use Kerberos. So you would probably choose Kerberos over LDAP for access to services. However, access to Ambari can use LDAP and does not use Kerberos for authentication. Therefore, you would probably consider using both if you wanted your users stored in the Active Directory to have access to Ambari and its views.
"There is an option to use the existing AD as the KDC. So does this mean it is using AD authentication?" Active Directory has several interfaces that may be used for authentication. Two of them are LDAP and Kerberos. Both protocols allow for authentication; however, the LDAP interface can also be used to query for additional information such as group membership, email addresses, and (first and last) names. Given this, I am not sure what is meant by "AD authentication"; but the Active Directory, without modification, can be used by Ambari and the services in the cluster for authentication.
"Does the AD (KDC) have to be present on the same machine where I am enabling Kerberos?" The KDC (or Active Directory) does not need to be on the same machine as Ambari or any service. It just needs to be accessible via the network to all hosts in the cluster.
04-21-2016
03:35 PM
6 Kudos
Once Kerberos is enabled, it is possible to get a listing of the expected Kerberos principals and keytab files. This data is typically used when manually managing the Kerberos identities for the Ambari cluster and may be easily downloaded as a CSV file from the Enable Kerberos Wizard in the Ambari UI. After Kerberos is enabled in Ambari, details about the expected Kerberos identities may also be obtained using the following REST API call:
GET /api/v1/clusters/:cluster_name/kerberos_identities
Note: replace :cluster_name with the name of your cluster. The result of this query is a JSON-formatted document containing a high-level listing of the expected identities:
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities",
"items" : [
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/HTTP%2Fhost-1.example.com%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"host_name" : "host-1.example.com",
"principal_name" : "HTTP/host-1.example.com@EXAMPLE.COM"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/ambari-qa-c1%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"host_name" : "host-1.example.com",
"principal_name" : "ambari-qa-c1@EXAMPLE.COM"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/dn%2Fhost-1.example.com%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"host_name" : "host-1.example.com",
"principal_name" : "dn/host-1.example.com@EXAMPLE.COM"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/hdfs-c1%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"host_name" : "host-1.example.com",
"principal_name" : "hdfs-c1@EXAMPLE.COM"
}
},
...
To get more information on each identity, follow the provided URLs in the output. For example, for ambari-qa-c1@EXAMPLE.COM the URL would be:
http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/ambari-qa-c1%40EXAMPLE.COM
which will yield something like:
[
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/ambari-qa-c1%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"description" : "/smokeuser",
"host_name" : "host-1.example.com",
"keytab_file_group" : "hadoop",
"keytab_file_group_access" : "r",
"keytab_file_installed" : "true",
"keytab_file_mode" : "440",
"keytab_file_owner" : "ambari-qa",
"keytab_file_owner_access" : "r",
"keytab_file_path" : "/etc/security/keytabs/smokeuser.headless.keytab",
"principal_local_username" : "ambari-qa",
"principal_name" : "ambari-qa-c1@EXAMPLE.COM",
"principal_type" : "USER"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/ambari-qa-c1%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"description" : "/smokeuser",
"host_name" : "host-2.example.com",
"keytab_file_group" : "hadoop",
"keytab_file_group_access" : "r",
"keytab_file_installed" : "true",
"keytab_file_mode" : "440",
"keytab_file_owner" : "ambari-qa",
"keytab_file_owner_access" : "r",
"keytab_file_path" : "/etc/security/keytabs/smokeuser.headless.keytab",
"principal_local_username" : "ambari-qa",
"principal_name" : "ambari-qa-c1@EXAMPLE.COM",
"principal_type" : "USER"
}
},
...
To get all of the data in one query, indicate that you want all of the field data by appending fields=* to the original query:
GET /api/v1/clusters/:cluster_name/kerberos_identities?fields=*
This will yield something like the following:
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities?fields=*",
"items" : [
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/HTTP%2Fhost-1.example.com%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"description" : "/spnego",
"host_name" : "host-1.example.com",
"keytab_file_group" : "hadoop",
"keytab_file_group_access" : "r",
"keytab_file_installed" : "true",
"keytab_file_mode" : "440",
"keytab_file_owner" : "root",
"keytab_file_owner_access" : "r",
"keytab_file_path" : "/etc/security/keytabs/spnego.service.keytab",
"principal_local_username" : null,
"principal_name" : "HTTP/host-1.example.com@EXAMPLE.COM",
"principal_type" : "SERVICE"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/ambari-qa-c1%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"description" : "/smokeuser",
"host_name" : "host-1.example.com",
"keytab_file_group" : "hadoop",
"keytab_file_group_access" : "r",
"keytab_file_installed" : "true",
"keytab_file_mode" : "440",
"keytab_file_owner" : "ambari-qa",
"keytab_file_owner_access" : "r",
"keytab_file_path" : "/etc/security/keytabs/smokeuser.headless.keytab",
"principal_local_username" : "ambari-qa",
"principal_name" : "ambari-qa-c1@EXAMPLE.COM",
"principal_type" : "USER"
}
},
{
"href" : "http://ambari-server:8080/api/v1/clusters/c1/kerberos_identities/dn%2Fhost-1.example.com%40EXAMPLE.COM",
"KerberosIdentity" : {
"cluster_name" : "c1",
"description" : "datanode_dn",
"host_name" : "host-1.example.com",
"keytab_file_group" : "hadoop",
"keytab_file_group_access" : "",
"keytab_file_installed" : "true",
"keytab_file_mode" : "400",
"keytab_file_owner" : "hdfs",
"keytab_file_owner_access" : "r",
"keytab_file_path" : "/etc/security/keytabs/dn.service.keytab",
"principal_local_username" : "hdfs",
"principal_name" : "dn/host-1.example.com@EXAMPLE.COM",
"principal_type" : "SERVICE"
}
},
...
In many cases, this may be good enough. However, there may be a need to get this data in a different format, especially when using it in a script. In that case, it is possible to retrieve the data in CSV format by appending format=CSV to the query:
GET /api/v1/clusters/:cluster_name/kerberos_identities?fields=*&format=CSV
The CSV-formatted data will look something like:
host,description,principal name,principal type,local username,keytab file path,keytab file owner,keytab file owner access,keytab file group,keytab file group access,keytab file mode,keytab file installed
host-1.example.com,/spnego,HTTP/host-1.example.com@EXAMPLE.COM,SERVICE,,/etc/security/keytabs/spnego.service.keytab,root,r,hadoop,r,440,true
host-1.example.com,/smokeuser,ambari-qa-c1@EXAMPLE.COM,USER,ambari-qa,/etc/security/keytabs/smokeuser.headless.keytab,ambari-qa,r,hadoop,r,440,true
host-1.example.com,datanode_dn,dn/host-1.example.com@EXAMPLE.COM,SERVICE,hdfs,/etc/security/keytabs/dn.service.keytab,hdfs,r,hadoop,,400,true
...
host-1.example.com,hdfs,hdfs-c1@EXAMPLE.COM,USER,hdfs,/etc/security/keytabs/hdfs.headless.keytab,hdfs,r,hadoop,r,440,true
host-1.example.com,namenode_nn,nn/host-1.example.com@EXAMPLE.COM,SERVICE,hdfs,/etc/security/keytabs/nn.service.keytab,hdfs,r,hadoop,,400,true
host-1.example.com,zookeeper_zk,zookeeper/host-1.example.com@EXAMPLE.COM,SERVICE,,/etc/security/keytabs/zk.service.keytab,zookeeper,r,hadoop,,400,true
host-2.example.com,/spnego,HTTP/host-2.example.com@EXAMPLE.COM,SERVICE,,/etc/security/keytabs/spnego.service.keytab,root,r,hadoop,r,440,true
host-2.example.com,/smokeuser,ambari-qa-c1@EXAMPLE.COM,USER,ambari-qa,/etc/security/keytabs/smokeuser.headless.keytab,ambari-qa,r,hadoop,r,440,true
host-2.example.com,datanode_dn,dn/host-2.example.com@EXAMPLE.COM,SERVICE,hdfs,/etc/security/keytabs/dn.service.keytab,hdfs,r,hadoop,,400,true
host-2.example.com,secondary_namenode_nn,nn/host-2.example.com@EXAMPLE.COM,SERVICE,hdfs,/etc/security/keytabs/nn.service.keytab,hdfs,r,hadoop,,400,true
host-2.example.com,zookeeper_zk,zookeeper/host-2.example.com@EXAMPLE.COM,SERVICE,,/etc/security/keytabs/zk.service.keytab,zookeeper,r,hadoop,,400,true
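When consuming this CSV output in a script, the standard library's csv module handles it directly since the first row is a header. A minimal sketch, using two of the sample rows from above as embedded test data:

```python
import csv
import io

# Sample rows as returned by .../kerberos_identities?fields=*&format=CSV
sample = """host,description,principal name,principal type,local username,keytab file path,keytab file owner,keytab file owner access,keytab file group,keytab file group access,keytab file mode,keytab file installed
host-1.example.com,/spnego,HTTP/host-1.example.com@EXAMPLE.COM,SERVICE,,/etc/security/keytabs/spnego.service.keytab,root,r,hadoop,r,440,true
host-1.example.com,/smokeuser,ambari-qa-c1@EXAMPLE.COM,USER,ambari-qa,/etc/security/keytabs/smokeuser.headless.keytab,ambari-qa,r,hadoop,r,440,true
"""

# DictReader keys each row by the header names, e.g. "principal name".
rows = list(csv.DictReader(io.StringIO(sample)))
service_principals = [r["principal name"] for r in rows
                      if r["principal type"] == "SERVICE"]
print(service_principals)  # ['HTTP/host-1.example.com@EXAMPLE.COM']
```

In a real script you would fetch the CSV from the Ambari REST endpoint (with admin credentials) instead of embedding it.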