03-24-2017
11:24 AM
1 Kudo
To manage user roles (aka privileges) through the API, there are several entry points that can be used.

To set an Ambari administrator:

/api/v1/clusters/privileges

Payload:

[
  {
    "PrivilegeInfo": {
      "type": "AMBARI",
      "permission_name": "AMBARI.ADMINISTRATOR",
      "principal_name": "username",
      "principal_type": "USER"
    }
  }
]

Notes:
Change the principal_name (in the payload) value to the relevant username

To set a cluster role:

/api/v1/clusters/:CLUSTER_NAME/privileges

Payload:

[
  {
    "PrivilegeInfo": {
      "permission_name": "PERMISSION_NAME",
      "principal_name": "username",
      "principal_type": "USER"
    }
  }
]

Notes:
Change :CLUSTER_NAME (in the URL) to the relevant cluster's name
Change the permission_name (in the payload) value to the relevant permission name:
  CLUSTER.ADMINISTRATOR
  CLUSTER.OPERATOR
  SERVICE.ADMINISTRATOR
  SERVICE.OPERATOR
  CLUSTER.USER
Change the principal_name (in the payload) value to the relevant username

To give access to a view:

/api/v1/views/:VIEW_TYPE/versions/:VIEW_VERSION/instances/:VIEW_INSTANCE/privileges

Payload:

[
  {
    "PrivilegeInfo": {
      "permission_name": "VIEW.USER",
      "principal_name": "username",
      "principal_type": "USER"
    }
  }
]

Notes:
Change :VIEW_TYPE (in the URL) to the relevant view type (e.g., FILES)
Change :VIEW_VERSION (in the URL) to the relevant view's version (e.g., 1.0.0)
Change :VIEW_INSTANCE (in the URL) to the relevant view instance (e.g., MyFilesView)
Change the principal_name (in the payload) value to the relevant username
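As a sketch, the cluster-role call above can be scripted. The following Python builds the payload described above and sends it with a POST (the server URL, credentials, and helper names here are illustrative assumptions, not a tested client; Ambari also expects an X-Requested-By header on write requests):

```python
import base64
import json
import urllib.request

AMBARI_URL = "http://localhost:8080"  # assumption: adjust for your Ambari server
AUTH = ("admin", "admin")             # assumption: adjust credentials

def cluster_privilege_payload(username, permission_name):
    """Build the privilege payload described above for a cluster role."""
    return [{
        "PrivilegeInfo": {
            "permission_name": permission_name,
            "principal_name": username,
            "principal_type": "USER",
        }
    }]

def grant_cluster_role(cluster_name, username, permission_name):
    """POST the payload to the cluster privileges endpoint (illustrative sketch)."""
    url = f"{AMBARI_URL}/api/v1/clusters/{cluster_name}/privileges"
    body = json.dumps(cluster_privilege_payload(username, permission_name)).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    # Basic auth plus the X-Requested-By header Ambari expects on writes
    token = base64.b64encode(f"{AUTH[0]}:{AUTH[1]}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("X-Requested-By", "ambari")
    return urllib.request.urlopen(req)
```

For example, grant_cluster_role("c1", "jdoe", "CLUSTER.USER") would grant the hypothetical user jdoe the CLUSTER.USER role on cluster c1.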
03-22-2017
09:32 PM
So after a conversation with @lmccay, it appears that my assumption/statement about Ranger is incorrect and therefore a compromised service principal compromises all (relevant) services on all clusters that use the same Kerberos realm. But once again, Kerberos is not an authorization mechanism... it is merely an authentication mechanism. I think the only real solution here is to isolate clusters using different Kerberos realms. This can be done by using a local KDC and realm for the cluster-specific principals and creating a one-way trust with an Active Directory (or centralized KDC) for the user accounts. This has a few benefits over the centralized-only solution, including cluster isolation as well as network traffic isolation and distribution of load on the KDC.
03-22-2017
03:13 PM
1 Kudo
@Roland Simonis I believe that you are confusing authentication with authorization. Kerberos is only an authentication mechanism: it tells who the user is, not what the user can do. In some cases, the lack of a "who" helps with authorization since there is no user to authorize; this is what you are trying to do by not translating certain principal names to local user names. In the scenario you pose, there is a security issue; but I am not sure that I would blame Kerberos or Ambari's configuration of the Kerberos infrastructure for it. I believe that by installing an authorization service, like Ranger, you should be able to protect against unauthorized access to Hive and other services and thus rule out any cross-cluster access issues. If you are looking to proceed with limiting access based on auth-to-local rules, be sure to see Auth-to-local Rules Syntax for information on the syntax of the rules.
03-20-2017
05:45 PM
3 Kudos
Overview

Ambari's Kerberos Descriptor is a JSON-formatted document used to help Ambari enable Kerberos for installed services. The descriptor contains the information needed to create the required principals and keytab files. It also declares the configuration changes needed by the services so they are properly configured for Kerberos.

The Kerberos Descriptor is comprised of the compiled Kerberos descriptors found in the relevant service definitions with user-specified changes applied to it. This combination of data is known as the "Composite Kerberos Descriptor", where the separate parts are known as the "Stack-level Kerberos Descriptor" and the "User-specified Kerberos Descriptor". Each of these descriptors may be obtained from Ambari via its REST API:

GET /api/v1/clusters/{CLUSTER_NAME}/kerberos_descriptors/COMPOSITE
GET /api/v1/clusters/{CLUSTER_NAME}/kerberos_descriptors/STACK
GET /api/v1/clusters/{CLUSTER_NAME}/kerberos_descriptors/USER

NOTE: Be sure to replace {CLUSTER_NAME} with the name of the relevant cluster.

These REST API calls are for informational purposes only and therefore are read-only. Also, this data is available whether Kerberos is enabled or not; however, if Kerberos has not been enabled, the User-specified Kerberos Descriptor will most likely be empty.

The Kerberos Descriptor was designed to favor user-supplied changes over the stack-level defaults, while maintaining forward compatibility in the event the stack definitions change by adding new or updating existing component definitions. Because of this, it is expected that the User-specified Kerberos Descriptor is sparse, containing only the changes that need to be applied on top of the stack-level defaults. However, as of Ambari 2.4.2, when enabling Kerberos via Ambari's Enable Kerberos Wizard, the complete Kerberos Descriptor is stored as the User-specified Kerberos Descriptor.
Storing the entire Kerberos Descriptor as the User-specified Kerberos Descriptor is not necessarily a problem, since the Composite Kerberos Descriptor will still be valid and any additions to the Stack-level Kerberos Descriptor will be realized after Ambari server or stack upgrades. Unfortunately, issues can occur when changes to existing pieces of the Stack-level Kerberos Descriptor are encountered during an upgrade. This is due to ambiguities encountered when upgrading the User-specified Kerberos Descriptor.

If an issue with the User-specified Kerberos Descriptor is encountered, it may be necessary to manually edit it. This can be done by:

1. Getting the descriptor using Ambari's REST API
2. Editing the descriptor using a text editor
3. Putting the updated descriptor using Ambari's REST API

Getting the descriptor using Ambari's REST API

To get the User-specified Kerberos Descriptor, the following REST API call may be issued to Ambari:

GET /api/v1/clusters/{CLUSTER_NAME}/artifacts/kerberos_descriptor

NOTE: Be sure to replace {CLUSTER_NAME} with the name of the relevant cluster.

Notice that the API call accesses the "artifacts" resource of the cluster rather than the "kerberos_descriptors" resource. This is due to the storage implementation of the User-specified Kerberos Descriptor data. If the User-specified Kerberos Descriptor was set, the response will look something like:

{
  "href" : "http://host1.example.com:8080/api/v1/clusters/c1/artifacts/kerberos_descriptor",
  "Artifacts" : {
    "artifact_name" : "kerberos_descriptor",
    "cluster_name" : "c1"
  },
  "artifact_data" : {
    ...
  }
}

The user-specified data will exist under the "artifact_data" section, which was removed for brevity. This API call can be issued using a command-line tool like curl and the output can be stored to a local file. For example:

curl -u admin:admin -X GET -o kerberos_descriptor.json http://localhost:8080/api/v1/clusters/c1/artifacts/kerberos_descriptor

NOTE: The user credentials ("-u admin:admin") and cluster name ("c1") should be changed for the particular cluster.

After the call completes, the User-specified Kerberos Descriptor (with some additional metadata) will be in the file named kerberos_descriptor.json in the local directory.
Editing the descriptor using a text editor

Once the User-specified Kerberos Descriptor has been obtained and stored in a local file, it may be edited using a text editor. Other than any fixes (additions, subtractions, etc.), the following lines in the file must be removed:

"href" : "http://host1.example.com:8080/api/v1/clusters/c1/artifacts/kerberos_descriptor",
"Artifacts" : {
  "artifact_name" : "kerberos_descriptor",
  "cluster_name" : "c1"
},

This is metadata that will cause a failure when attempting to store the updated User-specified Kerberos Descriptor. The resulting JSON document should be something like:

{
  "artifact_data" : {
    ...
  }
}

The user-specified data will exist under the "artifact_data" section, which was removed for brevity. After all needed changes are made, be sure to save the file.
Putting the updated descriptor using Ambari's REST API

After the needed changes are made to the User-specified Kerberos Descriptor, it must be stored in Ambari. This is done by issuing the following API call to Ambari with the changed data as the payload:

PUT /api/v1/clusters/{CLUSTER_NAME}/artifacts/kerberos_descriptor

NOTE: Be sure to replace {CLUSTER_NAME} with the name of the relevant cluster.

This API call can be issued using a command-line tool like curl and the payload can be specified via a local file. For example:

curl -u admin:admin -X PUT -d @kerberos_descriptor.json http://localhost:8080/api/v1/clusters/c1/artifacts/kerberos_descriptor

NOTE: The user credentials ("-u admin:admin") and cluster name ("c1") should be changed for the particular cluster.

After the call completes, the User-specified Kerberos Descriptor stored in the file named kerberos_descriptor.json will be used to update the stored data in the artifact resource. Ambari should realize the changes without restarting.
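The edit step can also be partially automated. As a minimal sketch (the function names here are illustrative; fetching and putting still require the curl calls shown above), the following Python removes the "href" and "Artifacts" response metadata from a downloaded kerberos_descriptor.json so the remainder is safe to PUT back:

```python
import json

# Response metadata keys that must not be sent back in the PUT payload
METADATA_KEYS = ("href", "Artifacts")

def strip_descriptor_metadata(descriptor):
    """Return a copy of a fetched descriptor dict without response metadata."""
    return {k: v for k, v in descriptor.items() if k not in METADATA_KEYS}

def clean_descriptor_file(path):
    """Read a downloaded descriptor file, strip metadata, and rewrite it in place."""
    with open(path) as f:
        descriptor = json.load(f)
    cleaned = strip_descriptor_metadata(descriptor)
    with open(path, "w") as f:
        json.dump(cleaned, f, indent=2)
    return cleaned
```

Running clean_descriptor_file("kerberos_descriptor.json") between the GET and PUT curl commands would leave only the "artifact_data" section (plus any edits made by hand) in the file.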
03-19-2017
12:43 AM
6 Kudos
This is a guide to walk a user through enabling high availability for Oozie in an Ambari-managed cluster in which Kerberos has been enabled. It is assumed that Ambari 2.4.0 (or above) is installed and Kerberos has been previously set up. However, if Kerberos has not yet been set up, the Kerberos-related items can be performed when Kerberos is enabled. Also, it is expected that a database supported by Oozie HA is configured: a database such as MySQL or Oracle needs to have been set up, since the Derby database is not sufficient.
Motivation

Most documentation properly shows the steps to enable Oozie HA; however, when Kerberos is involved, some configuration properties need to be set in an alternate way. According to some documentation, when HA is set up in an environment where Kerberos is enabled, the following properties must be set:

oozie-site/oozie.authentication.kerberos.principal = *
oozie-site/oozie.authentication.kerberos.keytab = /etc/security/keytabs/oozie.ha.keytab

If this is followed in Ambari, when principals and keytab files are created (triggered by various events), Ambari will encounter errors since it will try to create a principal with the name of "*". To avoid this problem, two Ambari-specific properties have been added and should be used to set the values specified above:

oozie-site/oozie.ha.authentication.kerberos.principal = *
oozie-site/oozie.ha.authentication.kerberos.keytab = /etc/security/keytabs/oozie.ha.keytab

Notice the addition of ".ha" in the property names. This leaves the real Oozie properties to be set as:

oozie-site/oozie.authentication.kerberos.principal = HTTP/_HOST@${realm}
oozie-site/oozie.authentication.kerberos.keytab = /etc/security/keytabs/spnego.service.keytab

Because the configurations used when creating principals and keytabs contain valid values, any operation that requires principals to be created will succeed as expected. Then, when the oozie-site.xml file is built by the Ambari agent, the agent-side logic will perform the proper replacement such that the following properties will be written to the file:

<property>
  <name>oozie.authentication.kerberos.principal</name>
  <value>*</value>
</property>
<property>
  <name>oozie.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/oozie.ha.keytab</value>
</property>
Steps to Follow

Given an Ambari-managed cluster where Kerberos is enabled and Oozie is installed, use the following steps to enable high availability for Oozie.

1. Determine which host to use as the load balancer

This host does not need to be part of the Ambari-managed cluster, but needs to be bidirectionally reachable from all hosts in it.

2. Install a load balancer on the selected host

For example, Pen, which is a simple lightweight load balancer. For instructions on how to install Pen, see https://www.server-world.info/en/note?os=CentOS_6&p=pen. If following the instructions from the example above, the following changes should be made to /etc/pen.conf:

PORT needs to be changed from 80 to the port that Oozie is listening on. This is usually 11000.

PORT 11000

The SERVER entries need to point to the Oozie server instances (existing and to be installed):

SERVER1=<OOZIE_SERVER_HOST_1>:11000
SERVER2=<OOZIE_SERVER_HOST_2>:11000

3. Create the SPNEGO principal for the host where the load balancer is installed

Creating the SPNEGO principal is only needed if the load balancer is installed on a host not already used in the Ambari-managed cluster. If the host already has services installed on it, Ambari should have previously created the needed principal.

The value of the SPNEGO principal is in the form of HTTP/<load balancer host FQDN>@<realm>. For example, if the load balancer is installed on a host named load_balancer.example.com and the realm is EXAMPLE.COM, then the principal name is expected to be HTTP/load_balancer.example.com@EXAMPLE.COM.

Creating the principal is done differently depending on the type of KDC (or Active Directory) involved. If using an MIT KDC, the following command may be used:

kadmin -p <admin principal> -q 'add_principal -randkey HTTP/<load balancer host FQDN>@<realm>'

If issuing this command on the same host as the KDC as a user with appropriate privileges, then the following command may be used:

kadmin.local -q 'add_principal -randkey HTTP/<load balancer host FQDN>@<realm>'

4. Create the SPNEGO keytab file for the host where the load balancer is installed

Creating the SPNEGO keytab file is only needed if the load balancer is installed on a host not already used in the Ambari-managed cluster. If the host already has services installed on it, Ambari should have previously created the needed keytab file.

Creating the keytab file is done differently depending on the type of KDC (or Active Directory) involved. If using an MIT KDC, the following command may be used:

kadmin -p <admin principal> -q 'xst -k lb.keytab HTTP/<load balancer host FQDN>@<realm>'

If issuing this command on the same host as the KDC as a user with appropriate privileges, then the following command may be used:

kadmin.local -q 'xst -k lb.keytab HTTP/<load balancer host FQDN>@<realm>'

It does not make a difference where the generated keytab file is stored; however, it should be protected against unauthorized access.

5. Create the Oozie HA keytab file

Gather the SPNEGO keytab files from the relevant hosts. This includes the host where the existing Oozie server is, the hosts where the new Oozie servers are to be installed, and the load balancer host. Care must be taken not to overwrite the different keytab files, since most will have the same name, spnego.service.keytab, but will contain a different set of keys. One way to do this is to create a directory and copy the relevant keytab files into it using unique names, for example, spnego.service.keytab.<source hostname>.

Once all files have been gathered, a composite keytab file may be created using the ktutil command-line utility. This is done by executing ktutil on the command line, which invokes a shell. In the shell, a series of read_kt commands are issued (once for each keytab file), then a write_kt command is executed to create the composite keytab file. For example:

[root@host1 temp_keytabs]# ktutil
ktutil: read_kt spnego.service.keytab.loadbalancer
ktutil: read_kt spnego.service.keytab.host1
ktutil: read_kt spnego.service.keytab.host2
ktutil: write_kt oozie.ha.keytab
ktutil: exit

Note that care should be taken to protect access to the gathered keytab files. After the composite keytab file is properly distributed, the copied keytab files should be deleted.

6. Distribute the Oozie HA keytab file

The composite keytab file, ideally named oozie.ha.keytab, needs to be distributed to each Oozie server host. It does not need to be distributed to the load balancer host, unless that host contains an Oozie server instance. The composite keytab file should be stored as /etc/security/keytabs/oozie.ha.keytab, or in whatever directory is used to store the keytab files. The access control for the file should be set such that only root and the designated Hadoop group (typically hadoop) have read access. For example:

chown root:hadoop /etc/security/keytabs/oozie.ha.keytab
chmod 640 /etc/security/keytabs/oozie.ha.keytab

[root@host1 ~]# ls -l /etc/security/keytabs/oozie.ha.keytab
-rw-r-----. 1 root hadoop 1434 Mar 15 23:38 /etc/security/keytabs/oozie.ha.keytab

7. Add the additional Oozie server instances

Log into Ambari as a user allowed to add new Oozie server instances. This is typically a user with Ambari Administrator, Cluster Administrator, or Cluster Operator privileges. Then browse to the view for each host where a new Oozie server instance is to be installed and select "Oozie Server" from the component "Add" dropdown menu. New Oozie server instances may not be automatically started. This is ok, since the Oozie service will need to be stopped and started later.

8. Update the Oozie-specific configurations to enable HA

Log into Ambari as a user allowed to change service configurations. This is typically a user with Ambari Administrator, Cluster Administrator, Cluster Operator, or Service Administrator privileges. Then browse to the Oozie service configuration page and either add or update the following oozie-site properties. Adding a new oozie-site property may be done by clicking on the "Add Property ..." link under the "Custom oozie-site" section.

Add or update oozie.zookeeper.connection.string to contain the set of Zookeeper hosts and ports:

oozie.zookeeper.connection.string=<zookeeper_host_1>:2181,...,<zookeeper_host_n>:2181

Add or update oozie.services.ext:

oozie.services.ext=org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService

Set the Oozie server base URL to point to the load balancer:

oozie.base.url=http://<loadbalancer.hostname>:11000/oozie

Set the (Ambari-custom) Oozie HA-specific Kerberos principal and keytab properties:

oozie.ha.authentication.kerberos.principal=*
oozie.ha.authentication.kerberos.keytab=/etc/security/keytabs/oozie.ha.keytab

Then, edit the oozie-env "oozie-env template" value. Uncomment the OOZIE_HTTP_HOSTNAME variable and set it to the load balancer host:

export OOZIE_HTTP_HOSTNAME=load_balancer.example.com

Ensure the following line is not commented out:

export OOZIE_BASE_URL="http://${OOZIE_HTTP_HOSTNAME}:${OOZIE_HTTP_PORT}/oozie"

Finally, save the changes by clicking the "Save" button toward the top of the view and specifying a descriptive note like "Enabled Oozie HA".

9. Stop and start the Oozie service

Log into Ambari as a user allowed to start and stop services. This is typically a user with Ambari Administrator, Cluster Administrator, Cluster Operator, Service Administrator, or Service Operator privileges. Then browse to the Oozie service page and select "Stop" from the "Service Actions" dropdown. After the Oozie service successfully stops, select "Start" from the "Service Actions" dropdown. Once the Oozie service has successfully started, test out the configuration by invoking Oozie's service check. This is done by selecting "Run Service Check" from the "Service Actions" dropdown.

10. Locally test accessing Oozie via the load balancer using a Kerberos ticket

On any host in the Ambari-managed cluster:

1. Log in as a user that has a Kerberos identity, for example the Ambari smoke test user:

su - ambari-qa

2. Ensure a valid Kerberos ticket cache is established by kinit-ing:

kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-c1

3. Issue a curl command to access Oozie (using "--negotiate -u:" to indicate Kerberos authentication is to be used):

curl -s -o /dev/null -k --negotiate -u: -w '%{http_code}\n' 'http://load_balancer.example.com:11000/oozie/?user.name=oozie'

If 200 is returned, the test was a success. If 401 was returned, there was an issue with Kerberos authentication.

Example:

[root@host1 ~]# su - ambari-qa
[ambari-qa@host1 ~]$ kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-c1
[ambari-qa@host1 ~]$ curl -s -o /dev/null -k --negotiate -u: -w '%{http_code}\n' 'http://load_balancer.example.com:11000/oozie/?user.name=oozie'
200
03-18-2017
11:46 AM
@abhil sam, There are several ways to do this. The easiest is to take a look at the kdc.conf file, usually at /var/kerberos/krb5kdc/kdc.conf. In this file you will see a block that contains something like the following:

[kdcdefaults]
  kdc_ports = 88
  kdc_tcp_ports = 88

However, it may not have both properties (kdc_ports, kdc_tcp_ports). If it has both, then the KDC is listening on both UDP and TCP sockets on the specified port(s). If it only has kdc_ports, then it is listening on UDP only. If it only has kdc_tcp_ports, then it is listening on TCP only.

Another way is to use the netcat (nc) utility:

TCP: nc -vz -t hostname 88
UDP: nc -vz -u hostname 88

You seem to need to use the actual hostname or FQDN for the UDP socket test to work; I tried localhost and it didn't work. Examples:

[root@my_hostname ~]# nc -vz -u my_hostname 88
Connection to my_hostname 88 port [udp/kerberos] succeeded!
[root@my_hostname ~]# nc -vz -t my_hostname 88
Connection to my_hostname 88 port [tcp/kerberos] succeeded!
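The TCP half of the netcat check can also be done from Python's standard library. This is a minimal sketch (the function name is illustrative); the UDP case is omitted because UDP is connectionless, so a probe must actually send a packet and wait for a response or timeout, which is what nc -u does:

```python
import socket

def kdc_tcp_reachable(host, port=88, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, host unreachable, and timeouts
        return False
```

For example, kdc_tcp_reachable("kdc.example.com") checks whether the KDC accepts TCP connections on port 88.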
03-15-2017
11:25 PM
"Enable case insensitive username rule" is related to how principal names are translated into local usernames. This happens after the Kerberos authentication process and helps convert uppercase characters in principal names to lowercase characters, which may be needed when Active Directory is involved. If the Active Directory was created with a lowercase realm/domain name, it is unlikely that authentication and/or validation attempts from the Hadoop cluster will work. However, I think it may be possible if the Active Directory is on a Windows 2008 server.
03-15-2017
05:34 PM
1 Kudo
@Sedat Kestepe The issue is with your realm name - hadoopad.local. Realm names should be in all uppercase characters in both the client (Ambari) configuration as well as on the server (AD, MIT KDC, etc.). So the realm name should be HADOOPAD.LOCAL. If the Active Directory was not set up with the uppercase form, it will need to be fixed.
03-11-2017
04:04 PM
Make sure your realm name is all uppercase characters: hdfs-hadoopprod@Hortonworks.com should really be hdfs-hadoopprod@HORTONWORKS.COM. Also, the default settings are for the headless/user principal names to include the cluster name. If you choose to stay with this, make sure the clusters have unique names. However, you are welcome to change the unique value for these principal names to anything that avoids collisions. If principal names are the same in multiple Ambari-managed clusters using the same KDC, one instance of Ambari will wind up changing the passwords out from under the other instances. This will invalidate the keytab files installed on the hosts and break the clusters.
03-05-2017
02:21 PM
This should have been automatically created for you if you entered CHRSV@COM in the "Additional Realms" box on the Configure Identities page of the Enable Kerberos Wizard. Assuming that you didn't do this, how was the krb5.conf file set up to acknowledge the trusted realm?