Member since: 06-20-2016
Posts: 251
Kudos Received: 196
Solutions: 36
03-27-2018
01:24 PM
7 Kudos
Now that we have the baseline configuration in place for KnoxSSO, as documented in part I of the series, we are ready to configure single sign-on for Ranger. This is a nice convenience for administrators who live in both Ambari and Ranger as part of their daily platform activities, and it is also a security improvement relative to local passwords.

To configure Ranger for KnoxSSO, we'll need the public key for Knox. Recall that there are a few ways to obtain this; we'll use the following:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.crt

You'll want to copy the base64 data between the BEGIN CERTIFICATE and END CERTIFICATE lines.

We're now ready to configure Ranger. All we need to configure is the Knox SSO provider URL and the SSO public key. The SSO public key is the certificate data we just copied, and the SSO provider URL is the URL we configured in part I that corresponds to the Knox SSO topology.

Now let's try to log in to Ranger using the Quick Link from Ambari. You should be seamlessly logged in as the user that authenticated to the IdP!
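The copy-and-strip step above can be scripted. A minimal sketch follows; it generates a throwaway self-signed certificate as a stand-in for the real Knox certificate (which is environment-specific), then extracts the bare base64 body that the Ranger SSO public key field expects.

```shell
# Generate a throwaway self-signed cert as a stand-in for /tmp/knox.crt.
# In a real environment, use the cert exported from Knox instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/knox-demo.crt -days 1 -subj "/CN=knox.example.com" 2>/dev/null

# Strip the BEGIN/END lines and join the base64 body into the single
# string to paste into Ranger's SSO public key field.
SSO_PUBLIC_KEY=$(grep -v 'CERTIFICATE' /tmp/knox-demo.crt | tr -d '\n')
echo "$SSO_PUBLIC_KEY"
```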
02-11-2018
11:37 PM
7 Kudos
The HDF 3.1 release supports single sign-on to NiFi using KnoxSSO. This article assumes you've already completed setting up KnoxSSO, as discussed in part I and part II of this series. We'll further assume that NiFi has been configured for baseline security, as documented in this HCC article.

Once the websso topology has been defined in your Knox configuration, the steps to make NiFi a participating application in KnoxSSO are straightforward. A couple of notes:

1) Make sure nifi.security.user.login.identity.provider is blank, since you'll be using the KnoxSSO topology's authentication method--i.e., a JWT-based federation provider--to gain access to the NiFi UI.

2) Make sure the value for knoxsso.token.ttl is reasonable; the default is 30000 ms, or 30 s. A larger value like 36000000--or 10 hours--likely makes sense for production environments.

We'll need to grab the Knox server's public key in order to configure NiFi as a participating application. You can use this snippet, where ${knoxserver} is the host running the Knox Gateway:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.pem

You can then copy the knox.pem file that gets created in /tmp to the NiFi host(s) that require this public key to verify the validity of the token signed by Knox. For this example, we'll copy knox.pem to /usr/hdf/current/nifi/conf on the NiFi host(s).

We are now ready to configure NiFi; there are only three properties that are required. Please note that we should replace the nifi.security.user.knox.url value with the KnoxSSO URL specific to our environment.
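As a sketch of what those three properties look like in nifi.properties (the values below are hypothetical placeholders; the property names follow the NiFi Admin Guide's Knox section, so confirm them against your release):

```properties
# Hypothetical values -- substitute your Knox host and the path where
# knox.pem was copied on the NiFi host(s).
nifi.security.user.knox.url=https://knox.example.com:8443/gateway/knoxsso/api/v1/websso
nifi.security.user.knox.publicKey=/usr/hdf/current/nifi/conf/knox.pem
nifi.security.user.knox.cookieName=hadoop-jwt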
01-26-2018
02:39 PM
@Sankaru Thumuluru you need to replace FQDN with the fully-qualified domain name of the host running the Ranger Admin service in your environment. Your curl command is returning a 404 because http://<FQDN>:6080/service/public/v2/api/servicedef/name/hive is not a valid URL until the placeholder is replaced.
11-28-2017
10:47 PM
5 Kudos
In part I of this series, we reviewed preliminaries related to SSO, including LDAP authentication for Ambari, and we set up an application in Okta that corresponds to our KnoxSSO service provider for the SAML authentication flow. We are now ready to configure Knox within Ambari.

We will replace the form-based IdP configuration that Knox ships with out of the box with the pac4j federation provider. Pac4j is a Java security library that is used as a federation provider within Knox to support the OAuth, CAS, SAML, and OpenID Connect protocols. It must be used for SSO in association with the KnoxSSO service (and optionally with the SSOCookieProvider for access to REST APIs).

In Ambari, we'll navigate to Knox > Config > Advanced knoxsso-topology and add XML similar to the following:

<topology>
<gateway>
<provider>
<role>federation</role>
<name>pac4j</name>
<enabled>true</enabled>
<param>
<name>pac4j.callbackUrl</name>
<value>https://sslka-123-master2-1.field.hortonworks.com:8443/gateway/knoxsso/api/v1/websso</value>
</param>
<param>
<name>clientName</name>
<value>SAML2Client</value>
</param>
<param>
<name>saml.identityProviderMetadataPath</name>
<value>https://dev-999.oktapreview.com/app/redacted/sso/saml/metadata</value>
</param>
<param>
<name>saml.serviceProviderMetadataPath</name>
<value>/tmp/sp-metadata.xml</value>
</param>
<param>
<name>saml.serviceProviderEntityId</name>
<value>https://sslka-123-master2-1.field.hortonworks.com:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
<param>
<name>principal.mapping</name>
<value>slachterman@hortonworks.com=slachterman;</value>
</param>
</provider>
</gateway>
<service>
<role>KNOXSSO</role>
<param>
<name>knoxsso.cookie.secure.only</name>
<value>true</value>
</param>
<param>
<name>knoxsso.token.ttl</name>
<value>30000</value>
</param>
<param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https:\/\/(knox-host-fqdn\.example\.com|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
</param>
</service>
</topology>
A couple of things to note here:
- The callback URL for KnoxSSO is the WebSSO endpoint without the URL parameter.
- The identity provider metadata path points to the metadata for the Okta application you configured in part I. You can find this URL in Okta, within your Application configuration on the Sign On tab. See Identity Provider metadata in the screenshot below (under the View Setup Instructions button).
- The service provider metadata path is a dummy location; it can't be null or you will see an exception in the current release.
- The principal mapping converts the username that Okta presents to the shortname used throughout HDP. A regex mapping is also available and may be more manageable for production.
- Note the knoxsso.cookie.secure.only setting. It's important that the session cookie only be served over HTTPS to protect against session hijacking.
- The knoxsso.redirect.whitelist.regex must match your environment; knox-host-fqdn.example.com is a dummy placeholder in the topology file above.

After Knox has been configured, the last step is to configure Ambari for SSO, and we'll do this from the command line. You'll need to SSH to the host where Ambari Server is running. As a preliminary step, you'll need the public certificate for Knox. You can use this snippet, where ${knoxserver} is the host running the Knox Gateway:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.crt

To configure Ambari, we'll run:

sudo ambari-server setup-sso

For the provider URL, enter your callback URL:

https://<KNOX_HOST>:<KNOX_PORT>/gateway/knoxsso/api/v1/websso

Then paste the public certificate without the header and footer (don't include the BEGIN CERTIFICATE or END CERTIFICATE lines). You can accept all other defaults. Finally, you'll need to run:

sudo ambari-server restart

You should now be all set! After saving the above Knox configuration in Ambari and restarting Knox and any required services, try navigating to Ambari to test. If everything has been set up correctly, you'll be logged in to Ambari after authenticating to Okta.

In future posts in this series, we'll take a look at the form-based IdP that's included with KnoxSSO, setting up SSO for Ranger, and other topics!
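The redirect whitelist regex can be sanity-checked from a shell before restarting Knox. A rough sketch using grep with a slightly simplified version of the regex (hostnames are placeholders; note the Java-style \/ escapes are dropped for grep's ERE syntax):

```shell
# Simplified redirect whitelist regex -- knox-host-fqdn.example.com is a
# placeholder; substitute your own FQDN.
REGEX='^https://(knox-host-fqdn\.example\.com|localhost|127\.0\.0\.1|::1):[0-9].*$'

# A redirect to the whitelisted host should match...
echo "https://knox-host-fqdn.example.com:8443/gateway" | grep -qE "$REGEX" \
  && echo "allowed"

# ...while an unexpected host should not.
echo "https://evil.example.com:8443/gateway" | grep -qE "$REGEX" \
  || echo "blocked"
```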
11-28-2017
10:20 PM
6 Kudos
Part II can be found here.

There are many web UIs--think Ambari, Ranger, etc.--in the HDP platform that require user authentication, and traditionally every UI required a separate login. With the federation capabilities provided by Apache Knox, the HDP platform can support federated identity and a single sign-on experience for users. In particular, the flexibility of the Knox authentication and federation providers allows KnoxSSO to support generalized authentication events, via exchange of a common JWT-based token. Without the token exchange capabilities offered by KnoxSSO, each UI would need to integrate with each desired IAM solution on its own.

KnoxSSO comes with its own form-based IdP, which allows for easily integrating a form-based login with the enterprise AD/LDAP server. We will cover that in a future article; at present we'll focus on the integration with Okta using SAML 2.0. You can find more information regarding SAML on Okta's website: https://developer.okta.com/standards/SAML/

This architecture is an example of federated identity. When a user navigates to the Ambari UI, they will be redirected to Okta to authenticate. After authenticating with their Okta credentials (and possibly an MFA mechanism), or if they've already authenticated to Okta and their session is still valid (session expiration is usually 12-18 hours), they will be redirected back to Ambari and transparently logged in to the application.

You can sign up for an Okta dev instance at https://developer.okta.com/ You will create a new application in Okta which will contain the required endpoints and metadata for verifying login requests and redirecting users back to the KnoxSSO endpoint after successful authentication.
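To make the "common JWT-based token" concrete: a KnoxSSO token is a standard JWT, so its payload is just base64url-encoded JSON and can be inspected from a shell. A sketch with a made-up token (not a real Knox token; a real payload carries additional claims such as issuer and expiry):

```shell
# A made-up JWT: header.payload.signature, each part base64url-encoded.
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzbGFjaHRlcm1hbiJ9.c2ln'

# Extract the payload, translate base64url to base64, re-add padding.
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
case $((${#PAYLOAD} % 4)) in 2) PAYLOAD="$PAYLOAD==";; 3) PAYLOAD="$PAYLOAD=";; esac

echo "$PAYLOAD" | base64 -d; echo   # prints {"sub":"slachterman"}
```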
You will need the Single Sign-On URL, which will be of the form:

https://<KNOX_HOST>:<KNOX_PORT>/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client

It's very important that you specify the URL parameter pac4jCallback=true; otherwise the browser will get stuck in an infinite redirect loop. In Okta, you can also use this URL for Recipient URL, Destination URL, and Audience Restriction.

Another thing to note on the Okta side is that users will log in with an email address, and Okta, by default, will pass the full address to KnoxSSO. We will map this to a username in the Knox configuration (alternatively, we could have configured Okta to send just the Email Prefix).

After creating the application in Okta, we are ready to configure the HDP side in Ambari. NOTE: LDAP authentication for Ambari must already be enabled for KnoxSSO. Please see the Ambari docs to complete this within your environment. We will pick up with the Knox configuration in part II.
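The principal-mapping step described above boils down to taking the local part of the email address. As a trivial shell illustration (the address is an example):

```shell
# Map the full address Okta presents to the short name HDP expects.
OKTA_USER='slachterman@hortonworks.com'
SHORT_NAME=${OKTA_USER%%@*}   # strip everything from the first '@'
echo "$SHORT_NAME"            # -> slachterman
```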
11-10-2017
06:29 PM
This should work in NiFi 1.3 as well; there is nothing specific to HDF. That error sounds like you haven't provided core-site.xml as one of the Hadoop Configuration Resources, as discussed in the article.
10-12-2017
06:27 PM
This approach is supported by HDI 3.5. It appears HDI 3.6 relies on Hadoop 2.8 code. I can look into an approach for HDI 3.6, but NiFi bundles Hadoop 2.7.3.
10-03-2017
04:33 PM
In previous releases of HDP, client-side caching of keys could result in unexpected behavior with WebHDFS. Consider the following steps:

1. Create two keys in Ranger KMS: user1_key and user2_key.
2. Add two resource-based policies, one per user:
User1_encr_policy: allow the Decrypt_EEK permission to user1 only.
User2_encr_policy: allow the Decrypt_EEK permission to user2 only.
3. Add two encryption zones: user1_zone (using user1_key) and user2_zone (using user2_key).
4. Run the following command; because of the caching issue, you may be able to access the content of the test.csv file from user1_zone as user2:

curl -i -L "http://sandbox.hortonworks.com:50070/webhdfs/v1/customer/user1_zone/test.csv?user.name=user2&op=OPEN"

HDP-2.6.1.2 includes HADOOP-13749, which fixes the caching issue. The FS cache and KMS provider cache can also be disabled by changing the configuration as follows:

fs.hdfs.impl.disable.cache = true
dfs.client.key.provider.cache.expiry = 0
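As a sketch, the equivalent *-site.xml entries would look like the following (fs.hdfs.impl.disable.cache typically lives in core-site.xml and the KMS cache expiry in hdfs-site.xml; verify placement for your release):

```xml
<!-- core-site.xml -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.client.key.provider.cache.expiry</name>
  <value>0</value>
</property>
```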
05-11-2017
02:37 PM
2 Kudos
Kerberos is a widely-used authentication system and is used throughout the Hadoop ecosystem for strong authentication. The KDC, or key distribution center, is the Kerberos server application that exposes the Authentication Service (AS) and Ticket Granting Service (TGS), as well as hosting the Kerberos principal database. Active Directory is used in many enterprises for Identity and Access Management.

A common architecture in enterprise deployment scenarios is to make use of a local cluster KDC (often using the MIT KDC packaging) to host the service and host principals associated with the cluster, and to configure a one-way trust between the AD domain and the cluster realm. This has the advantage of offloading Kerberos traffic from the domain controller(s), and many enterprises do not want to host cluster principals within their AD domain in any case.

So how does a one-way trust work? The first thing to note is that the trust is instantiated by the existence of a special cross-realm principal. For example, if realm B.EXAMPLE.COM trusts realm A.EXAMPLE.COM, clients in the realm A.EXAMPLE.COM can authenticate to services in B.EXAMPLE.COM. In order for a client of A.EXAMPLE.COM to access a service in the B.EXAMPLE.COM realm, both realms must share a key for a principal named krbtgt/B.EXAMPLE.COM@A.EXAMPLE.COM (and both keys must have the same key version number associated with them).

To access a cross-realm service, the user first contacts their home KDC's AS (in the scenario at hand, the AD domain controller, which exposes the KDC service) asking for a TGT that will be used with the TGS of the foreign realm. If there is a direct trust relationship between the home realm and the foreign realm (practically materialized in shared inter-realm keys, per the above), the home KDC delivers the requested cross-realm TGT. The user then contacts the cluster MIT KDC, in the foreign realm, presenting the cross-realm TGT and requesting a service ticket for the service in question. Finally, the user contacts the cluster service itself, presenting the service ticket.

Therefore, we can conclude that the AD user needs to be able to contact the MIT KDC server (usually tcp/88). Please note this means that if the cluster is within a secured network zone which includes the MIT KDC host, then there needs to be a firewall rule allowing AD clients to contact this host (again, usually over tcp/88). Please see the diagram below.

References:
RFC 5868 Kerberos and its Application in Cross-realm Operations
RedHat: Setting up Cross-realm Authentication
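On the configuration side, the direct trust described above is typically reflected in krb5.conf on cluster hosts. A hedged sketch using the same example realms (a "." value means a direct trust path with no intermediate realm; the domain_realm mapping is a hypothetical example):

```ini
# krb5.conf fragment -- clients in A.EXAMPLE.COM (AD) can reach services
# in B.EXAMPLE.COM (cluster realm) via a direct trust path.
[capaths]
  A.EXAMPLE.COM = {
    B.EXAMPLE.COM = .
  }

[domain_realm]
  .cluster.example.com = B.EXAMPLE.COM
```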
05-04-2017
02:08 PM
Hi @Manmeet Kaur, please post this on HCC as a separate question.