Member since: 06-20-2016
Posts: 251
Kudos Received: 196
Solutions: 36
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9407 | 11-08-2017 02:53 PM
 | 1967 | 08-24-2017 03:09 PM
 | 7573 | 05-11-2017 02:55 PM
 | 6030 | 05-08-2017 04:16 PM
 | 1849 | 04-27-2017 08:05 PM
03-27-2018 01:24 PM
7 Kudos
Now that we have the baseline configuration in place for KnoxSSO, as documented in part I of the series, we are ready to configure single sign-on for Ranger. This is a nice convenience for administrators who live in both Ambari and Ranger as part of their daily platform activities, and it is also a security improvement relative to local passwords.

To configure Ranger for KnoxSSO, we'll need the public key for Knox. Recall that there are a few ways to obtain this; we'll use the following:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.crt

You'll want to copy the base64 data between the BEGIN CERTIFICATE and END CERTIFICATE lines.

We're now ready to configure Ranger. All we need to set is the Knox SSO provider URL and the SSO public key. The SSO public key is the certificate data we just copied; the SSO provider URL is the URL we configured in part I that corresponds to the KnoxSSO topology (the relevant properties are sketched at the end of this post).

Now let's try to log in to Ranger using the Quick Link from Ambari. You should be seamlessly logged in as the user that authenticated to the IdP!
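For reference, here is a rough sketch of what those Ranger settings might look like. The property names are shown as they appear in ranger-admin-site in recent HDP releases (verify against your version), and the hostname and key data are placeholders:

# sketch only: hostname and key are placeholders for your environment
ranger.sso.enabled=true
ranger.sso.providerurl=https://knox-host-fqdn.example.com:8443/gateway/knoxsso/api/v1/websso
# single-line base64 certificate body, BEGIN/END lines removed
ranger.sso.publicKey=MIIC...

These values typically surface in Ambari under Ranger's advanced configuration; after saving them, Ranger needs a restart before the SSO redirect takes effect.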
02-11-2018 11:37 PM
7 Kudos
The HDF 3.1 release supports single sign-on to NiFi using KnoxSSO. This article assumes you've already completed setting up KnoxSSO, as discussed in part I and part II of this series. We'll further assume that NiFi has been configured for baseline security, as documented in this HCC article.

Once the websso topology has been defined in your Knox configuration, the steps to make NiFi a participating application in KnoxSSO are straightforward. A couple of notes:

1) Make sure nifi.security.user.login.identity.provider is blank, since you'll be using the KnoxSSO topology's authentication method--i.e., a JWT-based federation provider--to gain access to the NiFi UI.

2) Make sure the value for knoxsso.token.ttl is reasonable; the default is 30000 ms, or 30 s. A larger value like 36000000--or 10 hours--likely makes sense for production environments.

Next, we'll need to grab the Knox server's public key in order to configure NiFi as a participating application. You can use this snippet, where ${knoxserver} is the host running the Knox Gateway:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.pem

You can then copy the knox.pem file that gets created in /tmp to the NiFi host(s) that need this public key to verify the validity of tokens signed by Knox. For this example, we'll copy knox.pem to /usr/hdf/current/nifi/conf on the NiFi host(s).

We are now ready to configure NiFi; only three properties are required, sketched below. Please note that we should replace the nifi.security.user.knox.url value with the KnoxSSO URL specific to our environment.
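As a concrete sketch, the three nifi.properties entries would look something like this (the Knox hostname is a placeholder, and hadoop-jwt is the usual default cookie name for KnoxSSO):

# sketch only: replace the hostname with your KnoxSSO endpoint
nifi.security.user.knox.url=https://knox-host-fqdn.example.com:8443/gateway/knoxsso/api/v1/websso
# path to the Knox public key copied above
nifi.security.user.knox.publicKey=/usr/hdf/current/nifi/conf/knox.pem
# cookie carrying the Knox-signed JWT (hadoop-jwt is the usual default)
nifi.security.user.knox.cookieName=hadoop-jwt

With these in place (and nifi.security.user.login.identity.provider left blank), hitting the NiFi UI should bounce you through the Knox websso endpoint and back.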
02-06-2018 08:22 PM
No, it is not deterministic in Hive 1.X with the default of hive.support.concurrency=false. Hive 1.X has a non-ACID, ZooKeeper-based lock manager, but it makes readers wait and is not recommended. The ACID implementation doesn't block readers, but it is not available in the current HDP releases. It may also be worth looking at EXCHANGE PARTITION (see the sketch below); it is not exactly atomic either, just a smaller window for the non-determinism. Without locks, the files are written to HDFS in a new directory, and then the directory is renamed. This can lead to a race condition: queries that started on the basis of the old directory could fail. (LLAP is an exception to this rule because it uses inodes, not filenames, as references.)
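For reference, a minimal EXCHANGE PARTITION sketch; the table and partition names here are made up, and per the Hive syntax the partition moves from the table named in the WITH clause into the target table:

-- moves partition dt='2018-02-06' from staging_tbl into target_tbl
ALTER TABLE target_tbl EXCHANGE PARTITION (dt='2018-02-06') WITH TABLE staging_tbl;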
01-26-2018 02:39 PM
@Sankaru Thumuluru, you need to replace FQDN with the fully-qualified domain name of the host running the Ranger Admin service in your environment. Your curl command is returning a 404 because http://<FQDN>:6080/service/public/v2/api/servicedef/name/hive is not a valid URL.
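For example, a working call would look something like this (the hostname and credentials are placeholders for your environment):

curl -u admin:admin -X GET "http://ranger-admin.example.com:6080/service/public/v2/api/servicedef/name/hive"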
01-11-2018 05:54 AM
Yes, use UpdateAttribute and the expression language to add missing values as appropriate.
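As a sketch, assuming a flowfile attribute named status that is sometimes missing: in UpdateAttribute, add a property named status with a value like

${status:replaceNull('unknown')}

replaceNull substitutes the default only when the attribute is absent, so existing values pass through untouched.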
11-28-2017 10:47 PM
5 Kudos
In part I of this series, we reviewed preliminaries related to SSO, including LDAP authentication for Ambari, and we set up an application in Okta to correspond to our KnoxSSO service provider for the SAML authentication flow. We are now ready to configure Knox within Ambari.

We will replace the form-based IdP configuration that Knox ships with out of the box with the pac4j federation provider. Pac4j is a Java security library used as a federation provider within Knox to support the OAuth, CAS, SAML, and OpenID Connect protocols. It must be used for SSO in association with the KnoxSSO service (and optionally with the SSOCookieProvider for access to REST APIs).

In Ambari, we'll navigate to Knox > Config > Advanced knoxsso-topology and add XML similar to the following:

<topology>
<gateway>
<provider>
<role>federation</role>
<name>pac4j</name>
<enabled>true</enabled>
<param>
<name>pac4j.callbackUrl</name>
<value>https://sslka-123-master2-1.field.hortonworks.com:8443/gateway/knoxsso/api/v1/websso</value>
</param>
<param>
<name>clientName</name>
<value>SAML2Client</value>
</param>
<param>
<name>saml.identityProviderMetadataPath</name>
<value>https://dev-999.oktapreview.com/app/redacted/sso/saml/metadata</value>
</param>
<param>
<name>saml.serviceProviderMetadataPath</name>
<value>/tmp/sp-metadata.xml</value>
</param>
<param>
<name>saml.serviceProviderEntityId</name>
<value>https://sslka-123-master2-1.field.hortonworks.com:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
<param>
<name>principal.mapping</name>
<value>slachterman@hortonworks.com=slachterman;</value>
</param>
</provider>
</gateway>
<service>
<role>KNOXSSO</role>
<param>
<name>knoxsso.cookie.secure.only</name>
<value>true</value>
</param>
<param>
<name>knoxsso.token.ttl</name>
<value>30000</value>
</param>
<param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https:\/\/(knox-host-fqdn\.example\.com|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
</param>
</service>
</topology>
A few things to note here:

- The callback URL for KnoxSSO is the WebSSO endpoint without the URL parameter.
- The identity provider metadata path points to the metadata for the Okta application you configured in part I. You can find this URL in Okta, within your application configuration on the Sign On tab; see Identity Provider metadata (under the View Setup Instructions button).
- The service provider metadata path is a dummy location; it can't be NULL or you will see an exception in the current release.
- The principal mapping converts the username that Okta presents to the short name used throughout HDP. A regex mapping is also available and may be more manageable for production.
- Note the knoxsso.cookie.secure.only setting: it's important that the session cookie only be served over HTTPS to protect against session hijacking.
- The knoxsso.redirect.whitelist.regex must match your environment; knox-host-fqdn.example.com is a dummy placeholder in the topology file above.

After Knox has been configured, the last step is to configure Ambari for SSO, which we'll do from the command line. You'll need to SSH to the host where Ambari Server is running. As a preliminary step, you'll need the public certificate for Knox. You can use this snippet, where ${knoxserver} is the host running the Knox Gateway:

openssl s_client -connect ${knoxserver}:8443 < /dev/null | openssl x509 -out /tmp/knox.crt

To configure Ambari, we'll run:

sudo ambari-server setup-sso

For the provider URL, enter your callback URL:

http://<KNOX_HOST>:<KNOX_PORT>/gateway/knoxsso/api/v1/websso

Then paste the public certificate without the header and footer (don't include the BEGIN CERTIFICATE or END CERTIFICATE lines; a one-liner for producing the header-free data is sketched at the end of this post). You can accept all other defaults. Finally, you'll need to run:

sudo ambari-server restart

You should now be all set! After saving the above Knox configuration in Ambari and restarting Knox and any required services, try navigating to Ambari to test. If everything has been set up correctly, you'll be logged in to Ambari after authenticating to Okta.

In future posts in this series, we'll take a look at the form-based IdP that's included with KnoxSSO, setting up SSO for Ranger, and other topics!
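As referenced above, here's a quick way to produce the single-line, header-free base64 data that setup-sso expects, assuming the /tmp/knox.crt produced by the openssl step:

# drop the BEGIN/END CERTIFICATE lines and join the base64 body into one line
grep -v 'CERTIFICATE' /tmp/knox.crt | tr -d '\n'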
11-28-2017 10:20 PM
6 Kudos
Part II can be found here.

There are many web UIs--think Ambari, Ranger, etc.--in the HDP platform that require user authentication, and traditionally every UI required a separate login. With the federation capabilities provided by Apache Knox, the HDP platform can support federated identity and a single sign-on experience for users. In particular, the flexibility of the Knox authentication and federation providers allows KnoxSSO to support generalized authentication events, via exchange of a common JWT-based token. Without the token exchange capabilities offered by KnoxSSO, each UI would need to integrate with each desired IAM solution on its own.

KnoxSSO comes with its own form-based IdP, which allows for easily integrating a form-based login with the enterprise AD/LDAP server. We will cover that in a future article; at present we'll focus on the integration with Okta using SAML 2.0. You can find more information regarding SAML on Okta's website: https://developer.okta.com/standards/SAML/

This architecture is an example of federated identity. When a user navigates to the Ambari UI, they will be redirected to Okta to authenticate. After authenticating with their Okta credentials (and possibly an MFA mechanism), or if they've already authenticated to Okta and their session is still valid (session expiration is typically 12-18 hours), they will be redirected back to Ambari and transparently logged in to the application.

You can sign up for an Okta dev instance at https://developer.okta.com/

You will create a new application in Okta which will contain the required endpoints and metadata for verifying login requests and redirecting users back to the KnoxSSO endpoint after successful authentication. You will need the Single Sign-On URL, which will be of the form:

http://<KNOX_HOST>:<KNOX_PORT>/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client

It's very important that you specify the URL parameter pac4jCallback=true; otherwise the browser will get stuck in an infinite redirect loop. In Okta, you can also use this URL for the Recipient URL, Destination URL, and Audience Restriction.

Another thing to note on the Okta side is that users will log in with an email address, and Okta, by default, will pass the full address to KnoxSSO. We will map this to a username in the Knox configuration (alternatively, we could have configured Okta to send just the Email Prefix).

After creating the application in Okta, we are ready to configure the HDP side in Ambari. NOTE: LDAP authentication for Ambari must already be enabled for KnoxSSO. Please see the Ambari docs to complete this within your environment. We will pick up with Knox configuration in part II.
11-10-2017 06:29 PM
This should work in NiFi 1.3 as well; there is nothing specific to HDF. That error sounds like you haven't provided core-site.xml as one of the Hadoop Configuration Resources, as discussed in the article.
11-08-2017 06:58 PM
There isn't a processor to do this, but this HCC article covers how to accomplish it with a script: https://community.hortonworks.com/questions/110551/how-to-remove-a-cache-entry-identifier-from-distri.html
11-08-2017 02:53 PM
2 Kudos
Yes, that's correct for Spark 2.1.0 (among other versions); please see https://issues.apache.org/jira/browse/SPARK-19019. Per the JIRA, this is resolved in Spark 2.1.1 and Spark 2.2.0.