Member since: 06-20-2016
Posts: 251
Kudos Received: 196
Solutions: 36
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9635 | 11-08-2017 02:53 PM
 | 2048 | 08-24-2017 03:09 PM
 | 7793 | 05-11-2017 02:55 PM
 | 6389 | 05-08-2017 04:16 PM
 | 1930 | 04-27-2017 08:05 PM
01-31-2017
05:59 PM
@Vandana K R you need to use curl's negotiate option to authenticate via SPNEGO:
kinit -kt /etc/security/keytabs/rangerkms.service.keytab rangerkms/HOST@DOMAIN
curl --negotiate -u : -H 'Content-Type: application/json' http://HOST:9292/kms/v1/key/mykey/_metadata
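If that call works, the same negotiated session can be pointed at the other stock Hadoop KMS REST endpoints. As a quick sketch (assuming the standard KMS API paths, with HOST and port as in your environment), listing the key names looks like:
curl --negotiate -u : 'http://HOST:9292/kms/v1/keys/names'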
01-31-2017
05:07 PM
1 Kudo
The 'Pseudo' identity assertion provider was renamed to 'Default' in KNOX-425 and KNOX-426. The 'Pseudo' provider is still supported for backwards compatibility.
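For reference, a minimal sketch of how the provider appears in a Knox topology file; the rest of the topology and the other providers are omitted, and per the note above 'Pseudo' would still be accepted in place of 'Default':
<provider>
  <role>identity-assertion</role>
  <name>Default</name>
  <enabled>true</enabled>
</provider>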
01-25-2017
07:35 PM
4 Kudos
OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol. It is used for central management of accounts (users, hosts, and services) and can be used in concert with a KDC to provide authentication within the Hadoop ecosystem. Fundamentally, LDAP functions like a database in many ways and can be used to store almost any information.

We will assume that you have a fresh CentOS 7 host available that will host OpenLDAP. Make sure you have network connectivity between any clients and this server and that DNS resolution is working. ssh to the host and, as root, install the packages we need with yum:

yum -y install openldap compat-openldap openldap-clients openldap-servers openldap-servers-sql openldap-devel

We'll also start the LDAP daemon (called slapd) and enable it to auto-start on system boot:

systemctl start slapd.service
systemctl enable slapd.service

Next, run the slappasswd command to create an LDAP root password. Take note of the entire hashed value that is returned as output (it starts with {SSHA}), as you'll use it throughout this article.
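For reference, the interaction looks roughly like this (the hash shown is a made-up placeholder, not a value to reuse):

slappasswd
New password:
Re-enter new password:
{SSHA}2pLlqTtVxGkzXhV0r8y3kWm1sCdEaBfQ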
We'll now configure the OpenLDAP server in a couple of steps. We'll create LDIF text files and then use the ldapmodify command to push the configuration to the server. These settings will ultimately land in /etc/openldap/slapd.d, but the files there shouldn't be edited manually. The first file updates olcSuffix, the domain for which your LDAP server provides account information; olcRootDN, the root distinguished name (DN) of the user who has unrestricted administrative access; and olcRootPW, the root password hash you generated above. My domain is field.hortonworks.com, or dc=field,dc=hortonworks,dc=com, and my root DN is cn=ldapadm,dc=field,dc=hortonworks,dc=com. Create the following db.ldif file using vi or your favorite editor (the blank lines between the three change records are required):

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=field,dc=hortonworks,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=ldapadm,dc=field,dc=hortonworks,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}theHashedPasswordValueFromSlapPasswd
We'll then push this config:

ldapmodify -Y EXTERNAL -H ldapi:/// -f db.ldif
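As an optional sanity check, you can read those settings back from the config database over the local ldapi socket (run as root; this is just a sketch, and the {2}hdb DN matches the database entry used above):

ldapsearch -Y EXTERNAL -H ldapi:/// -b 'olcDatabase={2}hdb,cn=config' olcSuffix olcRootDN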
We'll next restrict monitor access to the ldapadm user. Create monitor.ldif with the following content:

dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=ldapadm,dc=field,dc=hortonworks,dc=com" read by * none

And push that config change:

ldapmodify -Y EXTERNAL -H ldapi:/// -f monitor.ldif

In order to communicate securely with the OpenLDAP server, we'll need a certificate and associated private key. In a production environment these would likely be obtained from our PKI administrator, but in development environments a self-signed certificate and associated private key can be created using a command like the one below:

openssl req -new -x509 -nodes -out /etc/openldap/certs/myldap.field.hortonworks.com.cert -keyout /etc/openldap/certs/myldap.field.hortonworks.com.key -days 365

Set the owner and group of both files to ldap:ldap. We'll then create certs.ldif to configure OpenLDAP for secure communication over LDAPS:

dn: cn=config
changetype: modify
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/myldap.field.hortonworks.com.cert
dn: cn=config
changetype: modify
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/myldap.field.hortonworks.com.key

We can then push the config file and test the configuration:

ldapmodify -Y EXTERNAL -H ldapi:/// -f certs.ldif
slaptest -u
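As an optional check that the TLS material is being served, you can inspect the certificate from a client. This is a sketch with two assumptions: on CentOS 7 the ldaps:// listener on port 636 is only active if SLAPD_URLS in /etc/sysconfig/slapd includes ldaps:///, and for a self-signed certificate your client may need TLS_REQCERT allow (or the cert added to TLS_CACERT) in /etc/openldap/ldap.conf.

# Show the certificate presented on the LDAPS port
openssl s_client -connect myldap.field.hortonworks.com:636 -showcerts < /dev/null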
We're now ready to set up the initial LDAP database. First, copy the sample database configuration file to /var/lib/ldap and update the file permissions:

cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/*

Next, add the cosine, nis, and inetorgperson LDAP schemas:

ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

Finally, create a base.ldif file for your domain (again, blank lines separate the entries):

dn: dc=field,dc=hortonworks,dc=com
dc: field
objectClass: top
objectClass: domain

dn: cn=ldapadm,dc=field,dc=hortonworks,dc=com
objectClass: organizationalRole
cn: ldapadm
description: LDAP Manager

dn: ou=People,dc=field,dc=hortonworks,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=field,dc=hortonworks,dc=com
objectClass: organizationalUnit
ou: Group
We'll now push these changes to OpenLDAP using the ldapadm user (sometimes referred to as the bind user):

ldapadd -x -W -D "cn=ldapadm,dc=field,dc=hortonworks,dc=com" -f base.ldif

You'll be prompted for the root password. From here, I prefer to use a GUI to create additional users. Apache Directory Studio is a nice multi-platform tool and can be downloaded here. Within Apache Directory Studio, you can create a new connection in the lower left-hand pane, pointing it at your LDAP host and port and binding as the ldapadm root DN with the root password. Once you connect successfully, you can create your organizational structure and users accordingly. These steps are based on the valuable tutorial provided here.
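Before moving on, a quick command-line check (a sketch; substitute your own host, base DN, and bind DN) confirms that the base entries exist and that the ldapadm bind works:

ldapsearch -x -W -H ldap://localhost -D "cn=ldapadm,dc=field,dc=hortonworks,dc=com" -b "dc=field,dc=hortonworks,dc=com" "(objectClass=organizationalUnit)" ou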
01-25-2017
06:50 PM
4 Kudos
Apache Ranger uses an embedded Tomcat server to provide the Web UI for Ranger administration. A previous HCC article provided details on maintenance of the log files that are managed by the log4j configuration, including xa_portal.log, ranger_admin_perf.log, and xa_portal_sql.log.
We're going to focus on maintenance of the access_log* logs that get automatically generated by Tomcat, but which are not managed by this log4j configuration. With embedded Tomcat, the configuration is contained within the code for the AccessLogValve (as you can see, it uses an hourly rotation pattern unless overridden by ranger.accesslog.dateformat).
We'll use the logrotate application in CentOS/RHEL to manage these access_log* logs, since the number of files can grow large without rotation and removal in place. You can check how many of these files you have on your Ranger Admin node by running (there would be one access_log* file per hour for each day during which the service has run continuously):
ls /var/log/ranger/admin | cut -d '.' -f 1 | uniq -c
Within /etc/logrotate.d, we'll create a configuration specific to these Ranger logs, as the configuration for logrotate, in /etc/logrotate.conf by default, will include these application-specific configurations as well.
Create a new file (as root) ranger_access in /etc/logrotate.d in your favorite editor and then insert:
/var/log/ranger/admin/access_log* {
daily
copytruncate
compress
dateext
rotate 5
maxage 7
olddir /var/log/ranger/admin/old
missingok
}
This is just an example logrotate configuration. I'll note a few items; please see the man page for details on each of these options and some additional examples.
The copytruncate option ensures that Tomcat can keep writing to the same file handle (as opposed to writing to a newly created file, which would require recycling Tomcat).
The compress option will use gzip by default.
The maxage option limits how old rotated files can be before they are removed.
The olddir option specifies the directory into which rotated logs are moved.
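One prerequisite worth spelling out: logrotate expects the olddir target to exist, so create it ahead of time. The ranger:ranger ownership below is an assumption based on the default Ranger Admin service user; adjust to your environment.

mkdir -p /var/log/ranger/admin/old
chown ranger:ranger /var/log/ranger/admin/old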
Logrotate will be invoked daily as a cronjob by default, due to the existence of the logrotate file in /etc/cron.daily. You can run logrotate manually by specifying the configuration:
sudo /usr/sbin/logrotate /etc/logrotate.conf
Note that logrotate keeps file state in /var/lib/logrotate.status and uses the date of last execution captured there to decide what to do with a logfile. You can also run logrotate with the -d flag to test your configuration (this won't actually do anything; it just produces output describing what would happen).
sudo /usr/sbin/logrotate -d /etc/logrotate.conf 2> /tmp/logrotate.debug
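You can also point logrotate at just the new Ranger file for a quicker, focused dry run; note that the global defaults from /etc/logrotate.conf are not applied in this case:

sudo /usr/sbin/logrotate -d /etc/logrotate.d/ranger_access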
As a result of this configuration, only 5 days' worth of logs are kept, and they're stored compressed in the ./old directory. This ensures that the Ranger Admin access_log* data does not grow unmanageably large.
01-16-2017
10:22 PM
Thanks @Arpan Rajani, appreciate the feedback and additional info. Yes, ownership of the file is important (there is a chown step in the instructions above).
01-08-2017
07:12 PM
5 Kudos
In order to secure access to the Zeppelin UI, we will want to enable TLS (as well as authentication) to ensure confidentiality of communication and to assure the identity of the Zeppelin server. Zeppelin uses Jetty as the underlying HTTP server, so we'll refer to the Jetty documentation. In this how-to we'll use a self-signed certificate; in production environments, you will likely obtain a CA-issued certificate or a trusted root certificate from your PKI team specific to your environment. Since self-signed certificates won't be trusted by your browser by default, we'll show how to trust this certificate on OS X 10.11.6 with Chrome version 55.0.2883.95 (other OS/browser combinations are outside the scope of this article).

To generate the self-signed certificate, we'll use the openssl and keytool utilities as follows (see this Jetty doc for reference):

openssl genrsa -des3 -out zeppelin.key
openssl req -new -x509 -key zeppelin.key -out zeppelin.crt
keytool -keystore keystore -import -alias zeppelin -file zeppelin.crt -trustcacerts
openssl pkcs12 -inkey zeppelin.key -in zeppelin.crt -export -out zeppelin.pkcs12
keytool -importkeystore -srckeystore zeppelin.pkcs12 -srcstoretype PKCS12 -destkeystore keystore

These steps, respectively: 1) create a new private key, 2) create a new self-signed certificate using this key, 3) import this self-signed certificate into a new keystore (called "keystore"), 4) create a PKCS12 file that combines the private key and certificate chain, and 5) convert this PKCS12 file to JKS format and import it into the keystore.

We'll then need to move this keystore to the appropriate location with the appropriate ownership and permissions:

mv keystore /usr/hdp/current/zeppelin-server/conf
chown zeppelin:zeppelin /usr/hdp/current/zeppelin-server/conf/keystore

Finally, we'll configure Zeppelin to use TLS in Ambari. There is currently a bug affecting HDP 2.5.0 and 2.5.3 regarding the use of relative paths for the keystore and truststore. The bug was introduced by ZEPPELIN-1319: when a relative path like conf/keystore is used, the Zeppelin server is unreachable and the error in the logs looks like the one below. ZEPPELIN-1810 fixes the bug introduced by ZEPPELIN-1319.

FAILED SslContextFactory@6cd166b8(/usr/hdp/current/zeppelin-server/conf/null,/usr/hdp/current/zeppelin-server/conf/null): java.io.FileNotFoundException: /etc/zeppelin/2.5.0.0-1245/0/null (No such file or directory)

With absolute paths for the keystore and truststore, such as /usr/hdp/current/zeppelin-server/conf/keystore, the Zeppelin server starts normally and is reachable over HTTPS.

Now we need to ensure that our Chrome browser trusts this self-signed certificate. Copy the certificate to your Desktop (click the broken HTTPS link > Details > View Certificate and drag and drop to the desktop). You can then import the certificate into the OS X keychain and set it as trusted. Make sure you restart Chrome. After doing so, you should see the green lock icon next to the HTTPS URL and should no longer see a browser warning.
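Two quick checks can confirm the keystore contents and the HTTPS endpoint. This is a sketch: the path and alias match the steps above, and 9995 is assumed as the Zeppelin port, so adjust to your environment.

# List the entries in the keystore (you'll be prompted for the keystore password)
keytool -list -keystore /usr/hdp/current/zeppelin-server/conf/keystore

# Hit the UI over HTTPS; -k skips certificate verification since the certificate is self-signed
curl -k -I https://$(hostname -f):9995/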
01-08-2017
06:44 PM
2 Kudos
In HDP 2.5, Zeppelin Notebook supports the ability to impersonate a user's security context when accessing a data set. This is critical to allow multi-tenant, fine-grained access using Ranger authorization policies. In order to support this, the user will first need to authenticate to the Zeppelin UI. (In Part 2, we harden this configuration to use LDAPS).
Zeppelin provides a few different authentication mechanisms (based on what the underlying Apache Shiro project supports); we'll use OpenLDAP in this walkthrough.
In this article, we'll assume you already have an OpenLDAP server installed in your environment, which contains the users to which you want to provide access to the Zeppelin UI (I plan to create a supporting article regarding OpenLDAP installation). We'll further assume that these users exist as local OS users (this is outside the scope of this article, but we can use a solution like SSSD for this).
Our organizational structure keeps user entries in the People OU under dc=field,dc=hortonworks,dc=com. With this structure, our search base will be restricted to dc=field,dc=hortonworks,dc=com and our user DN template will be cn={0},ou=People,dc=field,dc=hortonworks,dc=com, as the People OU contains the users.
The Zeppelin configuration in question can be found in the shiro_ini_content sub-section of the Advanced zeppelin-env section on the Configs tab in Ambari. The configuration should look like the below, with the search base, user DN template, and URL specific to your environment (my OpenLDAP server lives at sslkdc.field.hortonworks.com):
[main]
ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=field,dc=hortonworks,dc=com
ldapRealm.userDnTemplate = cn={0},ou=People,dc=field,dc=hortonworks,dc=com
ldapRealm.contextFactory.url = ldap://sslkdc.field.hortonworks.com:389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
securityManager.realm = $ldapRealm
[roles]
admin = *
[urls]
/** = authc
Make sure you specify securityManager.realm = $ldapRealm as in the last line of the [main] section, as this isn't part of the sample shiro.ini included with Zeppelin. After these changes are saved, make sure to restart Zeppelin.

Now, navigate to the Zeppelin UI. With authentication enabled, you should not see any notebooks until a user logs in. I'll now log in as user1 (full DN cn=user1,ou=People,dc=field,dc=hortonworks,dc=com, based on the userDnTemplate value above) with the password stored in OpenLDAP. After authenticating, I can navigate to the notebooks to which user1 has access.

In the next article in this series, we'll show how to use this authenticated subject to support impersonated access to data assets.
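If you want to confirm the LDAP realm is wired up without going through the browser, Zeppelin's login REST endpoint can be exercised with curl. A sketch only: it assumes Zeppelin listens on port 9995, ZEPPELIN_HOST is a placeholder, and the password is passed on the command line, which you shouldn't do with real credentials on a shared host.

curl -i --data 'userName=user1&password=user1Password' http://ZEPPELIN_HOST:9995/api/login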
12-20-2016
05:34 PM
@jzhang good call, I changed to yarn-cluster mode for the Livy interpreter and was not able to reproduce the error in HDP 2.5.
12-19-2016
06:28 PM
@jzhang I am seeing this issue in an HDP 2.5 cluster (Zeppelin 0.6.0 and Spark 1.6.2). In which HDP release was the fix backported?
12-19-2016
12:36 AM
1 Kudo
I had the same issue as described here: https://community.hortonworks.com/questions/69697/getting-error-user-session-not-found-403-when-usin.html
Changing livy.superusers in the Custom Livy conf in the Spark configuration so that the cluster name is in lowercase made that first 403 error go away. I am now running into another issue where the error in the UI is "Cannot start spark" and the logs show a problem authenticating to the Hive metastore using Kerberos. This may be https://issues.apache.org/jira/browse/SPARK-13478 for Spark 1.6.2 and Zeppelin 0.6.0; I'm researching further.
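For anyone hitting the same 403, a sketch of the relevant entry after the change; the zeppelin-<clustername> short name is an assumption based on the default HDP principal naming, with the cluster name portion in lowercase:

livy.superusers=zeppelin-myclustername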