Member since: 09-29-2015
Posts: 155
Kudos Received: 205
Solutions: 18

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 8533 | 02-17-2017 12:38 PM |
| | 1372 | 11-15-2016 03:56 PM |
| | 1918 | 11-11-2016 05:27 PM |
| | 15659 | 11-11-2016 12:16 AM |
| | 3144 | 11-10-2016 06:15 PM |
11-15-2016
03:56 PM
@Sunile Manjee The answer is yes. In HDP 2.5, Spark column security is available through LLAP and Ranger integration. You get fine-grained column-level access control for SparkSQL, with fully dynamic per-user policies, and it doesn't require views. Use standard Ranger policies and tools to control access and masking policies. The flow:
1. SparkSQL gets data locations, known as "splits", from HiveServer2 and plans the query.
2. HiveServer2 authorizes access using Ranger. Per-user policies, such as row filtering, are applied.
3. Spark gets a modified query plan based on the dynamic security policy.
4. Spark reads data from LLAP. Filtering/masking is guaranteed by the LLAP server.
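To make the column-security effect concrete, here is a purely hypothetical illustration (the table, column, and mask values are invented for this example, not taken from the post): with a Ranger column-masking policy applied, the same SparkSQL query can return different results per user.

```sql
-- Hypothetical: a Ranger masking policy on employees.ssn for non-privileged users
SELECT name, ssn FROM employees;
-- a privileged user might see:   alice | 123-45-6789
-- a masked user might see e.g.:  alice | xxx-xx-6789
```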
11-12-2016
12:49 AM
Where in livy is the hive configuration, i.e. the equivalent of the %jdbc hive.url parameter? When I run %livy.sql show tables, I get an empty result. I looked under the livy interpreter section in zeppelin and also in the ambari livy configs section, but did not see an explicit hive metastore url defined.
Labels:
- Apache Spark
- Apache Zeppelin
11-12-2016
12:20 AM
@vshukla any suggestions?
11-11-2016
10:17 PM
I am running into a situation where my livy session is killed after timing out, but when I try to rerun the livy paragraph a new session does not get created. I get this error: "Exception: Session not found, Livy server would have restarted, or lost session." I can solve it by bouncing the zeppelin notebook service, but that is inefficient in a multi-user environment. Any suggestions?
Labels:
- Apache Spark
- Apache Zeppelin
11-11-2016
05:27 PM
2 Kudos
@pankaj singh I documented this and have the list of interpreters working. Use this tutorial: https://community.hortonworks.com/content/kbentry/65449/ow-to-setup-a-multi-user-active-directory-backed-z.html
This is the critical section in shiro.ini:
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
Here is the excerpt of a valid shiro.ini:
[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = CN=binduser,OU=ServiceUsers,DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.systemUsername = binduser
activeDirectoryRealm.systemPassword = xxxxxx
activeDirectoryRealm.principalSuffix = @your.domain.name
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase = DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.url = ldaps://ad01.your.domain.name:636
activeDirectoryRealm.groupRolesMap = "CN=hadoop-admins,OU=CorpUsers,DC=sampledcfield,DC=hortonworks,DC=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
#ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
#ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=example,dc=com
#ldapRealm.contextFactory.url = ldap://ldaphost:389
#ldapRealm.contextFactory.authenticationMechanism = SIMPLE
#sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
#securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
admin = *
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
#/** = anon
/** = authc
#/** = authcBasic
11-11-2016
12:19 AM
It does work in a Kerberized cluster; you will need to create keytabs for the zeppelin and livy service accounts.
11-11-2016
12:16 AM
You can load a dynamic library into the livy interpreter by setting the livy.spark.jars.packages property to a comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version.

| Property | Example | Description |
|---|---|---|
| livy.spark.jars.packages | io.spray:spray-json_2.10:1.3.1 | Adding extra libraries to the livy interpreter |
https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/livy.html#adding-external-libraries
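The coordinate format above (groupId:artifactId:version, comma-separated for multiple packages) can be sanity-checked with a short Python sketch; the parse_coordinate helper here is purely illustrative, not part of any Zeppelin or Livy API:

```python
def parse_coordinate(coord):
    """Split one Maven coordinate of the form groupId:artifactId:version."""
    group_id, artifact_id, version = coord.split(":")
    return {"group": group_id, "artifact": artifact_id, "version": version}

# livy.spark.jars.packages takes a comma-separated list of such coordinates.
packages = "io.spray:spray-json_2.10:1.3.1"
for coord in packages.split(","):
    print(parse_coordinate(coord))
```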
11-10-2016
06:15 PM
1 Kudo
Hi Deepak, see my how-to tutorial: https://community.hortonworks.com/content/kbentry/65449/ow-to-setup-a-multi-user-active-directory-backed-z.html
If you are using a self-signed certificate, download the SSL certificate to where zeppelin is running:
<code>mkdir -p /etc/security/certificates
Store the certificate in this directory. Import the certificate for zeppelin to work with the self-signed certificate:
<code>cd /etc/security/certificates
keytool -import -alias sampledcfieldcloud -file ad01.your.domain.name.cer -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts
keytool -list -v -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts | grep sampledcfieldcloud
Create a home directory in hdfs for the user that will log in:
<code>hdfs dfs -mkdir /user/hadoopadmin
hdfs dfs -chown hadoopadmin:hdfs /user/hadoopadmin
Enable multi-user zeppelin: use ambari -> zeppelin notebook configs, expand Advanced zeppelin-env, and look for the shiro.ini entry. Below is a configuration that works with our sampledcfield Cloud.
<code>[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = CN=binduser,OU=ServiceUsers,DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.systemUsername = binduser
activeDirectoryRealm.systemPassword = xxxxxx
activeDirectoryRealm.principalSuffix = @your.domain.name
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase = DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.url = ldaps://ad01.your.domain.name:636
activeDirectoryRealm.groupRolesMap = "CN=hadoop-admins,OU=CorpUsers,DC=sampledcfield,DC=hortonworks,DC=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
#ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
#ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=example,dc=com
#ldapRealm.contextFactory.url = ldap://ldaphost:389
#ldapRealm.contextFactory.authenticationMechanism = SIMPLE
#sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
#securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
admin = *
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
#/** = anon
/** = authc
#/** = authcBasic
Grant Livy the ability to impersonate: use Ambari to update core-site.xml, and restart YARN & HDFS after making this change.
<code><property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value>
</property>
After running the livy notebook, make sure the yarn logs show the logged-in user as the user that is running; here hadoopadmin is the user logged in to the zeppelin notebook. You should see 2 applications running in yarn, the livy-session-X and the zeppelin app:
<code>application_1478287338271_0003 hadoopadmin livy-session-0
application_1478287338271_0002 zeppelin Zeppelin
11-08-2016
05:29 PM
6 Kudos
How to set up a multi-user (Active Directory backed) zeppelin integrated with LDAP and using the Livy REST server.
Pre-requisites: Set up the LDAP/AD integration for ambari using this lab (Enable Active Directory Authentication for Ambari): https://github.com/HortonworksUniversity/Security_Labs#lab-1
If you are using a self-signed certificate, download the SSL certificate to where zeppelin is running:
<code>mkdir -p /etc/security/certificates
Store the certificate in this directory. Import the certificate for zeppelin to work with the self-signed certificate:
<code>cd /etc/security/certificates
keytool -import -alias sampledcfieldcloud -file ad01.your.domain.name.cer -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts
keytool -list -v -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts | grep sampledcfieldcloud
Create a home directory in hdfs for the user that will log in:
<code>hdfs dfs -mkdir /user/hadoopadmin
hdfs dfs -chown hadoopadmin:hdfs /user/hadoopadmin
Enable multi-user zeppelin: use ambari -> zeppelin notebook configs, expand Advanced zeppelin-env, and look for the shiro.ini entry. Below is a configuration that works with our sampledcfield Cloud.
<code>[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = CN=binduser,OU=ServiceUsers,DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.systemUsername = binduser
activeDirectoryRealm.systemPassword = xxxxxx
activeDirectoryRealm.principalSuffix = @your.domain.name
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase = DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.url = ldaps://ad01.your.domain.name:636
activeDirectoryRealm.groupRolesMap = "CN=hadoop-admins,OU=CorpUsers,DC=sampledcfield,DC=hortonworks,DC=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
#ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
#ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=example,dc=com
#ldapRealm.contextFactory.url = ldap://ldaphost:389
#ldapRealm.contextFactory.authenticationMechanism = SIMPLE
#sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
#securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
admin = *
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
#/** = anon
/** = authc
#/** = authcBasic
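As a quick arithmetic check on the globalSessionTimeout value used above (Shiro takes it in milliseconds), a one-line Python sketch:

```python
# 24 hours expressed in milliseconds, matching globalSessionTimeout = 86400000
timeout_ms = 24 * 60 * 60 * 1000
print(timeout_ms)  # 86400000
```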
Grant Livy the ability to impersonate: use Ambari to update core-site.xml, and restart YARN & HDFS after making this change.
<code><property>
<name>hadoop.proxyuser.livy.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.livy.hosts</name>
<value>*</value>
</property>
Restart hdfs and yarn after this update. After running the livy notebook, make sure the yarn logs show the logged-in user as the user that is running; here hadoopadmin is the user logged in to the zeppelin notebook. You should see 2 applications running in yarn, the livy-session-X and the zeppelin app:
<code>application_1478287338271_0003 hadoopadmin livy-session-0
application_1478287338271_0002 zeppelin Zeppelin
Troubleshooting: explore the zeppelin and livy log files:
<code>tail -f /var/log/zeppelin/zeppelin-zeppelin-az1secure0.log
tail -f /var/log/zeppelin/zeppelin-interpreter-livy-zeppelin-az1secure0.log
Next Steps: This multi-part article shows how to secure Spark with Ranger using Zeppelin and Livy for multi-user access: Securing Spark with Ranger using Zeppelin and Livy for Multi-user access - Part 1
References:
https://zeppelin.apache.org/docs/0.6.0/interpreter/livy.html#faq
http://dev.hortonworks.com.s3.amazonaws.com/HDPDocuments/HDP2/HDP-2-trunk/bk_command-line-installation/content/ch21s07s02.html
http://dev.hortonworks.com.s3.amazonaws.com/HDPDocuments/HDP2/HDP-2-trunk/bk_command-line-installation/content/configuring_zep.html
10-18-2016
05:02 PM
2 Kudos
It should be yarn-cluster; verified against my cluster.
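The original question is not shown here, but assuming this answer refers to the Spark master setting of the Zeppelin livy interpreter (an assumption on my part), the corresponding interpreter property would look like this config fragment:

```
livy.spark.master  yarn-cluster
```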