Member since: 06-20-2016
Posts: 251
Kudos Received: 196
Solutions: 36
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9564 | 11-08-2017 02:53 PM
 | 2025 | 08-24-2017 03:09 PM
 | 7741 | 05-11-2017 02:55 PM
 | 6279 | 05-08-2017 04:16 PM
 | 1899 | 04-27-2017 08:05 PM
10-08-2016
08:11 PM
4 Kudos
In HDF 2.0, administrators can secure access to individual NiFi components in order to support multi-tenant authorization. This provides organizations the ability to create least privilege policies for distinct groups of users.
For example, let's imagine we have a NiFi Team and a Hadoop Team at our company, where the Hadoop Team can only access dataflows it has created, whereas the NiFi Team can access all dataflows. NiFi 1.0 in HDF 2.0 can use different authorizers, such as file-based policies (managed within NiFi) and Ranger-based policies (managed within Ranger), as well as custom, pluggable authorizers.
In this example, we'll use Ranger. For more detail on configuring Ranger as the authorizer for NiFi, please see this article. To separate the different teams' dataflows, we'll create separate process groups for each team. In NiFi, access policies are inheritable, which supports simpler policy management with the flexibility of overriding access at the component level. This means that all processors, as well as any nested process groups, within the Hadoop Team's root process group will automatically be accessible by the Hadoop Team.
Let's see an example of the canvas when nifiadmin, a member of the NiFi Team, is logged in.
On the other hand, when hadoopadmin, a member of the Hadoop Team, is logged in, we'll see a different representation, given the different level of access.
When hadoopadmin drills down into the NiFi Team's process group (the title is blank because this user lacks read access), notice that this user cannot make any changes (the toolbar items are grayed out).
Let's take a look at how this was configured in Ranger. The nifiadmin user has full access to NiFi and so has read and write access to all resources.
Since the hadoopadmin user has more restrictive access, we'll configure separate policies in Ranger for this user. Firstly, hadoopadmin will need read and write access to the /flow resource in order to access the UI and modify any dataflows.
Secondly, this user needs a policy for the root Hadoop Team process group. In order to configure this, we need to capture the globally unique identifier, or GUID, associated with this process group, which is visible and can be copied from the NiFi UI. The Ranger policy will provide read and write access to this process group within the /process-groups resource. With these policies in place, notice that hadoopadmin can modify the dataflow within the Hadoop Team process group (the toolbar items are not grayed out and new processors can be dragged and dropped onto the canvas).
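For reference, the same two grants can also be scripted against Ranger's public REST API. This is a minimal sketch, not the method shown in the article: the Ranger URL, credentials, service name, and the "nifi-resource" key are assumptions based on the stock NiFi service definition, and the GUID placeholder stands in for the value copied from the NiFi UI.

```python
import requests

RANGER = "http://ranger-host:6080"        # assumption: Ranger Admin URL
AUTH = ("admin", "admin")                 # assumption: admin credentials
SERVICE = "nifi"                          # assumption: NiFi service name in Ranger
HADOOP_PG_GUID = "<guid-from-nifi-ui>"    # replace with the GUID copied from the UI

def nifi_policy(name, resource, users):
    # Build a Ranger policy body granting READ and WRITE on one NiFi resource.
    return {
        "service": SERVICE,
        "name": name,
        "resources": {"nifi-resource": {"values": [resource],
                                        "isExcludes": False,
                                        "isRecursive": False}},
        "policyItems": [{"users": users,
                         "accesses": [{"type": "READ", "isAllowed": True},
                                      {"type": "WRITE", "isAllowed": True}],
                         "delegateAdmin": False}],
    }

# One policy for /flow (UI access), one for the team's root process group.
for body in (nifi_policy("hadoop-team-flow", "/flow", ["hadoopadmin"]),
             nifi_policy("hadoop-team-pg",
                         "/process-groups/" + HADOOP_PG_GUID,
                         ["hadoopadmin"])):
    r = requests.post(RANGER + "/service/public/v2/api/policy",
                      json=body, auth=AUTH)
    r.raise_for_status()
    print("created policy", r.json().get("id"))
```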
05-04-2017
02:08 PM
Hi @Manmeet Kaur, please post this on HCC as a separate question.
11-11-2016
05:27 PM
2 Kudos
@pankaj singh I documented this and have the list of interpreters working. Use this tutorial: https://community.hortonworks.com/content/kbentry/65449/ow-to-setup-a-multi-user-active-directory-backed-z.html

This is the critical section in shiro.ini:

sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
Here is the excerpt of a valid shiro.ini:

[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at
# http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2

# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = CN=binduser,OU=ServiceUsers,DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.systemUsername = binduser
activeDirectoryRealm.systemPassword = xxxxxx
activeDirectoryRealm.principalSuffix = @your.domain.name
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase = DC=sampledcfield,DC=hortonworks,DC=com
activeDirectoryRealm.url = ldaps://ad01.your.domain.name:636
activeDirectoryRealm.groupRolesMap = "CN=hadoop-admins,OU=CorpUsers,DC=sampledcfield,DC=hortonworks,DC=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
#ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
#ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=example,dc=com
#ldapRealm.contextFactory.url = ldap://ldaphost:389
#ldapRealm.contextFactory.authenticationMechanism = SIMPLE
#sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
#securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login

[roles]
admin = *

[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
#/** = anon
/** = authc
#/** = authcBasic
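Once shiro.ini is in place and Zeppelin is restarted, a quick sanity check is to authenticate against the /api/login endpoint configured above as shiro.loginUrl. A minimal sketch, assuming Zeppelin on its default 9995 port and an AD account to test with:

```python
import requests

ZEPPELIN = "http://zeppelin-host:9995"   # assumption: your Zeppelin URL and port

# POST form credentials to the Shiro login URL configured above; with
# principalSuffix set, the realm appends the domain to the short user name.
resp = requests.post(ZEPPELIN + "/api/login",
                     data={"userName": "aduser", "password": "secret"})
resp.raise_for_status()
# A successful response includes the authenticated principal and granted roles.
print(resp.json())
```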
09-12-2016
08:34 PM
1 Kudo
@John Park just a comment that using the built-in Ambari Infra SolrCloud deployment is likely the simplest way to use Solr to index the Ranger audit data. See https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/using_apache_solr_for_ranger_audits.html
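To confirm audits are flowing once the plugins point at the Infra SolrCloud, you can query the audit collection directly. A minimal sketch, assuming the default ranger_audits collection on Infra Solr's usual 8886 port with no Kerberos on the endpoint; the field names follow the stock Ranger audit schema:

```python
import requests

SOLR = "http://infra-solr-host:8886/solr"   # assumption: Infra Solr URL

# Fetch the five most recent audit events from the ranger_audits collection.
r = requests.get(SOLR + "/ranger_audits/select",
                 params={"q": "*:*", "rows": 5, "sort": "evtTime desc"})
r.raise_for_status()
for doc in r.json()["response"]["docs"]:
    print(doc.get("evtTime"), doc.get("repo"), doc.get("access"), doc.get("result"))
```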
11-11-2016
10:04 PM
Awesome! This worked for me. The timing could not have been better. I was working on setting up Zeppelin with OpenLDAP and Livy today (HDP 2.5), and this was one of the issues I had to solve. Thank you!
05-16-2018
12:59 PM
Hi all,
the instructions given in the article omit a setup prerequisite, namely "Installing VirtualBox Guest Additions" on the guest machine, that is to say, on our CentOS-based HDP Sandbox VM. Simple? Not really! The devil is in the details. In the VirtualBox documentation, Chapter 4 (Guest Additions), it is said that the guest must be prepared for building external kernel modules.

So we have a prerequisite of our prerequisite, which consists of "preparing [our] guest for building external kernel modules". And our HDP 2.6.4 doesn't seem ready: the running kernel (4.4.x, if I'm right) and the kernel-devel and kernel-headers versions do not match, which is a condition checked during the installation process (see the sketch after the links below for a quick way to check this). Trying to update kernel-devel/kernel-headers to match the 4.4.x version, I ended up with a conflict and was unable to update kernel-headers to the 4.4.x version. Removing the package (before reinstalling the 4.4.x version) doesn't seem to be an option either, because of its dependencies.

In the end, I didn't succeed in mounting shared folders into my HDP 2.6.4. Any help would be very much appreciated.

Philippe

My config: host: Windows 10 + VirtualBox 5.2.8; guest: HDP 2.6.4

Useful links:
- https://www.if-not-true-then-false.com/2010/install-virtualbox-with-yum-on-fedora-centos-red-hat-rhel/
- https://www.if-not-true-then-false.com/2010/install-virtualbox-guest-additions-on-fedora-centos-red-hat-rhel/
- https://access.redhat.com/discussions/3075051
- http://ftp.colocall.net/pub/elrepo/archive/kernel/el7/x86_64/RPMS/
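A minimal sketch, assuming a CentOS guest with rpm and uname available, to compare the running kernel against the installed kernel-devel/kernel-headers packages:

```python
import subprocess

def installed_versions(pkg):
    # Ask rpm for the installed VERSION-RELEASE strings of a package.
    res = subprocess.run(["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}\n", pkg],
                         capture_output=True, text=True)
    return res.stdout.split() if res.returncode == 0 else []

# uname -r includes an arch suffix (e.g. ".x86_64"); rpm reports only
# VERSION-RELEASE, so a prefix match is enough for the comparison.
running = subprocess.check_output(["uname", "-r"], text=True).strip()

for pkg in ("kernel-devel", "kernel-headers"):
    versions = installed_versions(pkg)
    ok = any(running.startswith(v) for v in versions)
    print(f"{pkg}: {versions or ['not installed']} vs running {running} "
          f"-> {'OK' if ok else 'MISMATCH'}")
```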
07-25-2016
07:27 PM
I am running this on the HDP Sandbox VM. I changed zeppelin.server.addr to sandbox.hortonworks.com, which is the /etc/hosts entry that points to 127.0.0.1 on my machine, and this resolved the issue in Chrome.
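A one-line sketch (assuming the /etc/hosts entry described above) to confirm the name resolves as expected:

```python
import socket

# Should print 127.0.0.1 if the /etc/hosts entry is in place.
print(socket.gethostbyname("sandbox.hortonworks.com"))
```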
07-20-2016
01:22 PM
@slachterman Thanks, will do.
09-27-2016
10:15 AM
I have encountered the same issue. After specifying the first user, "it1", as "Delegated Admin", managing grants/revokes worked. I guess a better approach would be to create an admin user, "hive_admin", and delegate all admin activities to this user.
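For what it's worth, in Ranger's policy model the delegation is just a flag on the policy item. A minimal sketch of such a policy body (the hive_admin user, service name, and wildcard scope are illustrative, not from this thread):

```python
# Sketch of a Ranger Hive policy that makes "hive_admin" a delegated admin;
# field names follow Ranger's public v2 policy model.
policy = {
    "service": "hive",                    # assumption: Hive service name in Ranger
    "name": "hive-admin-delegation",
    "resources": {
        "database": {"values": ["*"]},
        "table": {"values": ["*"]},
        "column": {"values": ["*"]},
    },
    "policyItems": [{
        "users": ["hive_admin"],
        "accesses": [{"type": "all", "isAllowed": True}],
        # delegateAdmin lets this user grant/revoke within the policy's scope.
        "delegateAdmin": True,
    }],
}
```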