Member since 04-09-2019 | 254 Posts | 140 Kudos Received | 34 Solutions
06-01-2017
08:39 PM
Nice and descriptive article @spolavarapu, keep it up!!!
05-28-2017
11:48 AM
Hello @Sharan Teja Malyala, what you are looking for is the rolesByGroup feature available in HDP 2.6. Please check this article to learn how to use it. Hope this helps!
05-28-2017
11:36 AM
12 Kudos
Update: This is an update to my previous article on the same topic, covering the new features added in HDP 2.6 (and Zeppelin 0.7).
Motivation: Starting with HDP 2.6, a new Shiro configuration implementation has been added to Zeppelin to handle LDAP/Active Directory authentication and authorization. It fixes a lot of known issues present in earlier versions (the bind issue, limited search/filter options, group-based authorization, etc.) and should be used for any kind of LDAP/AD authentication and authorization going forward.
Configuration:
1. While most of the configuration steps remain the same as in the previous article, the following "shiro_init_content" is where most of the magic happens. Note: before pasting this into your Zeppelin configuration, change the Active Directory details to suit your AD environment.
# Sample LDAP configuration for Active Directory user authentication; currently tested for a single realm
[main]
ldapRealm=org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.systemUsername=cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net
ldapRealm.contextFactory.systemPassword=SomePassw0rd
ldapRealm.contextFactory.authenticationMechanism=simple
ldapRealm.contextFactory.url=ldap://ad.somedomain.net:389
# Ability to set LDAP paging size if needed; default is 100
ldapRealm.pagingSize=200
ldapRealm.authorizationEnabled=true
ldapRealm.searchBase=OU=CorpUsers,DC=lab,DC=hortonworks,DC=net
ldapRealm.userSearchBase=OU=CorpUsers,DC=lab,DC=hortonworks,DC=net
ldapRealm.groupSearchBase=OU=CorpUsers,DC=lab,DC=hortonworks,DC=net
ldapRealm.userObjectClass=person
ldapRealm.groupObjectClass=group
ldapRealm.userSearchAttributeName = sAMAccountName
# Set search scopes for user and group. Values: subtree (default), onelevel, object
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.userSearchFilter=(&(objectclass=person)(sAMAccountName={0}))
ldapRealm.memberAttribute=member
# Format to parse & search group member values in 'memberAttribute'
ldapRealm.memberAttributeValueTemplate=CN={0},OU=CorpUsers,DC=lab,DC=hortonworks,DC=net
# No need to give userDnTemplate if memberAttributeValueTemplate is provided
#ldapRealm.userDnTemplate=
# Map from physical AD groups to logical application roles
ldapRealm.rolesByGroup = "hadoop-admins":admin_role,"hadoop-users":hadoop_users_role
# Force usernames returned from ldap to lowercase, useful for AD
ldapRealm.userLowerCase = true
# Enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operator
ldapRealm.groupSearchEnableMatchingRuleInChain = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of users is required then uncomment the lines below
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.realms = $ldapRealm
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration, and credential information by URL. Comment or uncomment the URLs below that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
#/api/version = anon
/api/interpreter/** = authc, roles[admin_role,hadoop_users_role]
/api/configurations/** = authc, roles[admin_role]
/api/credential/** = authc, roles[admin_role,hadoop_users_role]
#/** = anon
/** = authc
Let's discuss the new configuration options here:
2. ldapRealm.rolesByGroup = "hadoop-admins":admin_role,"hadoop-users":hadoop_users_role
This line maps the AD groups "hadoop-admins" and "hadoop-users" to custom roles that can be used in the [urls] section to control access for various Zeppelin users. Note that the short group names are to be used instead of fully qualified names like "cn=hadoop-admins,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net". The role names can be set to any name, but the same names should be used in the [urls] section.
3. ldapRealm.groupSearchEnableMatchingRuleInChain = true
A very powerful option to find all the groups that a given user is a member of in a single query. An LDAP search with this option traverses the LDAP group hierarchy up to the root to find all the groups; it is especially useful for nested groups. More info can be found here. Caution: this option can cause performance overhead (slow logins, etc.) if the LDAP hierarchy is not set up optimally.
4. ldapRealm.userSearchFilter=(&(objectclass=person)(sAMAccountName={0}))
Use this search filter to limit the scope of user results when looking up a user's Distinguished Name (DN). It is used only if userSearchBase and userSearchAttributeName are defined; if those two are not defined, userDnTemplate is used to look up the user's DN.
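Before restarting Zeppelin, it can save time to verify the bind account, the user filter, and nested-group resolution directly against AD. Here is a minimal sketch using the standard OpenLDAP ldapsearch client, reusing the sample values from the configuration above ('jdoe' is a hypothetical user, not part of the configuration):
Verify the bind account and the user search filter:
ldapsearch -H ldap://ad.somedomain.net:389 -D "cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net" -w SomePassw0rd -b "OU=CorpUsers,DC=lab,DC=hortonworks,DC=net" -s sub "(&(objectclass=person)(sAMAccountName=jdoe))" cn memberOf
Resolve nested group membership the same way groupSearchEnableMatchingRuleInChain does (1.2.840.113556.1.4.1941 is Active Directory's LDAP_MATCHING_RULE_IN_CHAIN OID):
ldapsearch -H ldap://ad.somedomain.net:389 -D "cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net" -w SomePassw0rd -b "OU=CorpUsers,DC=lab,DC=hortonworks,DC=net" -s sub "(&(objectclass=group)(member:1.2.840.113556.1.4.1941:=CN=jdoe,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net))" cn
If both queries return the expected entries, the realm configuration should authenticate and authorize against the same data.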
05-23-2017
01:59 AM
2 Kudos
Motivation: When Knox is configured for perimeter security, the end users need to depend heavily on cURL tool or the browser to access the Hadoop services exposed via Knox. Similarly, Hive queries can be submitted by using WebHCat (Templeton) service via Knox. User can also set various parameters required for Hive job to run correctly. cURL Command Syntax: Here's the cURL command syntax which can be used to submit a Hive Job via Knox: $curl -ivk -u <username>:<password> -d <Hive parameters> [-d ...] https://<knox-server-FQDN>:8443/gateway/<topology>templeton/v1/hive" Complete list of Hive parameters can be found in WebHCat cURL Command Reference. The most important Hive parameters are:
Hive Query: -d execute="<Hive-Query>" OR Hive Program: -d file="/hdfs/path/to/hive/program"
Specifies a Hive query string using 'execute', or the HDFS file name of a Hive program to run using 'file'. It is mandatory to provide either the 'execute' or the 'file' option.
Hive Configuration: -d define="NAME=VALUE"
Any Hive configuration value, like 'hive.execution.engine', can be set using 'define'. Multiple 'define's can be provided on the cURL command. One caveat: cURL does not seem to process the double equals sign in "define=NAME=VALUE" correctly; it erroneously converts it into "defineNAME=VALUE". The fix is to escape one equals sign with its URL-encoded equivalent. That is, any 'define' should be provided like this: -d define="hive.execution.engine%3Dmr"
Output Directory in HDFS: -d statusdir="/hdfs/path/to/output/directory"
Specifies an HDFS location where the output (and errors) of the Hive job execution will be written. Once the job is finished (success or failure), this location can be checked for the stdout, stderr, and exit code of the Hive query/program.
Example: With this knowledge, here's a working example cURL command that submits a Hive SELECT query as a job to the cluster via Knox. The output will be a YARN job id, which can be used to track the job's progress in the Resource Manager UI.
# curl -ivk -u hr1:passw0rd -d execute="select+*+from+hivetest;" -d statusdir="/user/hr1/hive.output7" -d define="hive.execution.engine%3Dmr" "https://knox-server.domain.com:8443/gateway/default/templeton/v1/hive"
* About to connect() to knox-server.domain.com port 8443 (#0)
* Trying 127.0.0.1... connected
* Connected to knox-server.domain.com (127.0.0.1) port 8443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=knox-server.domain.com,OU=Test,O=Hadoop,L=Test,ST=Test,C=US
* start date: Apr 07 23:02:54 2017 GMT
* expire date: Apr 07 23:02:54 2018 GMT
* common name: knox-server.domain.com
* issuer: CN=knox-server.domain.com,OU=Test,O=Hadoop,L=Test,ST=Test,C=US
* Server auth using Basic with user 'hr1'
> POST /gateway/default/templeton/v1/hive HTTP/1.1
> Authorization: Basic aHIxOkJhZc3Mjmq==
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.21 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: knox-server.domain.com:8443
> Accept: */*
> Content-Length: 98
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< Date: Fri, 19 May 2017 02:13:58 GMT
< Set-Cookie: JSESSIONID=1k52mpj6ot9rm1nwi2dc9qcvu;Path=/gateway/default;Secure;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Thu, 18-May-2017 02:13:58 GMT
< Content-Type: application/json; charset=UTF-8
< Server: Jetty(7.6.0.v20120127)
< Content-Length: 31
<
* Connection #0 to host knox-server.domain.com left intact
* Closing connection #0
{"id":"job_1495157584958_0016"} Hope this helps you out!
05-10-2017
06:29 AM
Hey @Timothy Spann, this is a really cool demo involving all my favorites: Hadoop, RPi, NiFi, and IoT. Great job, keep it up!
05-09-2017
08:26 PM
Hello @bhagan, this is a good article. Could you please add a few screenshots showing the actual input/output of the image processing? It's a bit hard to follow and imagine how it will work without them. Thanks!
04-27-2017
04:57 PM
Nice work @Arpit Khare! This is going to be quite useful for folks around here. Thank you and keep it up!!
04-01-2017
02:56 AM
9 Kudos
Motivation: In the past, there have been many instances, both here in the Community and outside, where HDP 2.5 Sandbox users were unable to enable Kerberos, install yum packages (SmartSense, KMS, the Kerberos client), or access the Zeppelin UI, like these:
1. https://community.hortonworks.com/questions/82455/zeppelin-ui-returns-error-503-hdp-25-sandbox.html
2. https://community.hortonworks.com/questions/68361/fail-to-install-kerberos-client-in-sandbox-25.html
3. https://community.hortonworks.com/questions/60349/enabling-kerberos-in-hdp-25-kerberos-clientpy-does.html
4. https://community.hortonworks.com/questions/68234/fail-to-install-kerberos-in-hdp-sandbox-25.html
5. https://community.hortonworks.com/questions/78464/ranger-kms-cannot-install-successfully-with-ambari.html
This article is aimed at resolving all these issues once and for all!
Target Audience: All users running the HDP 2.5 Sandbox image on VirtualBox / VMware.
Problem: There is a known bug involving Docker OverlayFS, Linux kernel 3.10, and the ext4 filesystem. Due to this bug, the HDP 2.5.0 Sandbox can have problems installing/deleting some services and files on the ext4 filesystem of the Docker 'sandbox' container. The behavior is usually a failure of the install/delete/removal with the following low-level error message:
[Error 22] Invalid argument
Reference bugs:
https://github.com/docker/docker/issues/10294
https://github.com/docker/docker/issues/12488
https://github.com/docker/docker/issues/20640
Solution: As of this writing, the available workaround is to upgrade the Linux kernel on the "host" Docker system (the VM that actually runs your HDP 2.5 Docker image).
Steps:
1. Log in to the HDP 2.5 VirtualBox "console" (note: not a regular SSH login!) using the root user credentials.
2. Stop the 'sandbox' Docker container:
# docker stop sandbox
3. Confirm the sandbox container is stopped:
# docker ps -a
4. Purge both rescue files in the /boot partition, because /boot is too small for the new kernel (this is required for the next step):
# rm /boot/vmlinuz-0-rescue-1fd5af1a38de4420b2f283cdbbc38136
rm: remove regular file ‘/boot/vmlinuz-0-rescue-1fd5af1a38de4420b2f283cdbbc38136’? y
# rm /boot/initramfs-0-rescue-1fd5af1a38de4420b2f283cdbbc38136.img
rm: remove regular file ‘/boot/initramfs-0-rescue-1fd5af1a38de4420b2f283cdbbc38136.img’? Y
5. Upgrade the Linux kernel of the 'host' Docker system (old kernel: 3.10.0-327.el7.x86_64, new kernel: 3.10.0-514.6.1.el7.x86_64):
# yum update kernel
Resolving Dependencies
--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-514.6.1.el7 will be installed
--> Processing Dependency: linux-firmware >= 20160830-49 for package: kernel-3.10.0-514.6.1.el7.x86_64
--> Running transaction check
---> Package linux-firmware.noarch 0:20150904-43.git6ebf5d5.el7 will be updated
---> Package linux-firmware.noarch 0:20160830-49.git7534e19.el7 will be an update
--> Processing Conflict: kernel-3.10.0-514.6.1.el7.x86_64 conflicts xfsprogs < 4.3.0
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package xfsprogs.x86_64 0:3.2.2-2.el7 will be updated
---> Package xfsprogs.x86_64 0:4.5.0-9.el7_3 will be an update
--> Processing Conflict: kernel-3.10.0-514.6.1.el7.x86_64 conflicts kmod < 20-9
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package kmod.x86_64 0:20-5.el7 will be updated
---> Package kmod.x86_64 0:20-9.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================================================
Installing:
kernel x86_64 3.10.0-514.6.1.el7 updates 37 M
Updating:
kmod x86_64 20-9.el7 base 115 k
xfsprogs x86_64 4.5.0-9.el7_3 updates 895 k
Updating for dependencies:
linux-firmware noarch 20160830-49.git7534e19.el7 base 31 M
Transaction Summary
=====================================================================================================================================================================================================
Install 1 Package
Upgrade 2 Packages (+1 Dependent package)
Total size: 70 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : linux-firmware-20160830-49.git7534e19.el7.noarch 1/7
Updating : kmod-20-9.el7.x86_64 2/7
Installing : kernel-3.10.0-514.6.1.el7.x86_64 3/7
Updating : xfsprogs-4.5.0-9.el7_3.x86_64 4/7
Cleanup : linux-firmware-20150904-43.git6ebf5d5.el7.noarch 5/7
Cleanup : xfsprogs-3.2.2-2.el7.x86_64 6/7
Cleanup : kmod-20-5.el7.x86_64 7/7
Verifying : kmod-20-9.el7.x86_64 1/7
Verifying : kernel-3.10.0-514.6.1.el7.x86_64 2/7
Verifying : xfsprogs-4.5.0-9.el7_3.x86_64 3/7
Verifying : linux-firmware-20160830-49.git7534e19.el7.noarch 4/7
Verifying : linux-firmware-20150904-43.git6ebf5d5.el7.noarch 5/7
Verifying : xfsprogs-3.2.2-2.el7.x86_64 6/7
Verifying : kmod-20-5.el7.x86_64 7/7
Installed:
kernel.x86_64 0:3.10.0-514.6.1.el7
Updated:
kmod.x86_64 0:20-9.el7
xfsprogs.x86_64 0:4.5.0-9.el7_3
Dependency Updated:
linux-firmware.noarch 0:20160830-49.git7534e19.el7
Complete!
6. Update the grub menu config to remove the rescue boot option:
# grub2-mkconfig -o /boot/grub2/grub.cfg
7. Restart the entire HDP Sandbox using the 'reboot' command and check that everything comes up without any issues. Notice the new Linux kernel option now shown during startup.
[Screenshots in the original post: the GRUB boot menu before and after the kernel upgrade]
8. Once the HDP services are back, you are ready to retry the operation that was failing earlier.
Hope this helps!
P.S. A BIG thanks and shout-out to my colleague Darwin Traver @dtraver for finding the missing pieces in this puzzle and putting them together; this wouldn't have been possible without his efforts!!!
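For a quick sanity check after the reboot, the following sketch confirms the new kernel and brings the container back up (assuming the container is named 'sandbox' as above and does not auto-start):
# uname -r
3.10.0-514.6.1.el7.x86_64
# docker start sandbox
# docker ps
The 'sandbox' container should be listed as Up; if uname still reports the old kernel, check the grub config from step 6 and reboot again.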
04-01-2017
01:42 AM
Many thanks @Hajime San for writing this one. Now haproxy can't frighten me anymore 🙂 Cheers!