Member since: 12-03-2016
Posts: 91
Kudos Received: 27
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 12183 | 08-27-2019 10:45 AM |
| | 3453 | 12-24-2018 01:08 PM |
| | 12135 | 09-16-2018 06:45 PM |
| | 2678 | 12-12-2016 01:44 AM |
04-03-2019
07:56 PM
According to the documentation at this link: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/data-operating-system/content/partition_a_cluster_using_node_labels.html the information in this article is wrong, because if you assign all the nodes to 3 different exclusive Node Label partitions AND you do not set a default Node Label on the default queue, then you will have no resources available to run any job sent to the default queue without an explicit label.
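As a sketch of what the linked documentation describes (property names are from the Capacity Scheduler docs; the queue path root.default and the label name "x" are example values), the default queue can be given access to a partition and a default label expression like this:

```properties
# capacity-scheduler.xml, shown as key=value for brevity.
# Allow the default queue to use the "x" partition (example label name):
yarn.scheduler.capacity.root.default.accessible-node-labels=x
# Send jobs submitted to the default queue without an explicit label
# to that partition instead of the (empty) default partition:
yarn.scheduler.capacity.root.default.default-node-label-expression=x
```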
04-01-2019
03:23 PM
If you set these properties by hand, take into account that in HDP 3.x the path for the Hive warehouse has changed to /warehouse/tablespace/managed/hive. Also, in the Ambari configuration for Druid 3.1 the property is set to hive.druid.storage.storageDirectory = {{druid_storage_dir}}, which expands to /apps/druid/warehouse, a different path from the Hive warehouse.
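To summarize the two paths involved (values as observed above; verify them against your own cluster's Ambari configuration, since they can differ between installs):

```properties
# HDP 3.x Hive warehouse location (changed from /apps/hive/warehouse in HDP 2.x):
hive.metastore.warehouse.dir=/warehouse/tablespace/managed/hive
# Druid deep-storage directory as set by Ambari; note it is NOT under the Hive warehouse:
hive.druid.storage.storageDirectory={{druid_storage_dir}}
# ({{druid_storage_dir}} expands to /apps/druid/warehouse)
```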
02-26-2019
03:23 AM
Were you able to make the Hive Warehouse Connector work with Kerberos in the Zeppelin Spark interpreter with USER IMPERSONATION enabled, or only when running as the "zeppelin" user? In my case all the other Spark client interfaces (spark-shell, pyspark, spark-submit, etc.) are working, and I'm experiencing the same problem as you when trying to use Livy with HWC. But in my case the Zeppelin Spark interpreter is also not working when I enable impersonation, which I need in order to impose authorization restrictions on Hive data access with Ranger. If you were able to make HWC work with the Zeppelin Spark interpreter AND impersonation enabled, I would be very grateful if you could share the changes you made in the interpreter's configuration to make this work. The Zeppelin Spark interpreter with impersonation disabled does work with HWC, but I NEED data-access authorization, so this is not an option for me. Best regards
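For reference, this is a minimal sketch of the HWC-related Spark properties I have working in the interpreter configuration on a Kerberized HDP 3.x cluster (hostnames, the realm and the jar version are placeholders; check the exact property names against the HDP docs for your version):

```properties
# Hive Warehouse Connector assembly jar shipped with HDP ("<version>" is a placeholder):
spark.jars=/usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar
# HiveServer2 JDBC endpoint and its Kerberos principal (example values):
spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://hs2-host:10000/
spark.sql.hive.hiveserver2.jdbc.url.principal=hive/_HOST@EXAMPLE.COM
# Hive metastore URI (example value):
spark.datasource.hive.warehouse.metastoreUri=thrift://metastore-host:9083
# Needed on Kerberized clusters so Spark obtains HiveServer2 delegation tokens:
spark.security.credentials.hiveserver2.enabled=true
```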
12-24-2018
05:38 PM
I followed this with HDP 2.6.5 and the HBase UI became accessible at the given URL, but it has many errors and broken links inside. I posted a question on how to fix this, and then an answer resolving most of these issues, here: https://community.hortonworks.com/questions/231948/how-to-fix-knox-hbase-ui.html You are welcome to test this and include these fixes in your article if you find it appropriate. Best regards
12-24-2018
01:44 PM
WARNING: when making the previous changes to the installed service.xml and rewrite.xml (under data/services/...), DO NOT create a backup copy (e.g. rewrite.xml.orig) or move the original version to a "backup" sub-folder under this path!! Knox will load ALL the xml files it finds under "/usr/hdp/current/knox-server/data/services" (weird but true!) and this will trigger many strange and confusing behaviors!! I wasted a few hours trying to make the above work, just because of this 😞
12-24-2018
01:08 PM
Thank you very much @scharan for your response. I tried these steps and indeed posted an answer based on them to my own question, because after some minor fixes (a few trivial errors such as duplicated slashes in rewrite patterns) these service definitions seemed to fix the header-link problems. But after some testing I found that this version introduced many other problems with the internal links, and realized the new situation was worse than with the original HDP version, so I reverted these changes and deleted my answer. Then I decided to patch the service definition in HDP 2.6.5, adding the missing rewrite rules to the rewrite.xml file as follows:

  <rule dir="OUT" name="HBASEUI/hbase/outbound/tables" pattern="/tablesDetailed.jsp">
    <rewrite template="{$frontend[url]}/hbase/webui/tablesDetailed.jsp"/>
  </rule>
- <rule dir="OUT" name="HBASEUI/hbase/outbound/logs" pattern="/logs/">
+ <rule dir="OUT" name="HBASEUI/hbase/outbound/procedures" pattern="/procedures.jsp">
+   <rewrite template="{$frontend[url]}/hbase/webui/procedures.jsp"/>
+ </rule>
+ <rule dir="OUT" name="HBASEUI/hbase/outbound/regionserver/nohome" pattern="/rs-status/">
+   <rewrite template="{$frontend[url]}/hbase/webui/"/>
+ </rule>
+
+ <rule dir="OUT" name="HBASEUI/hbase/outbound/logs" pattern="/logs">
    <rewrite template="{$frontend[url]}/hbase/webui/logs/"/>
  </rule>

This solved most of the problems, but the links to the Home ("/") and Metric Dump ("/jmx") pages were still wrong, even though the outgoing rewrite rule for "/jmx" was present in the rewrite.xml:

  <rule dir="OUT" name="HBASEUI/hbase/outbound/jmx" pattern="/jmx">
    <rewrite template="{$frontend[url]}/hbase/webui/jmx"/>
  </rule>

Some more testing showed me that Knox has a bug (or feature?) that prevents an outgoing implicit rule from matching when the pattern is fewer than 4 characters long (e.g. "jmx"). To fix these problems I followed the article "Proxying a UI using Knox" to create a rewrite filter rule and apply it to the "response.body" of the "text/html" pages. These are the needed changes in service.xml:

  <routes>
    <route path="/hbase/webui/">
      <rewrite apply="HBASEUI/hbase/inbound/master/root" to="request.url"/>
+     <rewrite apply="HBASEUI/hbase/outbound/master/filter" to="response.body"/>
    </route>
    <route path="/hbase/webui/**">
      <rewrite apply="HBASEUI/hbase/inbound/master/path" to="request.url"/>
      <rewrite apply="HBASEUI/hbase/outbound/headers" to="response.headers"/>
+     <rewrite apply="HBASEUI/hbase/outbound/master/filter" to="response.body"/>
    </route>
    <route path="/hbase/webui/**?**">
      <rewrite apply="HBASEUI/hbase/inbound/master/query" to="request.url"/>
@@ -30,9 +32,11 @@
    </route>
    <route path="/hbase/webui/regionserver/**?{host}?{port}">
      <rewrite apply="HBASEUI/hbase/inbound/regionserver/home" to="request.url"/>
+     <rewrite apply="HBASEUI/hbase/outbound/master/filter" to="response.body"/>
    </route>
    <route path="/hbase/webui/master/**?{host}?{port}">
      <rewrite apply="HBASEUI/hbase/inbound/master/home" to="request.url"/>
+     <rewrite apply="HBASEUI/hbase/outbound/master/filter" to="response.body"/>
    </route>
    <route path="/hbase/webui/logs?**">
      <rewrite apply="HBASEUI/hbase/outbound/headers" to="response.headers"/>

And then add these rules to the end of rewrite.xml:

  <rule dir="OUT" name="HBASEUI/hbase/outbound/master/filter/home">
    <rewrite template="{$frontend[path]}/hbase/webui/"/>
  </rule>
  <rule dir="OUT" name="HBASEUI/hbase/outbound/master/filter/jmx">
    <rewrite template="{$frontend[path]}/hbase/webui/jmx"/>
  </rule>
  <filter name="HBASEUI/hbase/outbound/master/filter">
    <content type="text/html">
      <apply path="/jmx" rule="HBASEUI/hbase/outbound/master/filter/jmx"/>
      <apply path="/" rule="HBASEUI/hbase/outbound/master/filter/home"/>
    </content>
  </filter>

Now finally all the header links are OK on all the pages (including the internal ones for the RegionServer nodes' status) and the whole HBase UI seems to be working as expected. The only remaining problem seems to be with some very internal and rarely used links in the "Regions" section inside the RegionServer subpages (this would require more rewrite tweaks). This worked for me and I hope it will help someone make the HBase UI usable through Knox.
12-22-2018
11:35 PM
In an Ambari HDP 2.6.5 install, after configuring a Knox topology to expose the HBase UI with the following tags:

  <service>
    <role>HBASEUI</role>
    <url>http://{{hbase_master_host}}:16010</url>
  </service>

you will be able to access the HBase UI at the URL https://knoxserver:8443/gateway/ui/hbase/webui/, but many of the links in the top menu bar, for example "Procedures" or "Local Logs", will point to the wrong URL without the gateway prefix. For example the "Procedures" menu will point to "https://knoxserver:8443/procedures.jsp" and give you a "Not Found" error. How can this be fixed?
12-02-2018
04:13 PM
I found problems trying to validate the information in this article, and after doing my own research I have to say that it's inaccurate and in some respects simply wrong. First of all, it must be clear that the "ranger.ldap.*" set of parameters, which should be configured in Ambari under "Ranger >> Configs >> Advanced >> Ldap Settings" and "Advanced ranger-admin-site", relate only to Ranger Admin UI authentication and have nothing to do with Ranger Usersync (different properties, different code, different daemon), which must be configured entirely in the "Ranger >> Configs >> Ranger User Info" section. This article mixes the two sets of LDAP-related configurations for two different Ranger components, which is confusing and incorrect. Everything I state here may be verified by looking at the source code of the following classes from the "ranger" and "spring-ldap" projects on GitHub:

  apache.ranger.security.handler.RangerAuthenticationProvider
  org.springframework.security.ldap.authentication.BindAuthenticator
  org.springframework.security.ldap.authentication.LdapAuthenticationProvider/AbstractLdapAuthenticationProvider

Talking about "Ranger Admin LDAP authentication", the only two parameters you will need are the following:

  ranger.ldap.url = ldap://ldap-host:389
  ranger.ldap.user.dnpattern = uid={0},ou=users,dc=example,dc=com

This is because the RangerAuthenticationProvider class first uses the method "getLdapAuthentication()", which in turn uses Spring's BindAuthenticator class with default parameters except for the previous properties. This tries to do a BIND as the DN obtained from "ldapDNPattern", replacing "{0}" with the username, and if this succeeds, authentication is granted to the user and nothing else is used!! The only case where the remaining "ranger.ldap.*" parameters are used is when "getLdapAuthentication()" fails, for example because of a wrong value in "ranger.ldap.user.dnpattern". When the call to "getLdapAuthentication()" fails, Ranger next tries the more specialized method "getLdapBindAuthentication()", and it is this method that uses all the other "ranger.ldap.{bind|user|group}.*" properties! This time BindAuthenticator is configured to bind as the LDAP manager with the provided "ranger.ldap.bind.dn/password", and then searches for the user entry and their groups using the other properties, etc. But even in this case there is another IMPORTANT error in the article above: the pattern in "ranger.ldap.group.searchfilter" is wrong, because it is handled by the class DefaultLdapAuthoritiesPopulator, which replaces '{0}' with the Distinguished Name (DN) of the user (NOT the username); it is '{1}' that is replaced with the username. So, if you want to use the configuration above, you should replace {0} with {1}, or even better just use "member={0}" as your group searchfilter.

Regarding the "authorization" phase: in both of the previous methods, if authentication succeeds, the group/role authorities for the user are searched from LDAP (using Spring's DefaultLdapAuthoritiesPopulator class), but ONLY if both rangerLdapGroupSearchBase AND rangerLdapGroupSearchFilter are defined and not empty. But even in this case (I have not tested this yet, but looking at the code it seems clear) I'm almost sure that the list of "grantedAuths" obtained from LDAP is never used by Ranger, because at the end of both "getLdap*Authentication()" methods the grantedAuths list is overwritten by the following chain of calls:

  authentication = getAuthenticationWithGrantedAuthority(authentication)
  >> List<GrantedAuthority> grantedAuths = getAuthorities(authentication.getName()); // pass username only
  >>>> roleList = (org.apache.ranger.biz.UserMgr) userMgr.getRolesByLoginId(username); // overwrite from Ranger DB

I don't know if this is the desired behavior or a bug in the current RangerAuthenticationProvider that will be changed in the future (otherwise it's not clear why the LdapAuthoritiesPopulator is used upstream), but it's the way it seems to be done right now. In conclusion, for Ranger Admin authentication, IF you just provide the right values for the "ranger.ldap.url" and "ranger.ldap.user.dnpattern" properties, none of the remaining "ranger.ldap.group.*" parameters will be used, and the user roles will be managed by Ranger from the "Admin UI -> Users" interface.
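For completeness, this is a sketch of the properties that come into play only in the "getLdapBindAuthentication()" fallback described above (all DNs, hosts and the password are example values for an illustrative directory layout):

```properties
# Manager account used for the initial bind and for the user/group searches:
ranger.ldap.url=ldap://ldap-host:389
ranger.ldap.bind.dn=cn=admin,dc=example,dc=com
ranger.ldap.bind.password=changeit
# Where and how to find the user entry ({0} = username here):
ranger.ldap.user.searchbase=ou=users,dc=example,dc=com
ranger.ldap.user.searchfilter=(uid={0})
# Group lookup; per DefaultLdapAuthoritiesPopulator, {0} is the user's DN,
# {1} would be the username:
ranger.ldap.group.searchbase=ou=groups,dc=example,dc=com
ranger.ldap.group.searchfilter=(member={0})
```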
10-29-2018
06:48 PM
This information (like much else) is wrong in the official HDP Security course from Hortonworks. The HDFS Encryption presentations of the course state that, to create an HDFS admin user able to manage encryption zones, it is enough to set the following (copied verbatim here):

  dfs.cluster.administrators=hdfs,encrypter
  hadoop.kms.blacklist.DECRYPT_EEK=hdfs,encrypter
10-27-2018
09:58 PM
Nice work Greg, a wonderful, clear and very detailed article! Based on this and other articles of yours in this forum, it seems clear that the people at Hortonworks should seriously consider giving you greater participation in the elaboration and supervision of the (currently outdated, fuzzy, inexact and overall low-quality) material presented in their expensive official courses. The difference in quality between your work and the work published there is so great that it is sometimes embarrassing. All my respect, sir!!