Member since: 12-14-2015
Posts: 89
Kudos Received: 7
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3112 | 08-20-2019 04:30 AM
 | 3263 | 08-20-2019 12:29 AM
 | 2186 | 10-18-2018 05:32 AM
 | 3410 | 12-15-2016 10:52 AM
 | 944 | 11-10-2016 09:21 AM
06-30-2016 10:34 AM
Hi community, working through the documentation, I stumbled upon some pages regarding Ranger plugins when enabling Kerberos (Link). The documentation states a requirement to create some extra users for lookup purposes (such as rangerhdfslookup) for HDFS, HBase, Hive and Knox. The HDP documentation is the only place I have found this information. Is this a mandatory requirement? Why is this user needed? Hope you can clear this up for me. Best regards, Benjamin
Labels:
- Apache Ranger
06-24-2016 02:50 PM
@Robert Levas I just tried changing the permissions and restarting the cluster. As it turns out, some Ambari services rely on using that keytab with their own service user on every restart. In particular, the WebHCat Server does not start; it fails executing the following command:
Execute['/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@REALM.DE;'] {'path': ['/bin'], 'user': 'hcat'}
All other services appear to work correctly.
06-24-2016 02:14 PM
Thank you for the insights! This sounds reasonable but still a bit risky. If the initial creation of the file structure is the only reason for the hdfs keytab to be group-readable, I'm considering changing the permissions to 400 manually and setting them back to 440 whenever new components are installed, as sketched below. That sounds more secure to me than having to be extremely vigilant about securing the hadoop group. Any thoughts on that approach?
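For reference, the manual tightening I have in mind would look roughly like this (a sketch only, using the default Ambari keytab path):

```bash
# Tighten the headless keytab so only the hdfs user can read it (Ambari default is 440 hdfs:hadoop)
chmod 400 /etc/security/keytabs/hdfs.headless.keytab

# Before installing new components via Ambari, temporarily restore group read so the
# initial file-structure creation can still kinit with it, then tighten it again afterwards:
chmod 440 /etc/security/keytabs/hdfs.headless.keytab
# ... add / restart components ...
chmod 400 /etc/security/keytabs/hdfs.headless.keytab
```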
06-24-2016 12:10 PM
Hey community,
while setting up Kerberos on an HDP 2.3.4 cluster with Ambari, I was wondering the following:
Why does Ambari set the Linux permissions for headless keytabs (e.g. /etc/security/keytabs/hdfs.headless.keytab) to 440 rather than 400?
I feel that this creates a security risk, as everyone in the hadoop group could impersonate hdfs and thus effectively work as a superuser on the cluster. Or is the "hadoop" group supposed to be a tightly protected group used only by Hadoop service accounts?
Hope you can clear this up for me.
Thanks, Benjamin
edit: It seems to be connected to AMBARI-13695, created by @Robert Levas
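To illustrate the concern, here is a minimal sketch (paths follow the default HDP layout; the principal name and realm are from my cluster, yours will differ): any local account in the hadoop group can read a 440 keytab and authenticate as hdfs.

```bash
# Assumption: default HDP keytab path and a principal of the form hdfs-<cluster>@<REALM>.
ls -l /etc/security/keytabs/hdfs.headless.keytab      # -r--r----- 1 hdfs hadoop ... (mode 440)
klist -kt /etc/security/keytabs/hdfs.headless.keytab  # show the principals stored in the keytab
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@REALM.DE
hdfs dfs -ls /                                        # now effectively acting as the HDFS superuser
```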
Labels:
- Apache Ambari
05-13-2016 07:19 AM
@Alex Miller This makes no difference. The page https://172.18.10.163:8443/gateway/default/yarn/ still loads, but static resources and pages like https://172.18.10.163:8443/gateway/default/yarn/apps/ACCEPTED do not.
edit: I found the error. In my topology file I had previously added a custom stanza (role SERVICE-TEST) for which I had created no service definition. That made Knox behave strangely. After removing that block, the YARN UI works through Knox. Thanks, Alex
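For anyone hitting the same symptom, a quick way to spot a service role in a topology that has no matching service definition is to compare the roles declared in the topology with the service definitions shipped with Knox. A rough sketch; the paths assume an HDP-style install under /usr/hdp/current/knox-server, so adjust them for your layout:

```bash
# All <role> elements in the "default" topology (this also lists provider roles, not only services)
grep -o '<role>[^<]*</role>' /usr/hdp/current/knox-server/conf/topologies/default.xml

# Service definitions Knox actually knows about (directory names are typically the lower-cased
# role name, e.g. yarnui) -- a service role declared above but missing here is suspect
ls /usr/hdp/current/knox-server/data/services/
```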
05-12-2016 09:30 AM
Thanks for sharing that! I followed your instructions and did the same for yarnui (adapting the paths slightly).
The root and logs redirections work, but many other redirections (especially those ending in {**}) are not picked up by Knox.
Example: when calling https://172.18.10.163:8443/gateway/default/yarn, the site loads, but the static resources do not. /var/log/knox/gateway.log shows:
2016-05-12 11:13:34,109 DEBUG hadoop.gateway (GatewayFilter.java:doFilter(110)) - Received request: GET /yarn
2016-05-12 11:13:34,147 INFO hadoop.gateway (KnoxLdapRealm.java:getUserDn(556)) - Computed userDn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org using dnTemplate for principal: guest
2016-05-12 11:13:34,227 INFO hadoop.gateway (AclsAuthorizationFilter.java:init(62)) - Initializing AclsAuthz Provider for: YARNUI
2016-05-12 11:13:34,228 DEBUG hadoop.gateway (AclsAuthorizationFilter.java:init(70)) - ACL Processing Mode is: AND
2016-05-12 11:13:34,229 DEBUG hadoop.gateway (AclParser.java:parseAcls(59)) - No ACLs found for: YARNUI
2016-05-12 11:13:34,230 INFO hadoop.gateway (AclsAuthorizationFilter.java:doFilter(85)) - Access Granted: true
2016-05-12 11:13:34,434 DEBUG hadoop.gateway (UrlRewriteProcessor.java:rewrite(155)) - Rewrote URL: https://172.18.10.163:8443/gateway/default/yarn, direction: IN via implicit rule: YARNUI/yarn/inbound/root to URL: http://resourcemanagerhost.local:8088/cluster
[...]
2016-05-12 11:13:35,074 DEBUG hadoop.gateway (GatewayFilter.java:doFilter(110)) - Received request: GET /yarn/static/jquery/jquery-ui-1.9.1.custom.min.js
2016-05-12 11:13:35,417 DEBUG hadoop.gateway (GatewayFilter.java:doFilter(110)) - Received request: GET /yarn/static/jquery/jquery-1.8.2.min.js
That's the end of the file; nothing is logged after that.
I'm using HDP 2.3.4.7 with Knox 0.6.0.
I would appreciate your help, @Alex Miller or @Kevin Minder.
Thanks!
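In case it helps with reproducing this outside the browser: the failing static-resource request can be checked directly against the gateway with curl (host, port and the guest/guest-password credentials are just the demo-LDAP defaults from my test setup):

```bash
# The page itself loads through Knox ...
curl -ik -u guest:guest-password 'https://172.18.10.163:8443/gateway/default/yarn/'

# ... while a static resource from the log above can be requested the same way; a 404 or empty
# body here would point at a missing/unmatched rewrite rule rather than a browser problem
curl -ik -u guest:guest-password 'https://172.18.10.163:8443/gateway/default/yarn/static/jquery/jquery-1.8.2.min.js'
```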
05-04-2016 08:42 AM
Hey guys, after finding out where to look up the time an HDFS checkpoint last occurred in an HA environment (the NameNode's JMX), I would like to integrate that information into Ambari. Is there a way to create an Ambari widget that tells me how long ago the last checkpoint happened, like "8h 30m ago"? I see that I can create a widget showing the LastCheckpointTime from JMX (in epoch format), but that number is not really intuitive for our administrators. Even a conversion from epoch to a human-readable time would be awesome. Thanks and best, Benjamin
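In the meantime, something like the following turns the JMX value into an "8h 30m ago" style string, e.g. for a cron-driven check or a script alert. This is only a sketch: the NameNode address is a placeholder, and the FSNamesystem bean is my assumption for where LastCheckpointTime is exposed on HDP 2.x.

```bash
#!/usr/bin/env bash
# Read LastCheckpointTime (milliseconds since epoch) from the NameNode JMX servlet
# and print how long ago the last checkpoint happened.
NN="http://namenode-host:50070"   # placeholder for the active NameNode HTTP address

last_ms=$(curl -s "${NN}/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" \
  | grep -o '"LastCheckpointTime" *: *[0-9]*' | grep -o '[0-9]*$')

now_ms=$(( $(date +%s) * 1000 ))
age_min=$(( (now_ms - last_ms) / 60000 ))
printf 'Last checkpoint: %dh %dm ago\n' $(( age_min / 60 )) $(( age_min % 60 ))
```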
Labels:
- Apache Ambari
05-03-2016 05:15 PM
Thanks, that clears it up! A follow-up question: is there a way to create an Ambari widget that tells me how long ago the last checkpoint happened? I see that I can create a widget showing the LastCheckpointTime from JMX, but that number is not really intuitive for our administrators.
05-02-2016 09:52 AM
1 Kudo
Hey guys, I have a question concerning monitoring/operating HDFS: the time since the last checkpoint is one of the metrics I want to keep an eye on, i.e. the time since the edits and fsimage were last consolidated into a new fsimage. In a non-HA environment, you can easily find this on the Secondary NameNode web UI at snn-address:50090. My question is where to find this information in an HA environment. Neither the active NameNode nor the standby NameNode seems to show similar information. Best, Benjamin
Labels:
- Apache Hadoop
12-16-2015 03:00 PM
Thanks for those insights. I recently got into trouble by having the same UUID on all nodes, so I learned that the hard way 😉 Just for completeness: the CM agent UUID is stored in /var/lib/cloudera-scm-agent/uuid. Do you have additional hints like this for automating cluster deployment with CM and custom scripting?
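For anyone else automating deployments from a common image, the step that avoids the duplicate-UUID trap is simply to remove that file on each clone before the agent first reports in, so a fresh UUID gets generated on start. A sketch (verify the behaviour on your CM version):

```bash
# On each cloned node, before it registers with Cloudera Manager:
service cloudera-scm-agent stop
rm -f /var/lib/cloudera-scm-agent/uuid   # the agent writes a new UUID on its next start
service cloudera-scm-agent start
```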