Member since: 12-03-2016
Posts: 91
Kudos Received: 27
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 12396 | 08-27-2019 10:45 AM
 | 3520 | 12-24-2018 01:08 PM
 | 12573 | 09-16-2018 06:45 PM
 | 2774 | 12-12-2016 01:44 AM
04-01-2019 03:23 PM
If you set these properties by hand, take into account that in HDP 3.x the path for the Hive warehouse has changed to /warehouse/tablespace/managed/hive. Also, in the Ambari configuration for Druid on HDP 3.1, the property is set to hive.druid.storage.storageDirectory = {{druid_storage_dir}}, which expands to /apps/druid/warehouse, a different path from the Hive warehouse.
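As a quick sanity check, you can inspect both directories on HDFS directly; a minimal sketch, using the paths quoted above (they may differ on your cluster):

# Hive managed-table warehouse in HDP 3.x
hdfs dfs -ls /warehouse/tablespace/managed/hive
# Druid storage directory as set by Ambari ({{druid_storage_dir}} expanded)
hdfs dfs -ls /apps/druid/warehouse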
07-19-2018 12:43 AM
This is a useful article, but it would be better if it explained what the main configuration options actually do, instead of listing the author's interpretation of the best use case for each combination. If we knew what each of these few options does and how it affects the matching of users and groups from LDAP, I'm pretty sure most of us IT professionals could work out for ourselves which combination fits our use case.

Indeed, that is a recurrent problem with the Ranger documentation in HDP, and with many other aspects of the security components: you will usually find a "subjective" interpretation of which combination of settings is best for this or that scenario, but an objective description of how each option behaves is much harder to find, and sometimes the only way to find out is to read the source code.
08-31-2018 02:49 AM
Yes, you will have to use basically the same configuration as when combining OpenLDAP with an MIT KDC for authentication. The only difference is that you will be using AD as your LDAP server instead of OpenLDAP, and of course you will have to account for the different user/group schemas (sAMAccountName vs. uid, etc.).
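To illustrate that schema difference, here is a minimal sketch using Hadoop's LdapGroupsMapping properties; the AD filter values shown are the common defaults, and the URL is a placeholder you must adapt:

hadoop.security.group.mapping = org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.url = ldap://ad.example.com:389
# AD schema
hadoop.security.group.mapping.ldap.search.filter.user = (&(objectClass=user)(sAMAccountName={0}))
hadoop.security.group.mapping.ldap.search.filter.group = (objectClass=group)
hadoop.security.group.mapping.ldap.search.attr.member = member
# OpenLDAP/posix equivalents would be:
# (&(objectClass=posixAccount)(uid={0})), (objectClass=posixGroup), memberUid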
07-25-2018 07:17 PM
Read also about a limitation of Python...
01-17-2019 04:52 PM
If you want to be sure which of the two components is updating the password, consider checking the md5sum of the password file before and after you change it (and make sure to change it to a different value).
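A sketch of that check (the file path is hypothetical; point it at whichever file holds the password in your setup):

md5sum /etc/myservice/password.dat   # record the checksum
# ... trigger the password change in one component ...
md5sum /etc/myservice/password.dat   # a changed checksum means that component rewrote the file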
12-24-2018 05:38 PM
I followed this with HDP 2.6.5 and the HBase UI became accessible at the given URL, but it had many errors and broken links inside. I posted a question about how to fix this, and then the answer resolving most of these issues, here: https://community.hortonworks.com/questions/231948/how-to-fix-knox-hbase-ui.html You are welcome to test this and include these fixes in your article if you find it appropriate. Best regards.
12-23-2016 05:46 PM
@Jay SenSharma Forgive me, I used the browser's debug tools and found out by myself that I was using the wrong URL (a copy/paste error). I replaced "services" with "requests", as it should be, and now it's working as you stated. Sorry, and thank you.
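For anyone hitting the same copy/paste mistake, a sketch of the corrected call (host, cluster name, and credentials are placeholders):

# wrong: .../api/v1/clusters/mycluster/services
# right:
curl -u admin:password 'http://ambari-host:8080/api/v1/clusters/mycluster/requests'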
12-12-2016 01:44 AM
It seems I have finally fixed this: it was really a timeout problem, and the higher values I had set earlier were not enough. I increased views.request.read.timeout to 40 seconds:
views.request.read.timeout.millis = 40000
and now it always works (so far).
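For reference, a sketch of where that property lives (the path is the usual Ambari Server default; adjust if yours differs):

# /etc/ambari-server/conf/ambari.properties
views.request.read.timeout.millis=40000

# restart Ambari Server so the change takes effect
ambari-server restart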
09-13-2018 12:53 PM
At least for version 2.6.3 and above, the section "Running import script on kerberized cluster" is wrong. You don't need to provide any of the options (properties) indicated (except maybe the debug one, if you want debug output), because they are automatically detected and included in the script. Also, at least in 2.6.5, running the script directly on a Kerberized cluster will fail because of the CLASSPATH generated inside the script. I had to edit it, replacing many individual JAR files with a glob over their parent folder, in order for the command to run without error. If you have this problem, see the answer to the "Atlas can't see Hive tables" question.
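To illustrate the kind of edit I mean, a sketch (the JAR names and paths are illustrative; the generated script on your cluster will list different ones):

# before (excerpt from the generated script): each JAR listed one by one
# CLASSPATH=/usr/hdp/current/atlas-server/hook/hive/atlas-plugin-classloader-0.8.0.jar:<many more JARs>
# after: glob the parent folders instead of listing individual JAR files
CLASSPATH="/usr/hdp/current/atlas-server/hook/hive/*:/usr/hdp/current/atlas-server/hook/hive/atlas-hive-plugin-impl/*"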
09-13-2018 01:29 AM
I have experienced this problem after changing Ambari Server to run as the non-privileged user "ambari-server". In my case I can see the following in the logs (/var/log/ambari-server/ambari-server.log):

12 Sep 2018 22:06:57,515 ERROR [ambari-client-thread-6396] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations

This error happens because in CentOS/RedHat, /var/run is really a symlink to /run, which is a tmpfs filesystem mounted at boot time from RAM. So if I manually create the folder with the required privileges, it won't survive a reboot, and because the unprivileged user running Ambari Server is unable to create the required directory, the error occurs. I was able to partially fix this using the systemd-tmpfiles feature, by creating a file /etc/tmpfiles.d/ambari-server.conf with the following content:

d /run/ambari-server 0775 ambari-server hadoop -
d /run/ambari-server/stack-recommendations 0775 ambari-server hadoop -

With this file in place, running "systemd-tmpfiles --create" will create the folders with the required privileges. According to the following Red Hat documentation, this should run automatically at boot time to set everything up: https://developers.redhat.com/blog/2016/09/20/managing-temporary-files-with-systemd-tmpfiles-on-rhel7/ However, sometimes this doesn't happen (I don't know why) and I have to run the previous command manually to fix the error.
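A sketch of applying and verifying the fix by hand (same paths as above):

# process only this config file and create the directories now
systemd-tmpfiles --create /etc/tmpfiles.d/ambari-server.conf
# confirm the ownership and mode
ls -ld /run/ambari-server /run/ambari-server/stack-recommendations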