Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14844 | 01-18-2018 08:38 AM
 | 1585 | 05-11-2017 06:50 PM
 | 9202 | 04-28-2017 11:00 AM
 | 3444 | 04-12-2017 01:36 AM
 | 2847 | 02-14-2017 05:11 AM
03-11-2018 10:21 PM
Hi, I am having exactly the same issue with the plugins after enabling Ranger SSL.
02-10-2016 01:23 PM
1 Kudo
Hi @Artem Ervits, @Neeraj Sabharwal, in the end Ranger policies for Hive-on-top-of-HBase work as expected, by defining a Hive policy and an HBase policy for the tables involved.

The issue I had was the following, although I really don't understand why it behaves this way: switching Ranger back from HTTPS to HTTP left the policy_mgr_url on the HBase RegionServers starting with https://<ranger-admin>:<port>, so the RegionServers complained that they could not fetch the latest Ranger policies due to an SSL error. That is why my HBase policies were never applied: they were never fetched by the RegionServers.

Now the point that confuses me: why the RegionServers? On the HBase Master nodes there was no error; they had received the latest HBase policies, and accordingly the agent heartbeat in the Ranger audit had been updated (which is why I thought everything was fine). Isn't the Ranger plugin supposed to behave like it does in HDFS, where it just hooks into the "master" process, the NameNode? What is the role of Ranger in the RegionServer here?
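For anyone hitting the same thing: a quick way to confirm the diagnosis is to check which policy manager URL the plugin on each RegionServer actually uses. Here is a minimal Python sketch of that check; the config file path and the property name are what a typical HDP install of the Ranger HBase plugin uses, so treat both as assumptions and verify them on your stack.

```python
#!/usr/bin/env python
# Run on a RegionServer host: print the policy manager URL the Ranger HBase
# plugin is configured with, and warn if it still points at HTTPS after
# Ranger was switched back to HTTP. Path and property name are assumptions
# based on a typical HDP layout (Ranger >= 0.5).
import xml.etree.ElementTree as ET

CONF = "/etc/hbase/conf/ranger-hbase-security.xml"
PROP = "ranger.plugin.hbase.policy.rest.url"

root = ET.parse(CONF).getroot()
for prop in root.findall("property"):
    if prop.findtext("name") == PROP:
        url = (prop.findtext("value") or "").strip()
        print("%s = %s" % (PROP, url))
        if url.lower().startswith("https://"):
            print("WARNING: plugin still fetches policies over HTTPS; "
                  "fix the URL (or the SSL setup) and restart the RegionServer.")
```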
02-06-2016 11:30 AM
@Gerd Koenig Nice! Please pick the best answer and accept it, so that we can all go home 😛
01-28-2016 08:25 PM
1 Kudo
Hi @Neeraj Sabharwal, thanks... a restart brought Ambari back to life... right after I opened a ticket 😉
01-27-2016 12:28 PM
Hi @Sai ram, it looks like you are using Ranger and do not have a Ranger HDFS policy that allows the user hive to write to "/flume". On the one hand, the solution from @Neeraj Sabharwal grants the permissions at the HDFS level and solves your problem; on the other hand, if you want to go with Ranger, I'd recommend creating/adjusting Ranger HDFS policies for the folders/users in question (and doing at least a chmod 700 at the HDFS level itself, to prevent folders/files from being accessed "by accident"). A scripted version of such a policy is sketched below.
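If you prefer to script the Ranger side instead of clicking through the UI, something along these lines should work against Ranger's public v2 REST API. The Ranger URL, the credentials, and the HDFS service name ("cluster1_hadoop") are placeholders I made up, and the payload follows the v2 policy format; double-check both against your Ranger version.

```python
#!/usr/bin/env python
# Sketch: create a Ranger HDFS policy that lets user `hive` write to /flume,
# via Ranger's public v2 REST API. URL, credentials and service name are
# hypothetical placeholders; adapt them to your environment.
import requests

RANGER = "http://ranger-admin.example.com:6080"
AUTH = ("admin", "admin")

policy = {
    "service": "cluster1_hadoop",            # hypothetical HDFS repo name in Ranger
    "name": "flume-dir-for-hive",
    "isEnabled": True,
    "resources": {"path": {"values": ["/flume"], "isRecursive": True}},
    "policyItems": [{
        "users": ["hive"],
        "accesses": [{"type": t, "isAllowed": True}
                     for t in ("read", "write", "execute")],
    }],
}

r = requests.post("%s/service/public/v2/api/policy" % RANGER,
                  auth=AUTH, json=policy)
r.raise_for_status()
print("Created policy id", r.json().get("id"))
```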
01-27-2016 05:50 PM
1 Kudo
Hi @mkataria, sure, I'll try my best. First click on the service 'HDFS' in Ambari. In the next dialog, create one config group per NodeManager, give it a corresponding name, and assign that node to the config group. Then go back to the "general" HDFS config page (picture 1), select a config group, and adjust the log destination for that particular NodeManager node (== config group). ...and restart HDFS 😉 A scripted variant is sketched below. Regards, Gerd
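The same per-node setup can also be scripted through Ambari's REST API instead of the UI. The sketch below is only an illustration: the cluster name, host name, and the hadoop-env property being overridden are made-up examples, and the exact payload shape can differ between Ambari versions, so compare it with the API docs of your release.

```python
#!/usr/bin/env python
# Sketch: create an Ambari config group for a single node and override the
# log directory there. All names and the overridden property are examples;
# Ambari requires the X-Requested-By header on write requests.
import requests

AMBARI = "http://ambari.example.com:8080"
AUTH = ("admin", "admin")
CLUSTER = "c1"

body = [{
    "ConfigGroup": {
        "cluster_name": CLUSTER,
        "group_name": "logdir-override-node1",
        "tag": "HDFS",
        "description": "Per-node log destination override",
        "hosts": [{"host_name": "node1.example.com"}],
        "desired_configs": [{
            "type": "hadoop-env",
            "tag": "node1-logdir",
            "properties": {"hdfs_log_dir_prefix": "/data/logs/hadoop"},
        }],
    }
}]

r = requests.post("%s/api/v1/clusters/%s/config_groups" % (AMBARI, CLUSTER),
                  auth=AUTH, json=body,
                  headers={"X-Requested-By": "ambari"})
r.raise_for_status()
print("Config group created:", r.status_code)
```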
01-25-2016 02:23 PM
@Robin Dong Hi Robin, please hit accept on the best answer to close the thread, as per best practice.
01-24-2016 08:03 PM
Hi @Ancil McBarnett, thank you so much! ... stupid me 😉
01-21-2016 03:17 PM
1 Kudo
Thanks @Neeraj. Just to give you feedback on another 'solution': in the meantime I got two more datanodes back (they had been failing at installation time). After adding those hosts and restarting HDFS, the corrupt-block error disappeared without any further file deletion or HDFS re-formatting. Regards, Gerd
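For reference, this is how such a recovery can be verified from the command line, wrapped in a small Python helper. It assumes only that the hdfs CLI is on the PATH and that the calling user is allowed to run fsck; the exact summary line may vary slightly between Hadoop versions.

```python
#!/usr/bin/env python
# Verify that HDFS no longer reports corrupt blocks after the datanodes
# rejoined. Assumes the `hdfs` CLI is on PATH; the summary string checked
# below matches common Hadoop 2.x output but may differ in other versions.
import subprocess

out = subprocess.check_output(
    ["hdfs", "fsck", "/", "-list-corruptfileblocks"]).decode()
print(out)
if "has 0 CORRUPT files" in out:
    print("No corrupt blocks left - the missing replicas came back.")
```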