
How does Ranger HDFS plugin avoid reading tampered local policy cache file?

Contributor

I modified (tampered with) the local policy cache file written by the Ranger HDFS plugin to test its behavior against illegal or malicious operations, but the effective authorization rules did not change. For example:

1. User "ohide" cannot read /user/ohide.

2. Allow user "ohide" to read /user/ohide via a Ranger policy.

3. Confirm that user "ohide" can now read /user/ohide.

4. Delete the entry added in step 2 from the local policy cache file on the NameNode host (where the Ranger HDFS plugin runs).

5. Try to read /user/ohide as user "ohide"; the read still succeeds.

I think this behavior is appropriate, but I would like to know how the plugin avoids reading a tampered policy cache file. Does anyone know the answer to my question?

1 ACCEPTED SOLUTION

Master Guru

The plugin connects to Ranger to fetch the updated policy file and writes it to a local cache file in case it has to restart. It doesn't monitor the local file for changes, so it never reloads your edits while it is running; there is simply no reason for it to.

I assume that if you stop Ranger (to make sure it cannot be contacted for an update) and then restart HDFS, the plugin would pick up your changes. (I didn't try it, but it sounds like a reasonable assumption.) So to exploit this, you would need to force an HDFS restart and block access to Ranger.
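
To make that mechanism concrete, here is a minimal Java sketch of the poll-and-cache pattern described above. The class name, endpoint URL, and cache path are all illustrative assumptions, not Ranger's actual code; the point is that the cache file is only ever written by the poller and read back on a startup where the server is unreachable.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the poll-and-cache pattern, not Ranger's real classes.
public class PolicyCacheSketch {
    private static final URI POLICY_URL =
            URI.create("http://ranger-admin:6080/service/plugins/policies/download/hdfs");
    private static final Path CACHE_FILE =
            Paths.get("/etc/ranger/hdfs/policycache/policies.json"); // illustrative path

    private final HttpClient http = HttpClient.newHttpClient();
    private volatile String activePolicies; // in-memory rules actually used for authorization

    // On startup: prefer the server; fall back to the local cache only if unreachable.
    public void initialize() throws IOException {
        try {
            refreshFromServer();
        } catch (IOException | InterruptedException e) {
            // This fallback is the only time the local file is read, which is
            // why edits to it go unnoticed while the process keeps running.
            activePolicies = Files.readString(CACHE_FILE);
        }
    }

    // Called periodically by a poller thread: fetch, apply in memory, then persist.
    public void refreshFromServer() throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(POLICY_URL).GET().build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        activePolicies = response.body();
        // Write to a temp file and move it atomically so a crash never
        // leaves a half-written cache behind.
        Path tmp = CACHE_FILE.resolveSibling("policies.json.tmp");
        Files.writeString(tmp, activePolicies);
        Files.move(tmp, CACHE_FILE, StandardCopyOption.ATOMIC_MOVE);
    }
}
```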

Now, how do you stop people from tampering with it? Make sure random users cannot become root or join the hadoop group on your system; the policy cache files can only be written by the hive, ranger, etc. service users. Once somebody is root on any node of the cluster, you will have a hard time stopping them from doing anything, especially if they can log in to the master servers. On a client node you might still have a chance.
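
As a purely hypothetical hardening sketch (Ranger is not documented to do this), a plugin could also refuse to load the fallback cache unless the file is owned by the expected service user and writable by nobody else. The class name and paths below are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

// Hypothetical defensive check, not documented Ranger behavior.
public class CacheFileGuard {
    // Trust the fallback cache only if it is owned by the expected service
    // user and is not group- or world-writable.
    public static boolean isTrustworthy(Path cacheFile, String expectedOwner)
            throws IOException {
        String owner = Files.getOwner(cacheFile).getName();
        if (!owner.equals(expectedOwner)) {
            return false; // file was written (or re-created) by some other user
        }
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(cacheFile);
        return !perms.contains(PosixFilePermission.GROUP_WRITE)
                && !perms.contains(PosixFilePermission.OTHERS_WRITE);
    }

    public static void main(String[] args) throws IOException {
        Path cache = Paths.get("/etc/ranger/hdfs/policycache/policies.json"); // illustrative
        System.out.println("cache trusted: " + isTrustworthy(cache, "hdfs"));
    }
}
```

Of course, this only raises the bar: as noted above, root can change a file's owner and permissions too, so host-level access control remains the real defense.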


2 REPLIES


Contributor
@Benjamin Leonhardi

Thank you very much for your answer! Now I understand!