
Ranger HDFS policy not taking effect even after restart

Rising Star

Steps done:

  1. Disabled “HDFS Global Allow”.
  2. Created a new policy for the Marketing group (Read/Execute enabled) on "/apps/hive/warehouse/xademo.db/customer_details".

PS: The policy sync was successful, as verified in Ranger -> Audit -> Plugins.

Problem

A user from a different group (e.g. user it1 from the IT group) was still able to drop the Hive table "customer_details".

Troubleshooting done so far:

hadoop fs -ls /apps/hive/warehouse/xademo.db

drwxrwxrwx - hive hdfs 0 2016-03-14 14:52 /apps/hive/warehouse/xademo.db/customer_details

It seems that HDFS permissions are taking precedence over the Ranger policies?

1 ACCEPTED SOLUTION


This is a common point of confusion, so I did some tests which I hope will clarify.

TEST 1:

Setup:

1. Create a user directory /user/myuser with 777 permissions in HDFS

2. Make a policy in Ranger that allows user mktg1 only read access to /user/myuser
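
For reference, the HDFS side of this setup would look something like the following (a sketch; run it as a user with HDFS admin rights, e.g. hdfs):

hdfs dfs -mkdir /user/myuser      # create the test directory
hdfs dfs -chmod 777 /user/myuser  # open the POSIX permissions completely (the TEST 1 condition)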

Result:

1. It always allows mktg1 to write.

2. Ranger Audit shows "Access Enforcer" as hadoop-acl.

This is expected behaviour.

EXPLANATION: The Ranger HDFS plugin keeps evaluating until it either runs out of options or something allows the access. So, in this case, it first checks the Ranger policy, sees that it does not allow the write, then falls back to the HDFS permissions, sees that they DO allow the write, and therefore allows the write.

In order to avoid this situation, you must completely lock down the filesystem permissions, e.g. with something like chmod 700. Then you can administer access via Ranger policies.
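
As a concrete sketch (the path is the one from this thread; run it as the hdfs superuser or the directory owner, and adjust the mode to your needs):

hdfs dfs -chmod -R 000 /apps/hive/warehouse/xademo.db/customer_details  # remove all POSIX access, so only Ranger policies can grant anything
hdfs dfs -ls /apps/hive/warehouse/xademo.db                             # the directory should now show d---------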

Ranger policies can only allow access; if nothing allows access (including by default HDFS permissions) then it will deny.

TEST 2:

Setup:

1. Create a user directory /user/myuser with 000 permissions in HDFS

2. Make a policy in Ranger that allows user mktg1 read+execute access to /user/myuser
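
The only HDFS-side difference from TEST 1 is the permission mode (again just a sketch, run with HDFS admin rights):

hdfs dfs -chmod 000 /user/myuser  # remove all POSIX access; only Ranger policies can now allow anything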

Result:

1. As the user it1:

[it1@sandbox conf]$ hadoop fs -ls /user/myuser 
ls: Permission denied: user=it1, access=READ_EXECUTE, inode="/user/myuser":hdfs:hdfs:d--------- 

2. As the user mktg1:

[mktg1@sandbox conf]$ hadoop fs -ls /user/myuser 
Found 10 items 
-rw-r--r-- 1 root hdfs 529 2015-06-24 12:30 /user/myuser/test.csv 

“Access Enforcer” is xasecure-acl in the Ranger Audit UI

3. As the user mktg1:

[mktg1@sandbox ~]$ hdfs dfs -put test.txt /user/myuser 
put: Permission denied: user=mktg1, access=WRITE, inode="/user/myuser":hdfs:hdfs:d--------- 

The file system permissions mean that no one is allowed to access the directory, but the Ranger policy allows mktg1 to read it (though not to write).


5 REPLIES

Super Guru
@AT

Ranger policies always take precedence; HDFS permissions are only checked afterwards, as a fallback.

You can disable the fallback method. Please check this: http://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/
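
For reference, the fallback to native HDFS permissions is controlled by the Ranger HDFS plugin property xasecure.add-hadoop-authorization (the exact file and property name may vary by version, so verify against the article above). On an HDP node you can check it with something like:

grep -A 1 "xasecure.add-hadoop-authorization" /etc/hadoop/conf/ranger-hdfs-security.xml
# a value of false means Ranger policies are the only source of authorization (no HDFS fallback)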

Expert Contributor

In practice, permissive HDFS permissions still win over a restrictive Ranger policy, because the plugin falls back to HDFS permissions when no Ranger policy allows the access.

A good start is to deny everything in HDFS and enable access in Ranger.


Expert Contributor

A simple example for better understanding. Nice, @Ana Gillan 🙂

Rising Star

@Ana Gillan @Sagar Shimpi

Thanks, I have a partial resolution: the Ranger Hive plugin applies only to HiveServer2 and not to the Hive CLI.

But given the HDFS permissions on the Hive table file below, how is user mktg1 able to query it using the Hive CLI?

[hive@sandbox ~]$ hadoop fs -ls /apps/hive/warehouse/xademo.db/customer_details/acct.txt

---------- 3 hive hdfs 1532 2016-03-14 14:52 /apps/hive/warehouse/xademo.db/customer_details/acct.txt

[mktg1@sandbox ~]$ hive

hive> use xademo;
OK
Time taken: 1.737 seconds

hive> select * from customer_details limit 10;

OK

PHONE_NUM   PLAN  REC_DATE  STAUS  BALANCE  IMEI             REGION
5553947406  6290  20130328  31     0        012565003040464  R06
7622112093  2316  20120625  21     28       359896046017644  R02
5092111043  6389  20120610  21     293      012974008373781  R06
9392254909  4002  20110611  21     178      357004045763373  R04
7783343634  2276  20121214  31     0        354643051707734  R02
5534292073  6389  20120223  31     83       359896040168211  R06
9227087403  4096  20081010  31     35       356927012514661  R04
9226203167  4060  20060527  21     450      010589003666377  R04
9221154050  4107  20100811  31     3        358665019197977  R04
Time taken: 6.467 seconds, Fetched: 10 row(s)
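
If the goal is for the Ranger Hive policies to apply, the query has to go through HiveServer2 rather than the Hive CLI, for example via beeline (a sketch; the sandbox host name and the default HiveServer2 port 10000 are assumptions here, adjust for your cluster):

[mktg1@sandbox ~]$ beeline -u jdbc:hive2://sandbox:10000/xademo -n mktg1

Queries run over this connection should then be authorized (and audited) by the Ranger Hive plugin rather than by HDFS permissions alone.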