
Ranger policy not working


I am using the default HDFS policy created after Ranger installation, but it is not working as expected.

Below are my HDFS permissions, and attached is a screenshot of the Ranger policy (ranger-screenshot.png). If a user named arun tries to access HDFS, he should be denied, since only hadoop, rangerlookupuser, and ambari-qa have permissions per the policy. Am I doing anything wrong? How do I restrict a user named arun using Ranger? Any thoughts would be great.

 hadoop fs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop          0 2017-03-14 05:48 /app-logs
drwxr-xr-x   - hdfs   hdfs            0 2017-03-14 05:45 /apps
drwxr-xr-x   - yarn   hadoop          0 2017-03-14 05:45 /ats
drwxr-xr-x   - hdfs   hdfs            0 2017-03-14 05:46 /hdp
drwxr-xr-x   - mapred hdfs            0 2017-03-14 05:46 /mapred
drwxrwxrwx   - mapred hadoop          0 2017-03-14 05:46 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2017-03-28 07:39 /ranger
drwxrwxrwx   - hdfs   hdfs            0 2017-03-28 04:54 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2017-03-28 09:54 /user




1 ACCEPTED SOLUTION

Re: Ranger policy not working

Expert Contributor

@ARUN

Hi,

HDFS permissions are managed by a combination of Ranger policies and native HDFS (POSIX) permissions. Just because you've set Ranger policies for those 3 users doesn't mean they are the only users allowed to access HDFS. In your case, arun can still access HDFS because all folders in HDFS grant 'r' access to others (e.g. /tmp - drwxrwxrwx).
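To make the fallback behavior concrete, here is a minimal Python sketch of that decision flow. This is not the Ranger plugin's actual code; the function names, the policy table, and the group sets are all hypothetical, and real HDFS also checks write/execute bits and ACLs. It only models the point above: a user missing from the Ranger policy can still get in through the POSIX "others" bit.

```python
# Simplified model (NOT Ranger's real implementation) of how HDFS
# authorization falls back from a Ranger policy to POSIX mode bits.

RANGER_POLICIES = {
    # path -> set of users the (hypothetical) policy allows
    "/": {"hadoop", "rangerlookupuser", "ambari-qa"},
}

def posix_allows_read(mode: int, is_owner: bool, in_group: bool) -> bool:
    """Check the 'r' bit for the matching class of the POSIX mode."""
    if is_owner:
        return bool(mode & 0o400)
    if in_group:
        return bool(mode & 0o040)
    return bool(mode & 0o004)   # "others" bit -- this is why arun gets in

def can_read(user: str, path: str, mode: int, owner: str, group: set) -> bool:
    policy = RANGER_POLICIES.get(path)
    if policy is not None and user in policy:
        return True             # explicitly allowed by a Ranger policy
    # No matching allow: fall back to native HDFS (POSIX) permissions
    return posix_allows_read(mode, user == owner, user in group)

# /tmp is drwxrwxrwx (0o777): arun is allowed via the "others" r bit
print(can_read("arun", "/tmp", 0o777, "hdfs", {"hdfs"}))              # True
# After chmod 700 and with no Ranger allow for arun: denied
print(can_read("arun", "/tmp/ranger_test", 0o700, "hdfs", {"hdfs"}))  # False
```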

The link below covers best practices for managing HDFS permissions with Ranger and native Hadoop permissions:

https://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/

One of the important steps is to change the HDFS umask from 022 to 077. This prevents any new files or folders from being accessible by anyone other than the owner.
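The effect of that umask change can be demonstrated locally with plain POSIX permissions, since HDFS applies the same arithmetic (requested mode & ~umask) when creating files. This is a local sketch, not an HDFS command; in a real cluster you would set fs.permissions.umask-mode to 077 in the HDFS configuration instead.

```python
import os
import stat
import tempfile

# Demonstrate locally what changing the umask from 022 to 077 does:
# new directories are created with mode (0o777 & ~umask).
old = os.umask(0o077)              # switch to the restrictive umask
try:
    with tempfile.TemporaryDirectory() as parent:
        d = os.path.join(parent, "ranger_test")
        os.mkdir(d)                # requested mode defaults to 0o777
        mode = stat.S_IMODE(os.stat(d).st_mode)
        print(oct(mode))           # 0o700 -> "drwx------", owner-only access
finally:
    os.umask(old)                  # restore the previous umask
```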

As an example, you can do the following.

As the hdfs user:

1. hdfs dfs -mkdir /tmp/ranger_test

2. hdfs dfs -chmod 700 /tmp/ranger_test (the folder permission becomes "drwx------"; changing the umask to 077 will do this automatically for future files)

3. Switch to the arun user.

4. hdfs dfs -ls /tmp/ranger_test (you should get an error along the lines of: ls: Permission denied: user=arun, access=READ_EXECUTE, inode="/tmp/ranger_test":hdfs:hdfs:drwx------)

5. Add a policy in Ranger allowing arun access to /tmp/ranger_test.

6. Try to access the /tmp/ranger_test folder as arun.

Hope this helps,


4 REPLIES

Re: Ranger policy not working

Contributor

What exactly do you mean by "if a user arun is trying to access hdfs"? Are you trying to access a file/folder with the "hadoop fs" command while logged into Linux as user "arun"?


Re: Ranger policy not working

Yes, the user arun issues the command

hadoop fs -ls /

Since Ranger allows only the 3 users mentioned in the screenshot, arun should not be able to access / (in HDFS), but that is not the case.

Re: Ranger policy not working

Expert Contributor

@ARUN

(This reply is the accepted solution shown above.)


Re: Ranger policy not working

Contributor

@ARUN HDFS ACLs are used as a fallback when no Ranger policy exists for a given HDFS resource. You can turn off the xasecure.add-hadoop-authorization flag under the HDFS configs to enforce only Ranger ACLs.
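A small, self-contained sketch of what disabling that fallback means for the access decision. This is an illustration only, not the plugin's actual logic; the function and parameter names are hypothetical, and the "others" read bit stands in for the full native HDFS check.

```python
# Simplified sketch (NOT the plugin's actual code) of the effect of
# xasecure.add-hadoop-authorization: when False, there is no fallback
# to POSIX permissions, so anyone without a matching Ranger allow is denied.

def can_access(user, allowed_by_ranger, others_bit_set,
               add_hadoop_authorization=True):
    if user in allowed_by_ranger:
        return True                # allowed by a Ranger policy
    if add_hadoop_authorization:
        return others_bit_set      # fall back to native HDFS checks
    return False                   # Ranger-only mode: deny

allowed = {"hadoop", "rangerlookupuser", "ambari-qa"}
print(can_access("arun", allowed, True))                                  # True via POSIX fallback
print(can_access("arun", allowed, True, add_hadoop_authorization=False))  # False: Ranger ACLs only
```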
