
Users can upload and create directories using Files View without a home directory under /user in HDP 2.4 Sandbox?


Expert Contributor

Guys,

I was trying the Files View that comes preconfigured with Ambari in the HDP 2.4 Sandbox. To my surprise, I could upload files and create directories under the root directory and other directories through Ambari Files View even though I had not created a /user/<userName> directory on HDFS. This is true for all users newly added through the Ambari UI as well as for the built-in users such as admin and maria_dev.
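For reference, you can confirm from the command line that a newly added user has no HDFS home directory with something like the following (newUser is just a placeholder for whichever account was added):

[root@sandbox ~]# hdfs dfs -ls /user/newUser
ls: `/user/newUser': No such file or directory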

Is this a bug, or has this restriction been deliberately removed in the HDP 2.4 Sandbox?

Regards,

SS!


Re: Users can upload and create directories using Files View without a home directory under /user in HDP 2.4 Sandbox?

New Contributor

Have a look in Ambari >> Services >> HDFS >> Configs >> Advanced ...

In the "Filter" box top right, type the text "proxy". In custom core-site, you will probably find the following settings:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

which essentially allows the Linux "root" user, which the Ambari service runs as on the Sandbox, to impersonate any user from any host. See this for more information. In a production system, you would probably want to use a setting such as

hadoop.proxyuser.ambariusr.groups=* 

instead, where ambariusr is a dedicated service account that is not root and has limited privileges.
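As a rough sketch of what a more locked-down configuration could look like (the group name and host name below are placeholders, not values from the Sandbox), you would also restrict which groups and hosts the proxy user may act on behalf of:

hadoop.proxyuser.ambariusr.groups=ambari-users
hadoop.proxyuser.ambariusr.hosts=ambari-node.example.com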

More information on Hadoop Proxy Users can be found here (Apache Hadoop site).

Hope this helps.

Re: Users can upload and create directories using Files View without a home directory under /user in HDP 2.4 Sandbox?

I was interested to see whether this really had to do with the HDFS Files View in Ambari or just with HDFS permissions themselves, so I followed the notes in https://github.com/HortonworksUniversity/Essentials/blob/master/demos/mapreduce/README.md to create a local user "it1" as well as a home directory for it on HDFS. I then found that I could do the same wide-open writes you described. NOTE: I removed a number of irrelevant lines to shorten the listings.
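For reference, the local-user and home-directory setup from those notes boils down to roughly the following (just a sketch; the README may use slightly different commands):

[root@sandbox ~]# useradd it1
[root@sandbox ~]# su - hdfs -c "hdfs dfs -mkdir /user/it1"
[root@sandbox ~]# su - hdfs -c "hdfs dfs -chown it1:hdfs /user/it1"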

[root@sandbox ~]# su - it1

[it1@sandbox ~]$ hdfs dfs -ls /
Found 13 items
  ... DELETED LINES ...
drwxr-xr-x   - hdfs   hdfs            0 2016-03-31 17:17 /user
[it1@sandbox ~]$ hdfs dfs -put /etc/group /groups.txt
[it1@sandbox ~]$ hdfs dfs -ls /
Found 14 items
  ... DELETED LINES ...
-rw-r--r--   3 it1    hdfs         1196 2016-05-16 18:21 /groups.txt
drwxr-xr-x   - hdfs   hdfs            0 2016-03-31 17:17 /user

[it1@sandbox ~]$ hdfs dfs -put /etc/group /user/groups.txt
[it1@sandbox ~]$ hdfs dfs -ls /user
Found 14 items
  ... DELETED LINES ...
-rw-r--r--   3 it1       hdfs       1196 2016-05-16 18:22 /user/groups.txt
drwxr-xr-x   - it1       hdfs          0 2016-05-16 18:21 /user/it1
drwxr-xr-x   - maria_dev hdfs          0 2016-05-09 19:33 /user/maria_dev

Since I was able to put these files into / and /user even though I should not have been able to, the behavior clearly has nothing to do with the Ambari View. My next hunch was that the "Hadoop superuser" nuclear option, described in https://martin.atlassian.net/wiki/x/AoCpAQ, had been used, but dfs.permissions.superusergroup was still set to hdfs, and I verified with a "lid -g hdfs" command that root, maria_dev, and it1 were not in that group.
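As a sketch, those two checks look something like this (root, maria_dev, and it1 should not appear in the group listing):

[root@sandbox ~]# hdfs getconf -confKey dfs.permissions.superusergroup
hdfs
[root@sandbox ~]# lid -g hdfs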

Then it dawned on me that, just as I found when playing with Ranger in my https://github.com/HortonworksUniversity/Essentials/blob/master/demos/ranger/README.md demo (which had a wide-open policy in Ranger for Hive), there might be a similar blanket rule set up in Ranger for HDFS, and as the following screenshot shows, there is.

[Screenshot: 4243-wideopenhdfs.png — the wide-open HDFS policy in Ranger]

The fix is easy: just disable that policy and save it (then wait at least 30 seconds for the policy cache to refresh) and try it again.

[it1@sandbox ~]$ hdfs dfs -put /etc/group /groups2.txt
put: Permission denied: user=it1, access=WRITE, inode="/groups2.txt._COPYING_":hdfs:hdfs:drwxr-xr-x

[it1@sandbox ~]$ hdfs dfs -put /etc/group /user/groups2.txt
put: Permission denied: user=it1, access=WRITE, inode="/user/groups2.txt._COPYING_":hdfs:hdfs:drwxr-xr-x

Yep, that's it! That said, for the Sandbox I'd just re-enable that rule, since we're only messing around in this environment anyway. Good luck!!
