USE CASE

ACCESS MODEL (in this example, Teradata)

  • User U1 can only read (SELECT) all tables in databases D1 and D2.
  • User U2 can read database D1, and can INSERT, UPDATE, DELETE, and SELECT on all tables in database D2. User U2 cannot DROP or CREATE tables in database D2.
  • User U3 has full access (SELECT, INSERT, UPDATE, DELETE, DROP, CREATE) on databases D1 and D2.
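For reference, this access model could be expressed in Teradata roughly as follows (a sketch only; exact syntax varies by Teradata version):

    GRANT SELECT ON D1 TO U1;
    GRANT SELECT ON D2 TO U1;
    GRANT SELECT ON D1 TO U2;
    GRANT SELECT, INSERT, UPDATE, DELETE ON D2 TO U2;
    GRANT ALL ON D1 TO U3;
    GRANT ALL ON D2 TO U3;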

OBJECTIVE: We want the same model on Hadoop, with one improvement.

We will have storage groups and ACLs, grouping the tables of the same subject area. One database may have more than one storage group: say SG11, SG12, SG21, and SG22, where SG11 and SG12 are associated with database D1, and SG21 and SG22 with D2.

  • User U1 should be able to read all tables in D1 and D2.
  • User U2 should be able to INSERT, UPDATE, DELETE, and SELECT only on tables covered by SG11 (in database D1). U2 will not be able to modify tables in SG12 (in database D1) but can read them.
  • User U3 can do all operations on SG11 and SG12 (D1) and SG21 and SG22 (D2), and is the OWNER of all objects in D1 and D2.

OUR TARGET

  • U3 is the admin user and the owner of the objects.
  • U2 is a batch ID and can write (INSERT, UPDATE, DELETE, SELECT) to its own storage group's objects. U2 can read all objects in all selected storage groups.
  • U1 is a regular user and can read selected storage groups. (There is more to it, but we do not want to complicate things here.)

CURRENT PLAN

  • U1 gets “r” via SG1* and SG2*.
  • U2 gets “rwx” via SG11 and “r” via SG12. (The “rwx” on SG11 would, on its own, let U2 drop a table.)
  • We grant U2 a Hive role that has UPDATE, DELETE, INSERT, and SELECT but no DROP. The ACL allows these operations at the file level without U2 being the OWNER. (See the sketch after this list.)
  • If U2 tries to DROP a table in SG11, the Hive role/authorization does not allow it, but U2 can still update rows in SG11 tables.
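With Hive's SQL-standard-based authorization, such a role could be sketched as below. The role name batch_writer and table d1.t1 are illustrative; privileges in this model are granted per table:

    hive> CREATE ROLE batch_writer;
    hive> GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE d1.t1 TO ROLE batch_writer;
    hive> GRANT ROLE batch_writer TO USER u2;

Note that in SQL-standard-based authorization only the owner of a table can drop it, which is exactly the behavior we want for U2.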

SOLUTION

  • Create users u1, u2, and u3.
  • Create the users' home directories under /user/* on HDFS.
  • Create 4 storage directories on HDFS:
    • /data/[sg11,sg12,sg21,sg22]
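These setup steps might look roughly like this (a sketch using the host name and prompts seen elsewhere in this article, not verified output):

    [root@node1 ~]# useradd u1; useradd u2; useradd u3
    [hdfs@node1 root]$ hdfs dfs -mkdir /user/u1 /user/u2 /user/u3
    [hdfs@node1 root]$ hdfs dfs -chown u1 /user/u1
    [hdfs@node1 root]$ hdfs dfs -chown u2 /user/u2
    [hdfs@node1 root]$ hdfs dfs -chown u3 /user/u3
    [hdfs@node1 root]$ hdfs dfs -mkdir /data/sg11 /data/sg12 /data/sg21 /data/sg22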
  • Grant ACLs on the storage directories above:
    • [hdfs@node1 root]$ hdfs dfs -setfacl -m user:u3:rwx /data/sg12
    • [hdfs@node1 root]$ hdfs dfs -setfacl -m user:u3:rwx /data/sg21
    • [hdfs@node1 root]$ hdfs dfs -setfacl -m user:u3:rwx /data/sg22
  • Create 2 databases, d1 and d2 (a creation sketch follows; the desc output comes after it).
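The databases were presumably created as user u3, with explicit locations pointing at the storage directories, along these lines (the locations match the desc output below):

    hive> CREATE DATABASE d1 LOCATION '/data/sg11';
    hive> CREATE DATABASE d2 LOCATION '/data/sg21';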
  • hive> desc database d1;
    OK
    d1    hdfs://node1.example.com:8020/data/sg11    u3    USER
    Time taken: 0.275 seconds, Fetched: 1 row(s)
  • hive> desc database d2;
    OK
    d2    hdfs://node1.example.com:8020/data/sg21    u3    USER
    Time taken: 0.156 seconds, Fetched: 1 row(s)
  • [hdfs@node1 root]$ hdfs dfs -setfacl -m user:u2:rwx /data/sg11
  • [hdfs@node1 root]$ hdfs dfs -setfacl -m user:u2:r-- /data/sg12
  • [hdfs@node1 root]$ hdfs dfs -getfacl /data/sg11
  • [hdfs@node1 root]$ hdfs dfs -getfacl /data/sg12
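Based on the grants above, the getfacl output for /data/sg12 should look roughly like this (an illustration; the directory owner and group depend on who created it):

    # file: /data/sg12
    # owner: hdfs
    # group: hdfs
    user::rwx
    user:u2:r--
    user:u3:rwx
    group::r-x
    mask::rwx
    other::r-x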

[Screenshot file1.png: getfacl output for /data/sg11 and /data/sg12]

  • As user U3, we see the following:

[Screenshot file2.png: the view as user U3]

  • To get the above working, we need the following settings (see the property sketch after this list):
    • hive.users.in.admin.role = root,hive,u3
    • Choose Authorization = SQLAUTH
    • hive.server2.enable.doAs = true
  • With these in place, this works as expected.
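In hive-site.xml, these choices correspond roughly to the following properties (the SQLAUTH choice maps to Hive's SQL-standard-based authorizer; property names below are the standard Hive ones):

    hive.security.authorization.enabled=true
    hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
    hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
    hive.users.in.admin.role=root,hive,u3
    hive.server2.enable.doAs=true

With doAs=true, HiveServer2 accesses HDFS as the end user, so the HDFS ACLs set above are enforced in addition to the SQL-level checks.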
