How do multiple Ranger resource-based services affect authorization?

Explorer

One can create multiple resource-based services in the Ranger Service Manager, but it seems only one is active at any moment in time. Is that true? What determines this?

For example, I created two identical HDFS services, "hdfs_service1" and "hdfs_service2", both enabled, and then created a number of policies in each.

Are both services active at the same time? What determines which one is active and which policies will be enforced?

Furthermore, I can create two identical tag-based services, "tag1" and "tag2", then set the tag service of "hdfs_service1" to "tag1" and that of "hdfs_service2" to "tag2". Again, which one will be active?

1 ACCEPTED SOLUTION

Explorer

No answers, so I had to do more digging.

It turns out that only ONE HDFS (or Hive, for that matter) resource-based service can be active at a time, and which one it is gets set in /etc/hadoop/conf/ranger-hdfs-security.xml under the key ranger.plugin.hdfs.service.name.
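
For reference, the property looks roughly like the sketch below; the value "mycluster_hadoop" is only an illustrative placeholder for whatever name your installation actually uses:

    <property>
      <!-- Name of the Ranger service whose policies this HDFS plugin downloads and enforces -->
      <name>ranger.plugin.hdfs.service.name</name>
      <!-- illustrative value; Ambari generates it from the cluster name -->
      <value>mycluster_hadoop</value>
    </property>

Whichever Ranger service is named here is the one the HDFS plugin pulls policies from; any other HDFS services defined in the Ranger UI are simply ignored by that plugin.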

Moreover, when it is set through Ambari, the service name is always <cluster_name>_hadoop (some say it is <cluster_name>_hdfs, but I definitely see it as <cluster_name>_hadoop in HDP 2.6.3).

So, play as much as you want in the Ranger UI, create services, change names; that part is just the UI. The real work is driven by /etc/hadoop/conf/ranger-hdfs-security.xml.

Thanks guys for making it so straightforward.
