Support Questions

Single ranger instance and multiple clusters

Hi, is it possible to manage multiple clusters with a single Ranger instance? If so, what would the configurations in Ambari and Ranger be?

10 REPLIES

Cloudera Employee

Hi @Pooja Kamle, yes, it is possible to have a single Ranger instance manage multiple clusters. In that scenario, all the clusters need to point to the same Ranger instance.

To achieve this, install Ranger on all the clusters and set ranger.externalurl on each cluster to the Ranger instance you want to use. You can then stop the Ranger service on the other clusters.
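One way to apply that override programmatically is through Ambari's REST API, which accepts a desired-config payload for a config type. The sketch below only builds the JSON body; the config type "admin-properties", the tag value, and the Ranger URL are assumptions for illustration and should be checked against your Ambari version.

```python
import json

def ranger_external_url_payload(config_tag, external_url, existing_props):
    """Build an Ambari desired_config body that overrides ranger.externalurl.

    Hypothetical helper: the "admin-properties" config type and payload shape
    follow the common Ambari v1 API convention; verify before use.
    """
    props = dict(existing_props)          # keep the other existing properties
    props["ranger.externalurl"] = external_url
    return {
        "Clusters": {
            "desired_config": {
                "type": "admin-properties",
                "tag": config_tag,
                "properties": props,
            }
        }
    }

payload = ranger_external_url_payload(
    "version2",                              # new config tag (example value)
    "http://ranger-main.example.com:6080",   # hypothetical central Ranger URL
    {"DB_FLAVOR": "MYSQL"},                  # example of an existing property
)
print(json.dumps(payload, indent=2))
```

You would PUT this body to the cluster endpoint for each secondary cluster, so that every cluster's plugins report to the one central Ranger.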

@vsuvagia hey thanks for your reply. In that case, do we need to create a repository in Ranger for every cluster? Say, if I have hdfs-plugin enabled on 2 clusters, then 2 hdfs repositories will be created in Ranger. Is my understanding correct?

Cloudera Employee

Yes, if you want different policies to apply to the two clusters, you need two repositories. If the very same policies can be used, then one repository is enough.

Cloudera Employee
@Pooja Kamle,

It will depend on a few things:

  • The names you assign to the clusters: Ambari creates a repository in Ranger using the format clustername_servicename. For example, if the cluster name is test_cluster and the services are Hadoop, Hive, and HBase, Ambari will create repositories named test_cluster_hadoop, test_cluster_hive, test_cluster_hbase, and so on.
  • If all the clusters have the same name, a single repository per service will do, as the services of the different clusters will point to the same repository.
  • Whether you want uniform policies across all clusters for all services: if not, separate repositories are actually helpful, as they let you control each cluster's policies independently.
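The naming convention described above is simple enough to sketch; the cluster and service names here are just example values:

```python
def ranger_repo_name(cluster_name, service):
    """Repository name Ambari creates in Ranger: <clustername>_<servicename>."""
    return f"{cluster_name}_{service}"

# One cluster, three services -> three repositories
services = ["hadoop", "hive", "hbase"]
print([ranger_repo_name("test_cluster", s) for s in services])
# → ['test_cluster_hadoop', 'test_cluster_hive', 'test_cluster_hbase']

# Two clusters with different names get distinct repositories per service,
# which is what allows independent policies:
print(ranger_repo_name("cluster_a", "hdfs") != ranger_repo_name("cluster_b", "hdfs"))
# → True
```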

Cloudera Employee

Additionally, if you want the services of all the clusters to use the same repository, then for each service you will need to override the parameter ranger.plugin.&lt;service&gt;.service.name, where &lt;service&gt; is the plugin name, and use the same value on all the clusters. This way, the services of all the clusters will point to the same repository.
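Generating those per-service overrides is mechanical, so a small sketch may help. The property-name pattern comes from the post; the service list and the shared prefix "shared_cluster" are example values you would replace with your own:

```python
def shared_repo_overrides(services, shared_prefix):
    """Build per-service property overrides that point every plugin at one
    shared Ranger repository.

    Pattern from the thread: ranger.plugin.<service>.service.name, set to the
    same repository name on every cluster.
    """
    return {
        f"ranger.plugin.{svc}.service.name": f"{shared_prefix}_{svc}"
        for svc in services
    }

overrides = shared_repo_overrides(["hdfs", "hive", "hbase"], "shared_cluster")
for prop, value in sorted(overrides.items()):
    print(f"{prop} = {value}")
```

Applying the same output on every cluster makes, for example, each cluster's HDFS plugin read policies from the single shared_cluster_hdfs repository.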

@vsuvagia

Yes, I need to apply different policies across the different clusters, as the cluster names will be different. I am confused about how the Ambari instances of the other clusters will identify this Ranger instance.

Cloudera Employee

@Pooja Kamle, You will need to override the default value of the property ranger.externalurl on all the clusters and change it to the URL of the Ranger instance you want to use.

The property ranger.externalurl is available under the Ranger configs in Ambari.

Hi @vsuvagia

I have overridden the property "ranger.externalurl". Now, when I try to restart the HDFS service, it doesn't start, failing with "Connection to Ranger Admin failed".

I suppose it is not able to contact the Ranger instance set in "ranger.externalurl". Do you see anything odd in this?

Cloudera Employee

@Pooja Kamle

There doesn't seem to be anything odd; it's just that you will need to configure the other clusters so that the main Ranger host is reachable from their hosts.
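A quick way to verify that reachability is a plain TCP connection test from a host in the failing cluster to the host and port in ranger.externalurl. This is a minimal sketch; the URL is an example value, and a successful TCP connect only shows network reachability, not that Ranger Admin itself is healthy.

```python
import socket
from urllib.parse import urlparse

def ranger_reachable(external_url, timeout=5.0):
    """Return True if a TCP connection to the Ranger host/port succeeds."""
    parsed = urlparse(external_url)
    host = parsed.hostname
    # Fall back to the scheme's default port if the URL omits one
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the value you set in ranger.externalurl
print(ranger_reachable("http://127.0.0.1:6080"))
```

If this returns False from a cluster host, the fix is on the network side (DNS, firewall, security groups) rather than in the Ranger configuration itself.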