Member since: 06-20-2016
Posts: 251
Kudos Received: 196
Solutions: 36
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9636 | 11-08-2017 02:53 PM |
| | 2049 | 08-24-2017 03:09 PM |
| | 7797 | 05-11-2017 02:55 PM |
| | 6392 | 05-08-2017 04:16 PM |
| | 1930 | 04-27-2017 08:05 PM |
09-26-2016
06:44 PM
@Qi Wang, by objects I am thinking of tables; see my updated answer. I haven't tested using a function in that way, but as long as all the functions are deterministic (like GetDate()), that may work.
09-26-2016
06:19 PM
No problem @Kent Brodie, please feel free to upvote and/or accept helpful answers.
09-26-2016
06:15 PM
1 Kudo
Filter conditions are static in Ranger 0.6, so there is no way to populate a variable like @userID dynamically. You would need to define groups as appropriate for each user's compliant access. Filter conditions can reference other objects, but that doesn't help much in this case, since you would need a separate "filter table" per user, which results in the same administrative overhead you are trying to avoid. By other objects, I mean other tables not tied to this specific policy. For instance, if I have a normalized data model with a customer table and an address table joined by a foreign key, I can create a filter condition like customerID in (select customerID from customerAddress where state = 'TX').
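To illustrate the effect of such a condition once the policy is in place, here is a rough sketch, assuming the condition is used as a row-level filter on a Hive customer table; the table names follow the example above and the JDBC URL is a placeholder, not taken from your environment:
# With a row-level filter policy on the customer table using the condition above,
# an ordinary query is filtered transparently by the Hive plugin, so
#   SELECT * FROM customer;
# behaves as if it were
#   SELECT * FROM customer WHERE customerID IN
#     (SELECT customerID FROM customerAddress WHERE state = 'TX');
beeline -u "jdbc:hive2://$HIVE_HOST:10000/default" -e "SELECT * FROM customer;"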
09-26-2016
06:10 PM
Hi @Kent Brodie, it's important to distinguish authentication from authorization in security discussions. Ranger manages the authorization aspect of security, ensuring that users have compliant access policies defined for the assets they need to reach. Authentication, proving that a user's identity is genuine, is not managed by Ranger. In secured clusters, authentication is handled via Kerberos, through integration with Active Directory (or another Kerberos implementation such as MIT KDC). Users authenticate to their KDC, obtaining a ticket-granting ticket (TGT), and present this TGT to the various Hadoop services to prove that they are who they say they are. Ranger uses this identity, proven genuine by the Kerberos protocol, in its mapping of policies to assets.

Yes, you are on the right track in thinking about Knox. Knox is a gateway to your secured Hadoop services and can serve as a centralized point for enforcing authentication. By integrating Knox with your AD infrastructure, you can enforce authentication at this gateway to the cluster services. Please let us know what further questions you have.
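To make the Kerberos flow concrete, here is a minimal client-side sketch; the principal, realm, and HDFS path are placeholders, not taken from your environment:
kinit alice@EXAMPLE.COM    # authenticate to the KDC and obtain a TGT
klist                      # confirm the ticket cache now holds the TGT
hdfs dfs -ls /user/alice   # the HDFS client presents the Kerberos credentials to the NameNode
Ranger then evaluates its policies against this authenticated identity rather than against the local OS username.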
09-22-2016
06:47 PM
1 Kudo
@Vasilis Vagias you may need to run import-hive.sh, found in /usr/hdp/current/atlas-server/hook-bin. Otherwise, there may be a communication issue with the Hive-Atlas bridge, which should be identifiable in the logs. See http://atlas.incubator.apache.org/Bridge-Hive.html
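If it helps, a rough outline of running the bridge import by hand follows; the paths assume a default HDP layout and the log location is a typical one, so both may differ in your install:
cd /usr/hdp/current/atlas-server/hook-bin
./import-hive.sh                              # registers existing Hive databases and tables in Atlas
tail -n 100 /var/log/atlas/application.log    # check for errors from the Hive-Atlas bridge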
09-21-2016
01:48 AM
The change I am suggesting is dfs.permissions.superusergroup=operator
09-20-2016
10:07 PM
I believe dfs.permissions.superusergroup can only contain a single value. If you change dfs.permissions.superusergroup to just 'operator', is the behavior as expected? User hdfs will still have normal superuser access with this configuration change, since it starts the NameNode process.
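One rough way to verify the behavior after making the change; the test user below is a placeholder, and the last command should be run as a member of the 'operator' group:
hdfs getconf -confKey dfs.permissions.superusergroup   # should now print: operator
hdfs groups some_operator_user                          # confirm the user maps to the operator group
hdfs dfsadmin -report                                   # a superuser-only operation that should now succeed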
09-19-2016
11:15 PM
9 Kudos
Many access control policies require additional context outside of the resources and security principals that are used by default to evaluate policy decisions. For example, knowledge regarding the time of day and the geographic source of attempted access may dictate whether that access is allowed or denied.
Ranger policy evaluation occurs in three distinct steps: 1) request creation, 2) policy evaluation, and 3) post-evaluation. To extend the Ranger plugin for a particular service with such conditions, the request context has to be enriched. For example, enriching the authorization request with the time of day or the originating geographic location is needed to evaluate those policies effectively.
The first step in enriching the request context is to register a Context Enricher with the service in question. For this example, we will use Hive, and our goal will be to enrich the context with the geographic origin of each request. Ranger 0.6 includes the RangerFileBasedGeolocationProvider, which can be used to add this context based on a data file in the IP2Location format, as in the example below. We will store this data file on the Ranger server in /etc/ranger/geo.
IP_FROM,IP_TO,COUNTRY_CODE,COUNTRY_NAME,REGION,CITY
10.0.0.255,10.0.3.0,US,United States,California,Santa Clara
20.0.100.80,20.0.100.89,US,United States,Colorado,Broomfield
20.0.100.110,20.0.100.119,US,United States,Texas,Irving
We will register this Context Enricher for the Hive service using the Ranger API. We can first retrieve the service definition via the following GET request, where the authentication credentials and Ranger host URI are updated as appropriate.
curl -u admin:admin -X GET http://$RANGER_HOST:6080/service/public/v2/api/servicedef/name/hive
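It is convenient to save the returned definition to a file so that the new sections can be added before sending it back; this is the hiveService2.json file referenced below:
curl -u admin:admin -X GET http://$RANGER_HOST:6080/service/public/v2/api/servicedef/name/hive -o hiveService2.json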
We will then PUT this service definition back to the Ranger API, with the following entries added:
"contextEnrichers": [
  {
    "itemId": 1,
    "name": "GeoEnricher",
    "enricher": "org.apache.ranger.plugin.contextenricher.RangerFileBasedGeolocationProvider",
    "enricherOptions": {
      "FilePath": "/etc/ranger/geo/geo.txt",
      "IPInDotFormat": "true"
    }
  }
]
We will also need to register the Policy Conditions with this service definition. The Policy Conditions will be used to evaluate the authorization request, along with the usual conditions such as security principal, type of access, and object. In this case, the condition will be based on the value of the LOCATION_COUNTRY_CODE mapped to the IP range in which the source IP falls.
"policyConditions": [
{
"itemId": 1,
"name": "location-outside",
"label": "Accessed from outside of location?",
"description": "Accessed from outside of location?",
"evaluator": "org.apache.ranger.plugin.conditionevaluator.RangerContextAttributeValueNotInCondition",
"evaluatorOptions": {
"attributeName": "LOCATION_COUNTRY_CODE"
}
}
]
We will post this updated service definition by saving the changes in hiveService2.json and using the Ranger API to commit the change.
curl -v -H 'Content-Type: application/json' -u admin:admin -X PUT --data @hiveService2.json http://$RANGER_HOST:6080/service/public/v2/api/servicedef/name/hive
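A quick way to confirm the update was accepted is to fetch the definition again and look for the new condition name (output formatting may vary):
curl -s -u admin:admin http://$RANGER_HOST:6080/service/public/v2/api/servicedef/name/hive | grep location-outside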
We can now see this Policy Condition when adding new Hive policies in Ranger. The condition uses the LOCATION_COUNTRY_CODE field, so the condition value is a country code such as 'US'.
09-12-2016
08:34 PM
1 Kudo
@John Park, just a comment that using the built-in Ambari Infra SolrCloud deployment is likely the simplest way to use Solr for indexing the Ranger audit data. See https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/using_apache_solr_for_ranger_audits.html
09-10-2016
06:54 PM
I would also add that the HiveServer2 Interactive service and HiveServer2 may be deployed to different hosts.