12-25-2016
08:00 PM
Below are some examples of how you would achieve this:
Case 1: Restrict to users in a single group – In this example, only users who are members of the “scientist” group are allowed to log in to Ranger Admin.
User search filter parameter would look something like this:
(&(sAMAccountName={0})(memberof=cn=scientist,ou=groups,dc=hwqe,dc=hortonworks,dc=com))
Case 2: Restrict to users in multiple groups – In this example, only users who are members of either the “scientist” group or the “analyst” group are allowed to log in to Ranger Admin.
User search filter parameter would look something like this:
(&(sAMAccountName={0})(|(memberof=cn=scientist,ou=groups,dc=hwqe,dc=hortonworks,dc=com)(memberof=cn=analyst,ou=groups,dc=hwqe,dc=hortonworks,dc=com)))
Case 3: Restrict to a given list of users – In this example, only users whose cn (common name) starts with “sam r” are allowed to log in to Ranger Admin.
User search filter parameter would look something like this:
(&(sAMAccountName={0})(cn=sam r*))
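For illustration, here is a small Python sketch of how the "{0}" placeholder in these filters gets replaced with the login name before the LDAP query runs (the helper name and substitution logic are illustrative, not Ranger source code):

```python
# Hypothetical helper: substitute the login name into a Ranger-style
# user search filter template before sending it to LDAP.
def render_user_search_filter(template, login):
    # Ranger replaces the {0} placeholder with the name the user typed in.
    return template.replace("{0}", login)

flt = render_user_search_filter("(&(sAMAccountName={0})(cn=sam r*))", "samr1")
```

You can paste the rendered filter into an ldapsearch command to verify it matches the intended users before changing the Ranger Admin configuration.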
12-25-2016
07:48 PM
Problem:
A time to live (TTL) was never set for collection data on the SolrCloud server, so Ranger audit documents accumulated until the disks on the Solr nodes filled up.
Solution:
1. Delete the collection through the Solr Collections API (safe in this case because the Ranger audit archive was also stored in HDFS). This freed up the disk space:
http://<solr_host>:<solr_port>/solr/admin/collections?action=DELETE&name=<collection_name>
2. Configure Solr with a time to live:
a. Download each of the following configs from ZooKeeper: solrconfig.xml, schema.xml, and managed-schema
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/solrconfig.xml >/tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/schema.xml >/tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/managed-schema >/tmp/managed-schema
b. Add the following to solrconfig.xml:
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.DefaultValueUpdateProcessorFactory">
    <str name="fieldName">_ttl_</str>
    <str name="value">+90DAYS</str>
  </processor>
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <int name="autoDeletePeriodSeconds">86400</int>
    <str name="ttlFieldName">_ttl_</str>
    <str name="expirationFieldName">_expire_at_</str>
  </processor>
  <processor class="solr.FirstFieldValueUpdateProcessorFactory">
    <str name="fieldName">_expire_at_</str>
  </processor>
</updateRequestProcessorChain>
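To make the expiration mechanics concrete, here is a rough Python sketch of what the processors above do together: each document gets a default _ttl_ of +90DAYS, from which an _expire_at_ timestamp is computed for the periodic delete trigger to compare against (the function is an illustration of the idea, not Solr code):

```python
from datetime import datetime, timedelta, timezone
import re

def compute_expire_at(ttl, now=None):
    """Illustrative sketch of how a Solr date-math TTL like '+90DAYS'
    turns into an absolute expiration timestamp (_expire_at_)."""
    units = {"SECONDS": "seconds", "MINUTES": "minutes",
             "HOURS": "hours", "DAYS": "days"}
    m = re.fullmatch(r"\+(\d+)(SECONDS|MINUTES|HOURS|DAYS)", ttl)
    if not m:
        raise ValueError(f"unsupported ttl expression: {ttl}")
    amount, unit = int(m.group(1)), units[m.group(2)]
    now = now or datetime.now(timezone.utc)
    # Documents whose _expire_at_ is in the past get deleted by the
    # periodic trigger (autoDeletePeriodSeconds).
    return now + timedelta(**{unit: amount})
```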
c. Add the following to schema.xml and managed-schema:
<field name="_expire_at_" type="tdate" multiValued="false" stored="true" docValues="true"/>
<field name="_ttl_" type="string" multiValued="false" indexed="true" stored="true"/>
d. Upload each edited file back to ZooKeeper:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/schema.xml /tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/managed-schema /tmp/managed-schema
3. On each node, make sure the ranger_audits replica directories were removed from the Solr directories on the local filesystem.
4. Lastly, issue the create command:
http://<host>:8983/solr/admin/collections?action=CREATE&name=ranger_audits&collection.configName=ranger_audits&numShards=2
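As a sanity check before running them, the two Collections API calls in this workaround (the DELETE in step 1 and the CREATE in step 4) can be composed programmatically; the Python sketch below just builds the URLs, with a hypothetical host name standing in for the placeholders:

```python
from urllib.parse import urlencode

def collections_api_url(host, port, action, **params):
    """Build a Solr Collections API URL (host and port are placeholders)."""
    query = urlencode({"action": action, **params})
    return f"http://{host}:{port}/solr/admin/collections?{query}"

# Step 1: drop the oversized collection.
delete_url = collections_api_url("solr01.example.com", 8983,
                                 "DELETE", name="ranger_audits")

# Step 4: recreate it against the updated ranger_audits config set.
create_url = collections_api_url(
    "solr01.example.com", 8983, "CREATE",
    name="ranger_audits",
    **{"collection.configName": "ranger_audits", "numShards": 2})
```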
MORE INFO:
For more general Solr information and Ambari Infra TTL details, see: https://community.hortonworks.com/articles/63853/solr-ttl-auto-purging-solr-documents-ranger-audits.html
12-25-2016
07:30 PM
Also, see https://issues.apache.org/jira/browse/KNOX-762
12-25-2016
07:28 PM
httpclient-451jar.zip
12-25-2016
07:24 PM
ERRORS: From the HDFS log: 2016-10-30 17:44:04,226 ERROR impl.CloudSolrClient (CloudSolrClient.java:requestWithRetryOnStaleState(903)) - Request to collection ranger_audits failed due to (401) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://hwx.com:8886/solr/ranger_audits_shard1_replica1: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/ranger_audits_shard1_replica1/update. Reason:
<pre> Authentication required</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
From the Hive log: 2016-10-30 17:50:46,189 WARN [org.apache.ranger.audit.queue.AuditBatchQueue1]: provider.BaseAuditHandler (BaseAuditHandler.java:logFailedEvent(374)) - failed
to log audit event: {"repoType":3,"repo":"AA_Prod_hive","reqUser":"alex","evtTime":"2016-10-30 17:50:43.587","access":"USE","resource":"default","resType":"@
database","action":"_any","result":0,"policy":-1,"enforcer":"ranger-acl","sess":"cf4d0c81-c4df-483b-ab51-aa7bb5cb1633","cliType":"HIVESERVER2","cliIP":"172.26
.205.88","reqData":"show tables","agentHost":"hwx.com","logType":"RangerAudit","id":"d41d25ee-d198-475d-a288-11d6cc76535c-0","seq_num":1
,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":172.26.1.1, \"forwarded-ip-addresses\":[]"}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://hwx:8886/solr/ranger_audits_shard1_re
plica1: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
Debug:
Once DEBUG logging for krb5 is enabled (-Dsun.security.krb5.debug=true), we can see in both the HDFS log (hadoop-hdfs-namenode-hwx.com.out) and the Hive log (hive-server2.out) the same issue as in the Knox ticket: it tries to use the HTTPS/_HOST principal instead of HTTP/_HOST, which is the standard for SPNEGO:
>>KRBError:
sTime is Sun Oct 30 17:50:08 PDT 2016 1477875008000
suSec is 518135
error code is 7
error Message is Server not found in Kerberos database
sname is HTTPS/host@HWX.COM
msgType is 30
ROOT CAUSE:
There is a defect in httpclient 4.5.2 that got introduced in HDP 2.5.
WORKAROUND:
Downgrade all httpclient 4.5.2 jars used by Ranger to 4.5.1. This will be fixed in a future release.
12-22-2016
09:49 PM
However, in Ambari 2.4.x and up it should create the principal and keytab automatically. I have seen cases prior to 2.4.2 (on 2.4.0.1 and 2.4.1) where this didn't happen.
12-22-2016
09:01 PM
In 2.4.2 you have to manually set up the Ambari principal and keytab: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/_set_up_kerberos_for_ambari_server.html I see the same documentation for 2.5.3: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_set_up_kerberos_for_ambari_server.html
12-20-2016
05:26 PM
Looks like we currently don't have this in our docs yet. Per engineering, I will file a documentation bug. https://issues.apache.org/jira/browse/HDFS-6261
12-19-2016
09:09 PM
Labels:
Apache Hadoop
12-16-2016
06:50 PM
If you are not using Ranger HBase policies to grant permissions, then you will have to use the hbase shell to grant them. For example:
R - represents read privilege.
W - represents write privilege.
X - represents execute privilege.
C - represents create privilege.
A - represents admin privilege.
hbase(main):018:0> grant 'sami','RWXCA','default'
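As a side note, when scripting many grants it can help to compose and validate the statements before pasting them into the hbase shell; the small Python helper below is purely illustrative (not part of HBase) and just builds the command string from the privilege letters listed above:

```python
# Hypothetical helper: compose an hbase shell grant command and reject
# unknown privilege letters before they reach the shell.
VALID_PERMS = set("RWXCA")  # Read, Write, eXecute, Create, Admin

def hbase_grant_command(user, perms, target):
    if not set(perms) <= VALID_PERMS:
        raise ValueError(f"invalid permissions: {perms}")
    return f"grant '{user}','{perms}','{target}'"

cmd = hbase_grant_command("sami", "RWXCA", "default")
```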