Member since: 10-20-2015
Posts: 92
Kudos Received: 78
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 933 | 06-25-2018 04:01 PM |
| | 1678 | 05-09-2018 05:36 PM |
| | 462 | 03-16-2018 04:11 PM |
| | 2306 | 05-18-2017 12:42 PM |
| | 1847 | 03-28-2017 06:42 PM |
03-11-2020
06:10 AM
Thanks, D! It works with Ranger v2.0 on the new CDP Data Center (bare metal) version! Regards, Caseiro.
02-25-2020
02:37 AM
Restart ZooKeeper and then restart Ambari Infra Solr.
09-20-2019
04:21 AM
Hello @dvillarreal, thank you for the script files, they make life easy! Just a small edit: the proc.psql file was missing one SQL statement at line 96:
delete from x_user_module_perm where user_id = x_portal_user_id;
We were facing the issue below while trying the commands from the psql file manually:
ERROR: update or delete on table "x_portal_user" violates foreign key constraint "x_user_module_perm_fk_userid" on table "x_user_module_perm"
DETAIL: Key (id)=(19) is still referenced from table "x_user_module_perm"
Because of the above blocker, the script was not deleting the users from the DB. Once we added the missing SQL statement and ran deleteRangerUser.sh, we were able to use it with successful results.
05-17-2019
06:50 PM
1 Kudo
Customers have asked me about reviewing Ranger audit archive logs stored on HDFS, since the UI only shows the last 90 days of data held in Solr (Ambari Infra). I decided to approach the problem using Zeppelin/Spark as a fun example.
1. Prerequisites: Zeppelin and Spark2 installed on your system, as well as Ranger with Ranger audit logs being stored in HDFS. Create a policy in Ranger for HDFS to allow your zeppelin user to read and execute recursively on the /ranger/audit directory.
2. Create your notebook in Zeppelin and add some code like the following example:
%spark2.spark
// --Specify service and date if you wish
//val path = "/ranger/audit/hdfs/20190513/*.log"
// --Be brave and map the whole enchilada
val path = "/ranger/audit/*/*/*.log"
// --read in the json and drop any malformed json
val rauditDF = spark.read.option("mode", "DROPMALFORMED").json(path)
// --print the schema to review and show me top 20 lines.
rauditDF.printSchema()
rauditDF.show(20,false)
// --Do some spark sql on the data and look for denials
println("sparksql--------------------")
rauditDF.createOrReplaceTempView(viewName="audit")
var readAccessDF = spark.sql("SELECT reqUser, repo, access, action, evtTime, policy, resource, reason, enforcer, result FROM audit where result='0'").withColumn("new_result", when(col("result") === "1","Allowed").otherwise("Denied"))
readAccessDF.show(20,false)
3. The output should look something like:
path: String = /ranger/audit/*/*/*.log
rauditDF: org.apache.spark.sql.DataFrame = [access: string, action: string ... 23 more fields]
root
|-- access: string (nullable = true)
|-- action: string (nullable = true)
|-- additional_info: string (nullable = true)
|-- agentHost: string (nullable = true)
|-- cliIP: string (nullable = true)
|-- cliType: string (nullable = true)
|-- cluster_name: string (nullable = true)
|-- enforcer: string (nullable = true)
|-- event_count: long (nullable = true)
|-- event_dur_ms: long (nullable = true)
|-- evtTime: string (nullable = true)
|-- id: string (nullable = true)
|-- logType: string (nullable = true)
|-- policy: long (nullable = true)
|-- reason: string (nullable = true)
|-- repo: string (nullable = true)
|-- repoType: long (nullable = true)
|-- reqData: string (nullable = true)
|-- reqUser: string (nullable = true)
|-- resType: string (nullable = true)
|-- resource: string (nullable = true)
|-- result: long (nullable = true)
|-- seq_num: long (nullable = true)
|-- sess: string (nullable = true)
|-- tags: array (nullable = true)
| |-- element: string (containsNull = true)
sparksql--------------------
readAccessDF: org.apache.spark.sql.DataFrame = [reqUser: string, repo: string ... 9 more fields]
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+
|reqUser |repo |access |action |evtTime |policy|resource |reason |enforcer |result|new_result|
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+
|dav |c3205_hadoop|READ_EXECUTE|execute|2019-05-13 22:07:23.971|-1 |/ranger/audit/hdfs |/ranger/audit/hdfs |hadoop-acl|0 |Denied |
|zeppelin|c3205_hadoop|READ_EXECUTE|execute|2019-05-13 22:10:47.288|-1 |/ranger/audit/hdfs |/ranger/audit/hdfs |hadoop-acl|0 |Denied |
|dav |c3205_hadoop|EXECUTE |execute|2019-05-13 23:57:49.410|-1 |/ranger/audit/hiveServer2/20190513/hiveServer2_ranger_audit_c3205-node3.hwx.local.log|/ranger/audit/hiveServer2/20190513|hadoop-acl|0 |Denied |
|zeppelin|c3205_hive |USE |_any |2019-05-13 23:42:50.643|-1 |null |null |ranger-acl|0 |Denied |
|zeppelin|c3205_hive |USE |_any |2019-05-13 23:43:08.732|-1 |default |null |ranger-acl|0 |Denied |
|dav |c3205_hive |USE |_any |2019-05-13 23:48:37.603|-1 |null |null |ranger-acl|0 |Denied |
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+
4. You can also run SQL directly against the audit view if you so desire.
5. You may need to fine-tune your Spark interpreter in Zeppelin to meet your needs, e.g. SPARK_DRIVER_MEMORY, spark.executor.cores, spark.executor.instances, and spark.executor.memory. It helped to see what was happening by tailing the Zeppelin log for Spark:
tailf zeppelin-interpreter-spark2-spark-zeppelin-cluster1.hwx.log
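If the notebook gets permission errors reading the logs (they show up as READ_EXECUTE denials like the rows above), a quick sanity check from the command line is to list the audit tree directly. This is a suggested sketch, assuming the default /ranger/audit location and the zeppelin user from the prerequisite policy:
# run on a node with an HDFS client; on a kerberized cluster, kinit first as a principal covered by the policy
sudo -u zeppelin hdfs dfs -ls -R /ranger/audit | head -n 20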
- Find more articles tagged with:
- How-ToTutorial
- Ranger
- ranger-audit
- Security
- spark-sql
- spark2
- zeppelin-notebook
12-10-2018
07:29 PM
Thank you @Robert Levas @dvillarreal. Yes, I am using a newer version of Ambari and also tried FreeIPA, since OpenLDAP didn't seem to work at all with Kerberos. I followed the exact steps from https://community.hortonworks.com/articles/59645/ambari-24-kerberos-with-freeipa.html - everything seems to be working fine, but it fails when kerberizing the cluster. It is also important to note that I get the following errors:
DNS query for data2.testhdp.com. A failed: The DNS operation timed out after 30.0005660057 seconds
DNS resolution for hostname data2.testhdp.com failed: The DNS operation timed out after 30.0005660057 seconds
Failed to update DNS records.
Missing A/AAAA record(s) for host data2.testhdp.com: 172.31.6.79.
Missing reverse record(s) for address(es): 172.31.6.79.
I installed the server as:
ipa-server-install --domain=testhdp.com \
  --realm=TESTHDP.COM \
  --hostname=ldap2.testhdp.com \
  --setup-dns \
  --forwarder=8.8.8.8 \
  --reverse-zone=3.2.1.in-addr.arpa.
and the clients on each node as:
ipa-client-install --domain=testhdp.com \
  --server=ldap2.testhdp.com \
  --realm=TESTHDP.COM \
  --principal=hadoopadmin@TESTHDP.COM \
  --enable-dns-updates
Also, after doing the following step:
echo "nameserver ldap2.testhdp.com" > /etc/resolv.conf
my yum is broken and I need to revert it to make it work. Do you guys have any idea about this? I thought there was no need for DNS, as I have resolution of *.testhdp.com in my hosts file on all nodes.
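For reference, the failing lookups from the errors above can be reproduced outside of the IPA installers by querying the IPA DNS server directly with dig. This is only a suggested check, not part of the original post; the hostnames and IP are taken from the error messages:
# forward (A) record for the failing host, asked of the IPA server
dig @ldap2.testhdp.com data2.testhdp.com A
# reverse (PTR) record for its IP
dig @ldap2.testhdp.com -x 172.31.6.79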
10-15-2018
03:23 PM
My pleasure! @Jasper
03-19-2018
04:09 PM
Nice script. I've added a pull request (I'm just learning about these, so I hope I did that right) with some code that I've used to make my own Atlas scripts more flexible and to make the password invisible when you type it in. I hope you like it.
03-19-2018
01:21 PM
@dvillarreal Oops, I had missed those. Thanks for pointing me to the policy change/update traces/audits.
05-18-2017
12:42 PM
Hi @Qi Wang, Yes, it should be fixed in the next maintenance release. In the meantime, please use the workaround provided. Thanks,
03-30-2017
05:54 PM
1 Kudo
Yes Surya, it was one-way SSL.
03-06-2017
09:11 PM
@badr bakkou This would probably be best answered if you submitted it as a new question. Provide the gateway.log and gateway-audit.log outputs, the topology, and lastly the configuration string you are using with its associated output. Best regards, David
06-19-2018
08:14 PM
How to assign privileges to a group when it is created?
02-09-2017
08:16 PM
12 Kudos
AD admins may be busy, and you may happen to know the Ambari admin principal used for enabling Kerberos. How would you go about adding a principal to AD with this information and adding it to your Kerberos keytab? Below is one way to do it. Thanks to @Robert Levas for collaborating with me on this.
1. Create an LDIF file, ad_user.ldif. (Make sure there are no spaces at the end of any of these lines.)
dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
distinguishedName: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
cn: HTTP/loadbalancerhost
userAccountControl: 514
accountExpires: 0
userPrincipalName: HTTP/loadbalancerhost@HOST.COM
servicePrincipalName: HTTP/loadbalancerhost

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=host,DC=com
changetype: modify
replace: unicodePwd
unicodePwd::IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: modify
replace: userAccountControl
userAccountControl: 66048
Do not have spaces at the ends of the above lines or you will get an error like the following:
ldap_add: No such attribute (16)
additional info: 00000057: LdapErr: DSID-0C090D8A, comment: Error in attribute conversion operation, data 0, v2580
2. Create the Unicode password for the above principal using the password hadoopRocks123!. Replace the unicodePwd field in step 1:
[root@host1 ~]# echo -n '"hadoopRocks123!"' | iconv -f UTF8 -t UTF16LE | base64 -w 0
IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==
3. Add the account to AD:
[root@host1 ~]# ldapadd -x -H ldaps://sme-2012-ad.support.com:636 -D "test1@host.com" -W -f ad_user.ldif
Enter LDAP Password:
adding new entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=com"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
4. Test the account with kinit:
[root@host1 ~]# kinit HTTP/loadbalancerhost@HOST.COM
Password for HTTP/loadbalancerhost@HOST.COM:
[root@host1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/loadbalancerhost@HOST.COM
Valid starting Expires Service principal
02/09/17 19:02:33 02/10/17 19:02:33 krbtgt/HOST.COM@HOST.COM
renew until 02/09/17 19:02:33
5. Take it one step further if you need to add the principal to a keytab file
[root@host1 ~]# ktutil
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes128-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e arcfour-hmac-md5-exp
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des3-cbc-sha1
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des-cbc-md5
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: write_kt spenego.service.keytab
ktutil: exit
[root@host1 ~]# klist -ket spenego.service.keytab
Keytab name: FILE:lb.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes128-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes256-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (arcfour-hmac-exp)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des3-cbc-sha1)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des-cbc-md5)
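To confirm the keytab works end to end, you can kinit with it instead of a password. This is a suggested check (not part of the original write-up), using the keytab and principal from the steps above:
[root@host1 ~]# kdestroy
[root@host1 ~]# kinit -kt spenego.service.keytab HTTP/loadbalancerhost@HOST.COM
[root@host1 ~]# klist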
- Find more articles tagged with:
- Ambari
- How-ToTutorial
- Kerberos
- Ranger
- Security
01-19-2017
07:34 PM
Hi @Qi Wang, this should help you learn by example when it comes to configuring your Knox groups and how they relate to your ldapsearch. See Sample 4 specifically: https://cwiki.apache.org/confluence/display/KNOX/Using+Apache+Knox+with+ActiveDirectory Hope this helps.
01-04-2017
10:16 AM
1 Kudo
In your attached ambari.properties file (from another comment), your LDAP URL is:
authentication.ldap.primaryUrl=ldap://127.0.0.1:636
The ldapsearch host and port you used above are -h ldap://hortonworks.com -p 636; this should read -h 127.0.0.1 -p 636 (see the ldapsearch example below). Also, there are some issues with your authentication.ldap.primaryUrl value:
- Port 636 is the LDAPS port, so the URL should be ldaps://127.0.0.1:636. If the connection is not secure and you really want to use LDAP, then the port should probably be 389.
- 127.0.0.1 is a localhost IP address. Is the LDAP server on the same host as Ambari? If so, this may be OK; otherwise you should consider using the hostname of the actual host.
- If using LDAPS, Ambari will need to trust the SSL certificate provided by the LDAP server. To do this, the certificate needs to be imported into Ambari's truststore. See http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-security/content/configure_ambari_to_use_ldap_server.html
Hopefully some of this will help.
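To validate the corrected host and port before touching ambari.properties, you can run ldapsearch against the LDAPS endpoint. This is a suggested check, not from the original answer; the bind DN, search base, and filter below are placeholders to replace with your own values:
# prompts for the bind password (-W); confirms the LDAPS listener and credentials work
ldapsearch -H ldaps://127.0.0.1:636 -D "cn=admin,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(uid=someuser)"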
12-27-2016
07:51 PM
2 Kudos
PROBLEM: Some users may be associated with many groups, causing a very long list of groups to be passed through the REST API headers in Ranger and KMS.
ERROR: Error log from /var/log/ranger/kms/kms.log:
2016-12-01 14:04:12,048 INFO Http11Processor - Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396)
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
2016-12-01 14:04:12,074 INFO Http11Processor - Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396)
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
ROOT CAUSE: REST API calls are being made with large header sizes; in this case, users with a large number of groups exceed the web server's maxHttpHeaderSize.
SOLUTION:
In Ambari, go to Ranger Admin -> Configs -> Advanced tab -> Custom ranger-admin-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and provide the required value for the maxHttpHeaderSize attribute in the Value field.
Save the changes, then go to Ranger KMS -> Configs -> Advanced tab -> Custom ranger-kms-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and provide the required value for the maxHttpHeaderSize attribute in the Value field.
Save the changes and restart all Ranger and Ranger KMS services.
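For example, the custom property would look like the line below. The value itself is an assumption on my part, not from the original article; Tomcat's default maxHttpHeaderSize is 8 KB, so a larger value such as 32 KB gives headroom for long group lists:
ranger.service.http.connector.property.maxHttpHeaderSize=32768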
- Find more articles tagged with:
- Issue Resolution
- Ranger
- ranger-admin
- ranger-kms
- Security
12-25-2016
08:09 PM
SYMPTOM: Knox logs are filling up disk space.
ROOT CAUSE: Kerberos debug is turned on by default, causing the gateway.out file to grow rapidly.
RESOLUTION: To turn off Kerberos debug logging:
1. In Ambari, go to Knox -> Configs -> Advanced gateway-site.
2. Change the parameter sun.security.krb5.debug from true to false.
3. Restart Knox.
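To confirm the change took effect, the gateway.out file should stop growing after the restart. A quick check, assuming the default HDP log location for Knox:
ls -lh /var/log/knox/gateway.out
# re-run after a few minutes; the size should no longer be climbing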
- Find more articles tagged with:
- Issue Resolution
- issue-resolution
- Knox
- knox-gateway
- Security
12-25-2016
08:06 PM
SYMPTOM:
HBase gives a "KeyValue size too large" error when inserting large values.
ERROR:
java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1521)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.validatePut(BufferedMutatorImpl.java:147)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:134)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:105)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1050)
at org.apache.hadoop.hbase.rest.RowResource.update(RowResource.java:229)
at org.apache.hadoop.hbase.rest.RowResource.put(RowResource.java:318)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
at ...
ROOT CAUSE:
hbase.client.keyvalue.maxsize is set too low.
RESOLUTION:
Set hbase.client.keyvalue.maxsize=0, which disables the client-side KeyValue size check. Just be careful with this, as a very large KeyValue (>1-2 GB) could have performance implications.
- Find more articles tagged with:
- HBase
- Issue Resolution
- issue-resolution
12-25-2016
07:48 PM
Problem:
A time-to-live (TTL) for collection data was never set on the SolrCloud server, causing the disks on the nodes to fill up with too many Ranger audit documents.
Solution:
1. Delete the collection through the Solr API; the Ranger audit archive is stored in HDFS anyway. (The following cleared up disk space.)
http://<solr_host>:<solr_port>/solr/admin/collections?action=DELETE&name=collection
2. Configure Solr to apply a time-to-live.
a. Download each of the following configs from ZooKeeper: solrconfig.xml, schema.xml, and managed-schema:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/solrconfig.xml >/tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/schema.xml >/tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/managed-schema >/tmp/managed-schema
b. Add the following to solrconfig.xml:
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
<processor class="solr.DefaultValueUpdateProcessorFactory">
<str name="fieldName">_ttl_</str>
<str name="value">+90DAYS</str>
</processor>
<processor class="solr.processor.DocExpirationUpdateProcessorFactory">
<int name="autoDeletePeriodSeconds">86400</int>
<str name="ttlFieldName">_ttl_</str>
<str name="expirationFieldName">_expire_at_</str>
</processor>
<processor class="solr.FirstFieldValueUpdateProcessorFactory">
<str name="fieldName">_expire_at_</str>
</processor>
</updateRequestProcessorChain>
c. Add the following to schema.xml and managed-schema:
<field name="_expire_at_" type="tdate" multiValued="false" stored="true" docValues="true"/>
<field name="_ttl_" type="string" multiValued="false" indexed="true" stored="true"/>
d. Upload each edited file to ZooKeeper:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile ranger_audits/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile ranger_audits/configs/ranger_audits/schema.xml /tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile ranger_audits/configs/ranger_audits/managed-schema /tmp/managed-schema
3. Make sure the ranger_audits replicas have been removed from the Solr directories on the local filesystem of each node.
4. Lastly, issue the create command:
http://<host>:8983/solr/admin/collections?action=CREATE&name=ranger_audits&collection.configName=ranger_audits&numShards=2
MORE INFO:
Check out more Solr General info and Ambari-infra TTL info here: https://community.hortonworks.com/articles/63853/solr-ttl-auto-purging-solr-documents-ranger-audits.html
- Find more articles tagged with:
- disk
- Issue Resolution
- Ranger
- ranger_audit
- Security
- solrcloud
03-16-2017
10:39 AM
Hi, this bug can have consequences for Spark / YARN as well. We were encountering out-of-memory conditions running Spark jobs; no matter how much memory we assigned, we kept exhausting it completely. This behaviour actually disappeared when we applied the fix listed here. I'll post back when I know more about the root cause and the link between the issues. Regards, Christophe
12-23-2016
04:51 PM
3 Kudos
There was a failure during "Configure Ambari Identity", but the retry passed, so I thought it was not really a problem. I am sure the sudo rule is the problem. Will try again and let you know the outcome. Update: I fixed the sudo permission and got another error: "you must have a tty to run sudo". This turned out to be related to a sudo setting; using visudo to comment out requiretty fixed the problem:
visudo
#Defaults requiretty
12-20-2016
06:36 PM
3 Kudos
Hi, we have an official document for deploying HDP on VMs: https://hortonworks.com/wp-content/uploads/2014/02/1514.Deploying-Hortonworks-Data-Platform-VMware-vSphere-0402161.pdf. It has a reference link to the HVE (NodeGroup) feature, which includes details you may want to know.
12-13-2016
06:11 PM
1 Kudo
Hi David, please see this HCC post regarding Tableau 10 integration with Knox over ODBC.
12-03-2016
12:33 AM
2 Kudos
Hi Raja,
In Ranger 0.5 it is a lot more difficult to get the data from the DB directly. What I have done in the past is use the REST APIs to download the content and flatten out the data. You can capture the REST API calls from your browser's developer tools. For example,
JSON:
curl 'http://127.0.0.1:6080/service/plugins/policies?page=0&pageSize=25&startIndex=0&serviceType=hive&group=&_=1451414078895' -H 'Cookie: AMBARISESSIONID=bkhp0zgjk8be1a5hy438zyk7r; JSESSIONID=DACAF20F2DDF50BD05BFA82C37AE758F; clientTimeOffset=480' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36' -H 'Accept: application/json, text/javascript, */*; q=0.01' -H 'Referer: http://127.0.0.1:6080/index.html' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' --compressed
or XML:
curl 'http://127.0.0.1:6080/service/plugins/policies?page=0&pageSize=25&startIndex=0&serviceType=hive&group=&_=1451414078895' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'Cookie: AMBARISESSIONID=bkhp0zgjk8be1a5hy438zyk7r; JSESSIONID=DACAF20F2DDF50BD05BFA82C37AE758F; clientTimeOffset=480' -H 'Connection: keep-alive' --compressed
If you change serviceType= to other components, you can get the reports for those components as well. Once you have the data, you can use something like XPath to parse it. For example:
- All Hive databases in all my policies: /rangerPolicyList/policies/resources/entry[key='database']/value//values
- All users: /rangerPolicyList/policies/policyItems/users
- All groups: /rangerPolicyList/policies/policyItems/groups
Or you can use a parsing tool of your choice, for example http://www.convertcsv.com/json-to-csv.htm to convert the JSON file to CSV and import it into Excel. In HDP 2.5 you can download reports in Ranger to CSV/Excel.
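A simpler variant (a sketch of my own, not from the original reply) is to call the same endpoint with basic authentication instead of captured browser cookies and pull fields out with jq. The JSON paths mirror the XPath expressions above but may differ slightly between Ranger versions:
# list all users referenced in Hive policy items; admin credentials and host are placeholders
curl -s -u admin:admin 'http://127.0.0.1:6080/service/plugins/policies?serviceType=hive' \
  | jq -r '.policies[].policyItems[].users[]' | sort -u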
04-24-2018
09:04 PM
Keep in mind, the Taxonomy feature is still in Tech Preview (i.e. not recommended for production use) and is not supported. Taxonomy will be production ready (GA) in HDP 3.0.
06-08-2016
01:44 AM
5 Kudos
SYMPTOM: Specifying a valid group in a Ranger Knox policy results in a 403 authorization error.
ERROR: The result is a 403 Forbidden error.
ROOT CAUSE: Most likely the topology is not set up in Knox for LDAP groups to be passed to Ranger from the Knox plugin.
RESOLUTION: Make sure the following values are present and correct in the topology:
<!-- changes needed for group sync-->
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>OU=MyUsers,DC=AD-HDP,DC=COM</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.groupIdAttribute</name>
<value>cn</value>
</param>
ALTERNATIVE SOLUTION: Instead of getting the above LDAP group settings working, open up Knox authorization to everyone by using the 'Public' group value on the Knox policy, and then do authorization at the other service-level policies such as HDFS, Hive, HBase, etc.
DEBUG TECHNIQUES: The following Knox log setting should show you what is getting passed to Ranger from the Knox plugin. Modify gateway-log4j.properties as below, restart Knox, and review the Ranger Knox plugin log in the file ranger.knoxagent.log:
#Ranger Knox Plugin debug
ranger.knoxagent.logger=DEBUG,console,KNOXAGENT
ranger.knoxagent.log.file=ranger.knoxagent.log
log4j.logger.org.apache.ranger=${ranger.knoxagent.logger}
log4j.additivity.org.apache.ranger=false
log4j.appender.KNOXAGENT =org.apache.log4j.DailyRollingFileAppender
log4j.appender.KNOXAGENT.File=${app.log.dir}/${ranger.knoxagent.log.file}
log4j.appender.KNOXAGENT.layout=org.apache.log4j.PatternLayout
log4j.appender.KNOXAGENT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %L
log4j.appender.KNOXAGENT.DatePattern=.yyyy-MM-dd
- Find more articles tagged with:
- groups
- Issue Resolution
- issue-resolution
- Knox
- Ranger
- Security
06-01-2016
01:48 AM
SYMPTOM:
KMS returns a 500 error when decrypting files accessed from another realm over a one-way trust.
ERRORS:
Command line error:
[root@support ~]$ hdfs dfs -cat /zone_encr3/abc1.txt
cat: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 500, message: Internal Server Error
KMS Stack Trace error:
2016-05-04 15:44:21,677 ERROR [webservices-driver] - Servlet.service() for servlet [webservices-driver] in context with path [/kms] threw exception
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user06@HDP.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:377)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:347)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
ROOT CAUSE:
There is a one-way trust from another realm, and auth-to-local rules are not provided in the KMS configuration.
RESOLUTION:
The hadoop.kms.authentication.kerberos.name.rules property needs to contain the auth-to-local rules.
By default this property is set to DEFAULT. If you replace this value with the value of the auth_to_local property (core-site.xml) and restart the Ranger KMS service, a user from the other realm is able to decrypt the file successfully.
Copy the rules from core-site.xml (auth_to_local property) to "hadoop.kms.authentication.kerberos.name.rules" in KMS and restart the KMS service.
Note: when you paste the rules from auth_to_local into "hadoop.kms.authentication.kerberos.name.rules", the rules are pasted as space-separated values instead of newline-separated. This is fine.
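Purely as an illustration (the real value must be copied from your cluster's auth_to_local, so treat the rule below as an assumed example): a rule that maps the user06@HDP.COM principal from the stack trace above to a short name would look like:
RULE:[1:$1@$0](.*@HDP\.COM)s/@.*//
DEFAULT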
- Find more articles tagged with:
- auth-to-local
- Issue Resolution
- issue-resolution
- KMS
- ranger-kms
- rules
- solutions
06-01-2016
01:47 AM
2 Kudos
1. Make sure WebHBase (the HBase REST server) is running:
[root@ambari-server]# curl http://localhost:60080/version
rest 0.0.3 [JVM: Oracle Corporation 1.8.0_60-25.60-b23] [OS: Linux 2.6.32-504.el6.x86_64 amd64] [Server: jetty/6.1.26.hwx] [Jersey: 1.9]
2. If the HBase REST daemon is not running, you will need to start it:
[root@ambari-server]# /usr/hdp/current/hbase-client/bin/hbase-daemon.sh start rest -p 60080
starting rest, logging to /var/log/hbase/hbase-root-rest-ambari-server.support.com.out
[root@ambari-server security]# ps -ef | grep 60080
root 19147 1 0 20:17 pts/1 00:00:00 bash /usr/hdp/current/hbase-client/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-client/bin/../conf foreground_start rest -p 60080
root 19161 19147 28 20:17 pts/1 00:00:03 /usr/jdk64/jdk1.8.0_60/bin/java -Dproc_rest -XX:OnOutOfMemoryError=kill -9 %p -Dhdp.version=2.3.4.0-3485 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_client_jaas.conf -Djava.io.tmpdir=/tmp -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201605312017 -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-root-rest-ambari-server.support.com.log -Dhbase.home.dir=/usr/hdp/current/hbase-client/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.4.0-3485/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.4.0-3485/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.rest.RESTServer -p 60080 start
3. Test the WebHBase call through Knox (https://cwiki.apache.org/confluence/display/KNOX/Examples+HBase):
[root@ambari-slave2 topologies]# curl -ik -u admin:admin-password -H "Accept: text/xml" -X GET 'https://localhost:8443/gateway/default/hbase/version'
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=vs8e1emsk00h1va2vk5aeq2b9;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache
Content-Type: text/xml
Content-Length: 192
Server: Jetty(8.1.14.v20131031)
<?xml version="1.0" standalone="yes"?><Version JVM="Oracle Corporation 1.8.0_60-25.60-b23" REST="0.0.3" OS="Linux 2.6.32-504.el6.x86_64 amd64" Server="jetty/6.1.26.hwx" Jersey="1.9"></Version>
- Find more articles tagged with:
- connection
- HBase
- How-ToTutorial
- Knox
- Security
- tip
05-31-2016
07:42 PM
1 Kudo
1. The username and password input fields in Ranger should be the LDAP user and password authenticating through Knox.
2. Configure the admin topology to meet your configuration needs; by default it points to the demo LDAP server on the Knox server.
3. To test manually, log in to the URL directly from the Ranger server using the LDAP credentials: https://<knoxhost>:8443/gateway/admin/api/v1/topologies (see the curl example after this list).
4. Make sure the Ranger server trusts the Knox certificate being presented; if it doesn't, you will need to add the Knox gateway trust certificate to the Ranger/Java truststore.
5. Currently Knox service lookups don't work in Ranger, and a bug has been filed to fix this. As of HDP 2.4, only topology lookup in Ranger works.
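A command-line version of the manual test in step 3, run from the Ranger host (a suggested sketch; substitute your own Knox host and LDAP credentials). The -k flag skips certificate verification; keep in mind Ranger itself validates the certificate against the Java truststore mentioned in step 4, not the system CA bundle:
curl -ik -u <ldap-user>:<password> 'https://<knoxhost>:8443/gateway/admin/api/v1/topologies'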
- Find more articles tagged with:
- How-ToTutorial
- Knox
- knox-gateway
- Ranger
- ranger-admin
- solutions
10-24-2016
11:26 PM
Hi Jennie, You can just pull the query out to a file and check how big the file is.