Member since: 05-10-2016
26 Posts · 10 Kudos Received · 4 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4027 | 01-09-2019 06:17 PM
 | 2366 | 12-14-2018 07:49 PM
 | 1354 | 02-24-2017 02:57 PM
 | 6073 | 09-13-2016 04:52 PM
01-11-2018
06:30 PM
In Ambari, under Solr, there is a box for environment settings. I don't have Ambari in front of me right now, but you should be able to add a line like so: SOLR_OPTS="$SOLR_OPTS -javaagent...."
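As a sketch (the agent jar path here is a placeholder, not from the original post), the line appended to the Solr env template would look something like:

```shell
# Appended to the Solr env template in Ambari; the jar path is hypothetical.
# $SOLR_OPTS is referenced first so existing options are preserved.
SOLR_OPTS="$SOLR_OPTS -javaagent:/usr/share/java/myagent.jar"
```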
12-01-2017
05:17 PM
1 Kudo
This was expanded to a blog post with additional details: https://risdenk.github.io/2017/12/18/ambari-infra-solr-ranger.html

There are two things I would check:

1) Check the field caches by navigating to the Ambari Infra Solr UI: http://HOSTNAME:8886/solr/#/ranger_audits_shard1_replica1/plugins/cache?entry=fieldCache,fieldValueCache

2) If you take a heap dump, you can open it in Eclipse Memory Analyzer and check the biggest heap offenders.

My assumption is that the majority of the heap is being used by uninverting the _version_ field, since it is being used for sorting and other things rather than just indexing. This isn't a misuse on your part but a problem with how Ranger is using Solr. I was able to fix this by turning on DocValues for the _version_ field. I am currently working on opening a Ranger JIRA and posting to the Ranger user list to try to address this. Note that the change to DocValues requires you to delete and recreate the collection. We have been running with a 4GB heap with no issues; previously we needed a heap of almost 20GB to handle the uninverted cached values (this grew over time).
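A hedged sketch of the schema change (the field type name may differ in your managed-schema; this mirrors the stock Solr 5.x `_version_` definition, with only `docValues` added):

```xml
<!-- ranger_audits managed-schema: enabling docValues means sorting on
     _version_ reads column-oriented data from disk instead of uninverting
     the indexed terms onto the heap -->
<field name="_version_" type="long" indexed="true" stored="true" docValues="true"/>
```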
11-29-2017
02:21 PM
Is this infra-solr or a regular Solr instance? Assuming from the tags this is infra-solr. Are you putting Ranger audit logs in Infra Solr?
11-12-2017
04:48 AM
@Sean Roberts - Was Solr completely initialized when you were hitting it with curl? Did you restart Solr or reload the ranger_audits collection? One way to get a better error message:

http://hostname:8886/solr/ranger_audits_shard1_replica1/query?debug=query&q=*:*&distrib=false

This queries only a single shard and doesn't get bounced around between nodes. On my Ambari Infra server, if I reboot and issue queries before the collections are completely loaded, I get a "Request is a replay" error when hitting /solr/ranger_audits/, but if I hit a single shard I get this error message:

```json
{
  "responseHeader":{
    "status":503,
    "QTime":0,
    "params":{
      "q":"*:*",
      "debug":"query"}},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"no servers hosting shard: shard2",
    "code":503}}
```

The shard2 here could be any shard that is still initializing. You should also be able to see this in the Solr Admin UI -> Cloud and check which shards aren't green.
04-03-2017
09:04 PM
1 Kudo
I'm pretty sure the Capacity Scheduler permissions query against the Ambari API is incorrect. The change that introduced this: https://github.com/apache/ambari/commit/82ccf224f0142b0b63c13e6ac0d58d45f55dd5ab and the related JIRA: https://issues.apache.org/jira/browse/AMBARI-16866

The correct URL (returns the list of privileges):

http://AMBARI/api/v1/users/USERNAME/privileges?PrivilegeInfo/permission_name=AMBARI.ADMINISTRATOR|(PrivilegeInfo/permission_name.in(CLUSTER.ADMINISTRATOR,CLUSTER.OPERATOR)&PrivilegeInfo/cluster_name=CLUSTER)

The incorrect URL (returns a blank page):

http://AMBARI/api/v1/users/USERNAME?privileges/PrivilegeInfo/permission_name=AMBARI.ADMINISTRATOR|(privileges/PrivilegeInfo/permission_name.in(CLUSTER.ADMINISTRATOR,CLUSTER.OPERATOR)&privileges/PrivilegeInfo/cluster_name=CLUSTER)
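As a runnable sketch (the Ambari host, username, and cluster name are placeholders, standing in for the AMBARI/USERNAME/CLUSTER tokens above), the correct query can be built like so:

```shell
# Placeholders -- fill in your Ambari host, user, and cluster name.
AMBARI="ambari.example.com:8080"
USERNAME="admin"
CLUSTER="mycluster"

# Same predicate as the correct URL above, with the cluster name substituted.
FILTER="PrivilegeInfo/permission_name=AMBARI.ADMINISTRATOR|(PrivilegeInfo/permission_name.in(CLUSTER.ADMINISTRATOR,CLUSTER.OPERATOR)&PrivilegeInfo/cluster_name=${CLUSTER})"

# Note the resource is .../USERNAME/privileges?... -- the filter runs against
# the privileges sub-resource, not against the user resource itself.
URL="http://${AMBARI}/api/v1/users/${USERNAME}/privileges?${FILTER}"
echo "$URL"
# curl -s -u admin "$URL"   # uncomment against a live Ambari
```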
02-24-2017
02:57 PM
@Saloni Udani - I ran into the same thing. The issue is https://issues.apache.org/jira/browse/AMBARI-19258. I was able to patch resourceFilesKeeper.py. If you are going to use alerts you may also run into https://issues.apache.org/jira/browse/AMBARI-19283 which is harder to work around.
12-13-2016
08:31 PM
@Andrew Bumstead opened SOLR-8538. This was resolved as not a problem, since JDK 1.7.0_79 fixed the issue. If you use JDK 8 and Solr < 6.2.0, you will most likely have to manually update hadoop-common to 2.6.1+. The Solr 5.5.0 and 5.5.1 packages for HDP 2.3, 2.4, and 2.5 seem to ship with the original Hadoop 2.6.0 dependencies. If you don't upgrade the Hadoop dependencies under Solr to 2.6.1+, you will most likely get Kerberos ticket renewal issues, or you will have to follow the steps outlined below by @Jonas Straub to enable full Kerberos authentication.
11-04-2016
06:17 PM
1 Kudo
Would be good to link to the Apache Solr documentation for this specifically: https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface https://cwiki.apache.org/confluence/display/solr/Solr+JDBC+-+Apache+Zeppelin
10-17-2016
06:44 PM
2 Kudos
> Can we store index in local file system as well as in HDFS? Simultaneously.

Solr supports indices on both the local file system and HDFS; it just depends on the directory factory used for the collection. A single collection cannot span different file systems. What should be possible (I haven't tested it) is to have two collections (one on the local filesystem, one on HDFS) and then use Solr collection aliases to search both collections at once.

> Also can we set a custom job periodically to upload index from local file system to HDFS?

There is nothing different between an index on HDFS and a local index. Moving an index between the two can be done if you are careful to make sure the correct index ends up in the right location. If the index is moved improperly (i.e., shards don't line up), you will get bad results.
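The alias idea could be sketched with the Collections API CREATEALIAS action (the collection and alias names here are hypothetical, and as noted above I haven't tested aliasing across mixed filesystems):

```shell
# Hypothetical names: one local-disk collection, one HDFS-backed collection.
SOLR="http://localhost:8983/solr"
ALIAS="audit_all"
COLLECTIONS="audit_local,audit_hdfs"

# CREATEALIAS makes ${ALIAS} queryable as if it were a single collection
# spanning both underlying collections.
URL="${SOLR}/admin/collections?action=CREATEALIAS&name=${ALIAS}&collections=${COLLECTIONS}"
echo "$URL"
# curl -s "$URL"   # uncomment against a live SolrCloud
```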
09-13-2016
04:52 PM
Most likely this is https://issues.apache.org/jira/browse/RANGER-863. I would bet the user that has issues belongs to a lot of AD groups and therefore has a large Kerberos request header.