Member since: 09-15-2015
Posts: 457
Kudos Received: 507
Solutions: 90
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 15578 | 11-01-2016 08:16 AM |
| 10980 | 11-01-2016 07:45 AM |
| 8369 | 10-25-2016 09:50 AM |
| 1891 | 10-21-2016 03:50 AM |
| 3708 | 10-14-2016 03:12 PM |
04-06-2016 06:07 AM
@Jagdish Saripella Is this the connection test that is included in the Ranger UI when you set up a new Solr service? To be honest, I have never tried that, but my guess is that this connection test does not work with SolrCloud.
03-23-2016 04:58 PM
3 Kudos
Make sure the Audit Source is set to DB in Ambari (see the Ranger configuration). Also, could you check whether the database (MySQL?) contains any audit entries?
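A quick way to check the audit database (just a sketch; the user, database, and table names below are the typical HDP defaults and may differ in your setup):

mysql -u rangerlogger -p ranger_audit -e "SELECT COUNT(*) FROM xa_access_audit;"

If that count stays at zero while you are running test queries, the problem is on the audit-write side rather than in the Ranger UI.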
03-19-2016 08:21 AM
1 Kudo
Take a look at this article https://community.hortonworks.com/articles/15159/securing-solr-collections-with-ranger-kerberos.html, especially the following section:
--------
Since all Solr data will be stored in the Hadoop filesystem, it is important to adjust the time Solr takes to shut down or "kill" the Solr process (whenever you execute "service solr stop/restart"). If this setting is not adjusted, Solr will try to shut down the process, and because that takes a bit more time when using HDFS, Solr will simply kill the process and, most of the time, lock the Solr indexes of your collections. If the index of a collection is locked, the following exception is shown after the startup routine: "org.apache.solr.common.SolrException: Index locked for write". Increase the sleep time from 5 to 30 seconds in /opt/lucidworks-hdpsearch/solr/bin/solr:
sed -i 's/(sleep 5)/(sleep 30)/g' /opt/lucidworks-hdpsearch/solr/bin/solr
--------
You can try the following solution: make sure nobody is writing to the Solr index, remove all write.lock files in your HDFS Solr folder (/solr/), and restart Solr. Also make sure the solr user, or whichever user is running your Solr instance, is able to remove the write.lock files.
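A minimal sketch of that cleanup, assuming /solr is the Solr root directory in HDFS and the instance runs as the solr user (both are assumptions; adjust paths and user to your setup):

# list any leftover lock files under the Solr HDFS directory
sudo -u solr hdfs dfs -ls -R /solr | grep write.lock
# remove the lock for the affected core (the path is a placeholder)
sudo -u solr hdfs dfs -rm /solr/<collection>/<core>/data/index/write.lock
# then restart Solr
service solr restart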
03-19-2016 08:17 AM
2 Kudos
I just installed a new cluster on HDP 2.3.4 with Solr 5.2.1, including Kerberos + the Ranger Solr Plugin, following this article: https://community.hortonworks.com/articles/15159/securing-solr-collections-with-ranger-kerberos.html
However, when I enable the Ranger Solr Plugin and restart Solr, I see the following error:
632300 [Thread-15] WARN org.apache.ranger.plugin.util.PolicyRefresher [ ] – cache file does not exist or not readble 'null'
662301 [Thread-15] ERROR org.apache.ranger.plugin.util.PolicyRefresher [ ] – PolicyRefresher(serviceName=null): failed to refresh policies. Will continue to use last known version of policies (-1)
com.sun.jersey.api.client.ClientHandlerException: java.lang.IllegalArgumentException: URI is not absolute
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:151)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:680)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:507)
at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:73)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:205)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:175)
at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:154)
Caused by: java.lang.IllegalArgumentException: URI is not absolute
at java.net.URI.toURL(URI.java:1088)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:159)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
... 8 more
This is what it looks like on HDP 2.3.2:
INFO - 2016-03-15 16:54:50.478; [ ] org.apache.ranger.plugin.util.PolicyRefresher; PolicyRefresher(serviceName=mycluster_solr): found updated version. lastKnownVersion=-1; newVersion=61
On 2.3.4, the serviceName is not replaced by the actual repository name that was set in the install.properties file; on 2.3.2, the serviceName is replaced by the repository name <clustername>_solr.
Looks like a bug :(
Anyone seen this issue before? Any possible workaround?
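One thing worth double-checking (a hedged suggestion, not a confirmed fix): serviceName=null and "URI is not absolute" both suggest the plugin is not picking up its configuration at all. Assuming the standard Ranger Solr plugin layout, the values from install.properties should end up in ranger-solr-security.xml on the Solr node (host and file path below are placeholders):

# install.properties values used when enabling the Ranger Solr plugin
POLICY_MGR_URL=http://<ranger-admin-host>:6080   # must be an absolute URL
REPOSITORY_NAME=<clustername>_solr               # must match the repository name in Ranger Admin

# the same values should appear in ranger-solr-security.xml as
# ranger.plugin.solr.policy.rest.url and ranger.plugin.solr.service.name
grep -A1 "ranger.plugin.solr" <path-to>/ranger-solr-security.xml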
03-15-2016 08:21 AM
2 Kudos
If you have enabled Kerberos, your connection string should look as follows: !connect jdbc:hive2://<hiveserver host>:<port>/default;principal=hive/_HOST@<REALM>
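A quick usage sketch from the command line (host, port, and realm are placeholders, and it assumes you already have a valid Kerberos ticket; 10000 is the usual HiveServer2 default port):

kinit your_user@<REALM>
beeline -u "jdbc:hive2://<hiveserver host>:10000/default;principal=hive/_HOST@<REALM>"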
03-14-2016 05:17 PM
3 Kudos
It looks like your user does not have the right permissions to access the Tez View. That's what it looks like when I am logged in as the admin user. Go to Manage Ambari (user dropdown menu) -> Views (left side navigation) -> Tez View -> add your user "maria_dev" to the list of allowed users ("Grant permission to these users").
03-14-2016 09:26 AM
4 Kudos
One way is to use Chef (OS + DBs + Partitions + ...) in combination with Ambari Blueprints (Cluster + Kerberos/Security). There are already some sample cookbooks out there that you can use as a starting point, e.g. https://github.com/bloomberg/chef-bcpc or https://supermarket.chef.io/cookbooks/hadoop
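For the Blueprints part, a minimal sketch of the REST flow (the Ambari host, credentials, and JSON file names are placeholders; adjust to your environment):

# register the blueprint, then create a cluster from a cluster-creation template
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
     -d @blueprint.json http://<ambari-host>:8080/api/v1/blueprints/my-blueprint
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
     -d @cluster-template.json http://<ambari-host>:8080/api/v1/clusters/mycluster

Chef then handles everything below Ambari (OS packages, databases, partitions), while the blueprint describes the cluster topology and configuration.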
03-14-2016 08:12 AM
2 Kudos
This might help: take a look at the post from @Lester Martin at https://martin.atlassian.net/wiki/pages/viewpage.action?pageId=27885570
03-07-2016 07:04 AM
1 Kudo
Put the Solr instances on the slave/DataNodes. Regarding configuration, see https://community.hortonworks.com/articles/15159/securing-solr-collections-with-ranger-kerberos.html; although it's focused on securing Solr with Kerberos and Ranger, there is also a small section about the installation.