Member since: 05-29-2017
- Posts: 408
- Kudos Received: 123
- Solutions: 9
My Accepted Solutions (titles not preserved in this archive)
| Title | Views | Posted |
|---|---|---|
| | 3275 | 09-01-2017 06:26 AM |
| | 2118 | 05-04-2017 07:09 AM |
| | 1915 | 09-12-2016 05:58 PM |
| | 2641 | 07-22-2016 05:22 AM |
| | 2054 | 07-21-2016 07:50 AM |
07-22-2016
05:22 AM
I solved it by applying the following steps:
- Stop all Solr instances
- Stop all ZooKeeper instances
- Start all ZooKeeper instances
- Start the Solr instances one at a time
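The restart order above can be sketched as shell commands. The `zkServer.sh` path is an assumption for an HDP-style install, and the stop/start steps must be run on every node in the cluster; adjust paths and the ZooKeeper host list for your environment.

```shell
# On each Solr node: stop Solr first (assumed bin/solr location)
bin/solr stop -all

# On each ZooKeeper node: stop, then start, ZooKeeper
# (path is an assumption for HDP installs)
/usr/hdp/current/zookeeper-server/bin/zkServer.sh stop
/usr/hdp/current/zookeeper-server/bin/zkServer.sh start

# Then start Solr nodes one at a time, pointing at the ZooKeeper ensemble
bin/solr start -c -z m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181
```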
07-21-2016
10:13 AM
I have set up my cluster in cloud mode with 3 nodes and it was running fine; I had also created two collections. But when I enabled the Ranger plugin, I had to restart all the nodes. I stopped all the Solr instances and started them again, and then the following error occurred. I can see the Solr UI and my collection, but I can't see the cluster status on the command line because of the error below.

```
[solr@m1 solr]$ bin/solr stop -all
Sending stop command to Solr running on port 8983 ... waiting 5 seconds to allow Jetty process 15060 to stop gracefully.
[solr@m1 solr]$ bin/solr start -c -z m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181 -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs
Started Solr server on port 8983 (pid=24214). Happy searching!
[solr@m1 solr]$ bin/solr status
Found 1 Solr nodes:
Solr process 24214 running on port 8983
Failed to get system information from http://localhost:8983/solr/ due to:
org.apache.solr.client.solrj.SolrServerException: clusterstatus the collection time out:180s
	at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:537)
	at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:471)
	at org.apache.solr.util.SolrCLI$StatusTool.getCloudStatus(SolrCLI.java:721)
	at org.apache.solr.util.SolrCLI$StatusTool.reportStatus(SolrCLI.java:704)
	at org.apache.solr.util.SolrCLI$StatusTool.runTool(SolrCLI.java:662)
	at org.apache.solr.util.SolrCLI.main(SolrCLI.java:215)
```
I went through the following JIRA, which says it is a bug, but I am not sure how to resolve the issue, so can someone please help me get it fixed? https://issues.apache.org/jira/browse/SOLR-7018 Note: I am using solr-spec-version 5.2.1.
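As a diagnostic step (my suggestion, not from the thread), the same cluster-status call that `bin/solr status` times out on can be issued directly against the Collections API, which sometimes surfaces a clearer error; the host and port below are taken from the session above.

```shell
# Query cluster status directly via the Collections API
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"
```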
Labels:
- Apache Solr
07-21-2016
09:15 AM
@Jonas Straub Can we configure Ranger on Solr without having Kerberos in our cluster?
07-21-2016
07:50 AM
I solved it by setting `<bool name="solr.hdfs.blockcache.direct.memory.allocation">false</bool>` in the solrconfig.xml file.
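For context, this flag belongs inside the `HdfsDirectoryFactory` block of solrconfig.xml. A minimal sketch follows; only the `direct.memory.allocation` line comes from the post, and the `solr.hdfs.home` value and the other cache setting are illustrative assumptions.

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <!-- HDFS location for indexes; value here is an assumed example -->
  <str name="solr.hdfs.home">hdfs://m1.hdp22:8020/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
  <!-- Allocate the block cache on the JVM heap instead of off-heap (direct) memory -->
  <bool name="solr.hdfs.blockcache.direct.memory.allocation">false</bool>
</directoryFactory>
```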
07-21-2016
07:46 AM
@Ravi Hey Ravi, thanks. I solved it by changing the value of SOLR_HEAP to 1024 MB in /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh. Thanks once again for all your help.

```
SOLR_HEAP="1024m"
```

```
[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2
Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181
Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181

Creating new collection 'test' using command:
http://192.168.56.42:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=test

{
  "responseHeader":{
    "status":0,
    "QTime":8494},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":8338},
      "core":"test_shard1_replica1"}}}
```
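The heap change described above can be sketched as an edit to solr.in.sh; only the SOLR_HEAP value comes from the post, the comments are mine, and Solr must be restarted for the setting to take effect.

```shell
# /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh
# Raise the JVM heap so core creation no longer fails with "Java heap space"
SOLR_HEAP="1024m"
```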
07-21-2016
07:32 AM
@Ravi: Thanks a lot, that helped me avoid the direct memory issue, but now I have encountered another issue, so can you please help me with this as well?

```
[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2
Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181
Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181

Creating new collection 'test' using command:
http://192.168.56.41:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=test

{
  "responseHeader":{
    "status":0,
    "QTime":6299},
  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://192.168.56.41:8983/solr: Error CREATEing SolrCore 'test_shard1_replica1': Unable to create core [test_shard1_replica1] Caused by: Java heap space"},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":5221},
      "core":"test_shard2_replica1"}}}
```
07-21-2016
07:16 AM
@james.jones Thanks a lot, that helped me a lot. I have successfully pushed the current config to ZooKeeper.
07-20-2016
10:18 AM
@james.jones: Can you also please let me know how to upload my new solrconfig.xml to all the ZooKeeper nodes?
07-20-2016
10:05 AM
Team, when I create a collection with two shards and a replication factor of 2, I get a direct buffer memory issue. Can you please help me increase it, or work around it by setting some other property?

```
[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2
Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181
Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181

Creating new collection 'test' using command:
http://192.168.56.42:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=test

{
  "responseHeader":{
    "status":0,
    "QTime":4812},
  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://192.168.56.41:8983/solr: Error CREATEing SolrCore 'test_shard1_replica1': Unable to create core [test_shard1_replica1] Caused by: Direct buffer memory"},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":4659},
      "core":"test_shard2_replica1"}}}
```
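For background: with HdfsDirectoryFactory, the HDFS block cache allocates off-heap (direct) memory, which is capped separately from the JVM heap. A common mitigation (my suggestion, not confirmed in this thread) is to raise the direct-memory ceiling in solr.in.sh; the 2g value is illustrative. The alternative is to disable direct allocation for the block cache in solrconfig.xml.

```shell
# In solr.in.sh: raise the JVM's direct-memory limit (value is an example)
SOLR_OPTS="$SOLR_OPTS -XX:MaxDirectMemorySize=2g"
```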
Labels:
- Apache Solr
07-20-2016
10:01 AM
@Ravi Can you please help me set up buffer memory for my Solr cluster? I am getting the following error.

```
[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2
Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181
Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181

Creating new collection 'test' using command:
http://192.168.56.42:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=test

{
  "responseHeader":{
    "status":0,
    "QTime":4812},
  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://192.168.56.41:8983/solr: Error CREATEing SolrCore 'test_shard1_replica1': Unable to create core [test_shard1_replica1] Caused by: Direct buffer memory"},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":4659},
      "core":"test_shard2_replica1"}}}
```