Created 11-16-2016 09:15 PM
Hi,
I wonder how replication works in Solr. One of the core_node data directories has crashed — will Solr automatically recover the lost data? If not, how do I do it manually? The replication factor is set to 2 for this collection.
I am still able to query without error messages, but sometimes it returns the correct total number of documents and sometimes it shows roughly 50% of them.
How do I correct this problem?
-Wing
Created 11-16-2016 10:48 PM
@Wing Lo - The solution may depend on what is actually wrong. It may be that the node is just out of memory. If that's the case, a restart may resolve the issue (but it could occur again).
If the index data is actually corrupt, you can take the bad node offline and the errors will stop, but you will then need to fix or replace the bad data. I have not used this technique myself, but you might look at this: https://support.lucidworks.com/hc/en-us/articles/202091128-How-to-deal-with-Index-Corruption
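For reference, the tool that article walks through is Lucene's CheckIndex. A rough sketch of using it, plus the SolrCloud-style alternative of replacing the replica outright — all paths, collection names, and replica names below are examples and will differ on your install:

```shell
# Check a suspect core's index with Lucene's CheckIndex tool.
# Stop the Solr node first so nothing else has the index open.
# The classpath and data directory are examples from a default install.
java -cp "server/solr-webapp/webapp/WEB-INF/lib/*" \
  org.apache.lucene.index.CheckIndex \
  /var/solr/data/mycollection_shard1_replica1/data/index

# WARNING: re-running CheckIndex with -exorcise permanently drops any
# unreadable segments, losing the documents they contained.

# With replicationFactor=2, another option is to delete the bad replica and
# add a fresh one, which resyncs from the healthy replica (names are examples):
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node2"
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1"
```

The replica-replacement route is usually safer than -exorcise when a healthy copy exists, since the new replica gets a full copy of the data rather than an index with segments removed.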
Created 11-17-2016 02:35 PM
Thanks for the suggestions. Everything seems to be working after I restarted Solr.
Created 11-17-2016 03:09 PM
Awesome. If it continues to happen, you'll need to figure out why you're running out of memory (assuming that's what was happening) — intermittent exceptions like the ones you saw are often a symptom of OOM errors. Solr loves memory, and a lot of factors can contribute to how much it uses. Sometimes giving the JVM more memory is the solution, but not always. Good luck.
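If it does turn out to be heap pressure, the JVM heap is set in bin/solr.in.sh or at startup. The values below are illustrative only — size the heap to your machine and index rather than copying them:

```shell
# In bin/solr.in.sh (example value, not a recommendation):
SOLR_HEAP="4g"

# Or as a one-off at startup:
bin/solr start -m 4g
```

Note that bigger is not always better: an oversized heap steals memory from the OS page cache that Solr relies on for fast index access, and can lengthen GC pauses.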