What are the best practices/guidelines for Solr replication across data centers?


Hi,

What are the best practices/guidelines for Solr replication across data centers (primary and DR)? I did find the Cross Data Center Replication feature for Solr 6.0. Has anyone used it successfully in a production environment?

Thanks for looking.
Raj
1 ACCEPTED SOLUTION

Super Collaborator

@rbiswas, You may have read this already, but there's some good info here on what they describe as a "real world" production configuration using the new cross-data-center replication: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=62687462

Since this feature only came out in 6.0, which was released less than two months ago, there's probably been limited production use.

ALSO... not a best practice, but since long before SolrCloud existed, we used a brute-force method of cross-data-center replication for standby Solrs with the magic of rsync. You can reliably use rsync to copy indexes while they are being updated, but a bit of scripting is required.

I have only done this in non-cloud environments, but I'm pretty sure it can be done in cloud as well. It is crude, but it worked for years and uses some of the great features of Linux.

Example script, run in crontab from the DR site nodes:

#!/bin/bash
# ${data_dir} and ${primary_site_node} must already be set for your environment.

# Step 1 - create a backup first, assuming your current copy is good.
# Hard links (cp -l) make this near-instant and use almost no extra disk space.
rm -rf "${data_dir}.BAK"   # clear any previous backup so cp doesn't nest directories
cp -rl "${data_dir}" "${data_dir}.BAK"

# Step 2 - now copy from the primary site, retrying until rsync finishes cleanly
# (i.e., nothing changed or disappeared underneath it during the copy).
status=1
while [ "$status" -ne 0 ]; do
    rsync -a --delete "${primary_site_node}:${data_dir}/" "${data_dir}/"
    status=$?
done
echo "COPY COMPLETE!"

That script will create a local backup (instantly, via hard links rather than soft links or full copies) and then copy only the new files, deleting files from DR that have been deleted from the primary/remote side. If files disappear during the rsync copy, it will copy again until nothing changes during the rsync. This can be run from crontab, but it does need a bit of bullet-proofing (see the cron sketch below).

Simple. Crude. It works.
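
For reference, here is a minimal sketch of what the "bullet-proofing" and cron wiring might look like on a DR node. The script path, lock file, and log file names are hypothetical placeholders, not from the original post:

# Hypothetical crontab entry on a DR node: run the copy script every 30 minutes.
# flock ensures only one copy runs at a time; output is appended to a log for debugging.
*/30 * * * * /usr/bin/flock -n /var/lock/solr_dr_sync.lock /opt/scripts/solr_dr_sync.sh >> /var/log/solr_dr_sync.log 2>&1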


7 REPLIES

Rising Star

Falcon can be used to replicate Solr transaction logs and index. If the index is active, replication may fail and be automatically retried. Therefore, it's best to schedule replication for off-peak periods.


@cnormile In case of a disaster, if DR becomes primary, what needs to be changed? Is there any document elaborating on this? What happens if we schedule this for off-peak hours and the primary fails during peak hours? Will that data be lost?


@cnormile Which version of Falcon supports this?

Rising Star

For high availability with Solr, the best practice is probably to use SolrCloud. I believe with SolrCloud, you let Solr handle the replication by creating additional replicas of each shard. The Solr docs have more info (http://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-5.2.pdf). A rough example is sketched below.
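
As a rough illustration (the collection name, shard/replica counts, and host below are made up, not taken from the thread), creating a collection with redundant replicas via the Collections API looks something like this:

# Hypothetical example: create a collection with 2 shards and 2 replicas per shard,
# so SolrCloud maintains redundant copies of the index within the cluster.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2"

# A replica can also be added to an existing shard later.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1"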

Super Collaborator

Note that SolrCloud's replication is not intended to go across data centers, due to the volume of traffic and the dependency on ZooKeeper ensembles. However, the recently released 6.x line added a special replication mode for going across data centers.

https://issues.apache.org/jira/browse/SOLR-6273, which is based on this description: http://yonik.com/solr-cross-data-center-replication/

Basically, this is cross-cluster replication, which is different from the standard SolrCloud replication mechanism.
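
For anyone evaluating it, CDCR is configured in solrconfig.xml on the source cluster and then driven through a small HTTP API. A minimal sketch, assuming a collection called mycollection and placeholder host names (verify the exact actions and parameters against the Solr 6.x reference guide):

# Placeholder hosts/collection - check against the Solr 6.x reference guide.
# Start cross-data-center replication on the source collection...
curl "http://source-host:8983/solr/mycollection/cdcr?action=START"

# ...then check replication status and the queued-update backlog.
curl "http://source-host:8983/solr/mycollection/cdcr?action=STATUS"
curl "http://source-host:8983/solr/mycollection/cdcr?action=QUEUES"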



@james.jones if you can post this as an answer, I will accept it. Thanks.