
Cloudera Search: how do HDFS replicas and shard replicas work together to achieve fault tolerance in reads and writes?

Explorer

Hi all,
I would like some clarification about the Cloudera Search architecture, in particular the way it manages replicas.
I know that indexes are stored in HDFS, and that HDFS has its own replication factor, so a directory is replicated N times across the cluster.
I also know that with sharding you can split an index into pieces stored in different directories on HDFS, and that replicas likewise add more directories containing copies of those index pieces.

I would like to know:
- Why do I need to add collection replicas if I already have HDFS replication?
- In an N-node cluster without collection replicas, what is the behaviour if a node goes down? How can I read and write the index piece owned by the corresponding shard?
- What is the best way to save storage without leveraging both HDFS replicas and shard replicas?
- How can I achieve high availability in Cloudera Search?

I have never found documentation or a clear explanation of this topic.
Thank you so much for your support,
Stefano

5 REPLIES

Expert Contributor

Stefano,

 

> Why do I need to add collection replicas if I already have HDFS replication?

Queries are issued against indexes (shards), not against HDFS block replicas. That is why you need shards and shard replicas in SolrCloud even though HDFS already replicates the underlying blocks: HDFS replication protects the index files on disk, while shard replicas keep the index servable when a Solr node fails. See the following link as well.

http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/search_glossary.html
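For illustration, here is how shard and replica counts are chosen at collection-creation time with solrctl in Cloudera Search (the collection name and the counts below are made-up values, not a recommendation):

    # Create a collection with 3 shards and 2 replicas per shard (6 cores total)
    solrctl collection --create myCollection -s 3 -r 2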


> In an N-node cluster without collection replicas, what is the behaviour if a node goes down? How can I read and write the index piece owned by the corresponding shard?

If you have only one replica for a given shard, that shard becomes unsearchable (and unwritable) once the node hosting the replica goes down.
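One related detail, not from the original reply but standard SolrCloud behaviour: by default a distributed query fails outright if any shard is unreachable, and the shards.tolerant parameter lets you accept partial results from the shards that are still up. The host and collection names here are placeholders:

    # Return partial results instead of an error when some shards are down
    curl 'http://solr-host:8983/solr/myCollection/select?q=*:*&shards.tolerant=true'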


> What is the best way to save storage without leveraging both HDFS replicas and shard replicas?

It depends on your requirements. It is a trade-off between saving storage and keeping fault tolerance and reliability.


> How can I achieve high availability in Cloudera Search?

This is covered by the following guide:

http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/search_ha_proxy.html
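The guide describes putting a load balancer in front of the Solr nodes. As a rough sketch of the idea (the host names are placeholders and this is not a configuration taken from the guide), an HAProxy frontend for two Solr nodes could look like:

    listen solr
        bind 0.0.0.0:8983                  # clients talk to the proxy on the usual Solr port
        mode http
        balance roundrobin                 # spread requests across healthy Solr nodes
        option httpchk GET /solr           # mark a node down when Solr stops responding
        server solr1 solr-node1.example.com:8983 check
        server solr2 solr-node2.example.com:8983 check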

Explorer

Thank you very much for your quick and precise answers.

 

I would also like to know:

- Is it possible to force no HDFS replication when I create a collection (if the HDFS replication factor has already been set to >= 2)?

- Are there best practices for adding replicas after I have created an n-sharded collection without replicas? Can you give me an official link?

 

Thank you,

Stefano


Expert Contributor

Stefano,

 

> Is it possible to force no HDFS replication when I create a collection (if the HDFS replication factor has already been set to >= 2)?

I don't think Solr can control the HDFS block replication factor. If you just want to reduce the number of block replicas, running "hdfs dfs -setrep" should work (please note that reliability decreases accordingly).
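For example (the HDFS path below is a placeholder; point it at wherever your collection's index actually lives):

    # Lower the replication factor of an index directory to 2 copies;
    # -w waits until HDFS finishes adjusting the replicas
    hdfs dfs -setrep -w 2 /solr/myCollection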

 

> Are there best practices for adding replicas after I have created an n-sharded collection without replicas? Can you give me an official link?

While we don't officially recommend the operation, I found the following answer on Stack Overflow.

http://stackoverflow.com/questions/18441893/add-shard-replica-in-solrcloud

Please note that this can be dangerous if done incorrectly.

Champion Alumni

I would suggest using a curl command to add the replica, rather than the Solr UI. This is the wiki reference on how you can do that using the Collections API.

 

https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api_addreplica
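As a sketch, a call to the ADDREPLICA action could look like the following (the collection, shard, and node names are placeholders for your own values):

    # Add a replica of shard1 on a specific Solr node
    curl 'http://solr-host:8983/solr/admin/collections?action=ADDREPLICA&collection=myCollection&shard=shard1&node=solr-node2:8983_solr'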

 

You may want to do this during a quiet time, as it puts extra I/O load on the system; again, it depends on what your cluster environment and index size look like.

 

Thanks,

Nishan

Explorer

Thank you so much for your advice

Stefano