Cloudera Search: How do HDFS replicas and shard replicas collaborate to achieve fault tolerance in reads and writes?
Labels: Cloudera Search, HDFS
Created on ‎02-10-2015 01:16 AM - edited ‎09-16-2022 02:21 AM
Hi all,
I would like some clarification about the Cloudera Search architecture, in particular the way it manages replicas.
I know that indexes are stored in HDFS, and HDFS has its own replication factor, so a directory is replicated N times across the cluster.
I also know that with sharding you can split an index into several pieces stored in different directories in HDFS, and that replicas likewise add more directories containing copies of those index pieces.
I would like to know:
- Why do I need to add collection replica nodes if I already have HDFS replicas?
- In an N-node cluster without collection replicas, what is the behaviour if a node goes down? How can I read and write the index piece owned by the corresponding shard?
- What is the best way to save storage without leveraging both HDFS replicas and shard replicas?
- How can I achieve high availability with Cloudera Search?
I have never found documentation or a clear explanation of this topic.
Thank you so much for your support,
Stefano
Created ‎02-10-2015 03:23 AM
Stefano,
> Why do I need to add collection replica nodes if I already have HDFS replicas?
Queries are issued against index shards, not against HDFS block replicas. HDFS replication protects the underlying data blocks, while shard replicas in SolrCloud provide robustness at the query-serving level; that is why you need shard replicas even though HDFS already replicates the blocks. See the following link as well.
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/search_glossary.html
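To make the distinction concrete, here is a rough sketch of creating a collection with both shards and shard replicas via the Solr Collections API (the hostname and collection name are hypothetical):

```shell
# Create a hypothetical collection "myColl" with 2 shards, each shard
# having 2 SolrCloud replicas. The index files for every replica are
# still stored in HDFS, where HDFS applies its own block-level
# replication factor on top of this.
curl "http://solr-host.example.com:8983/solr/admin/collections?action=CREATE&name=myColl&numShards=2&replicationFactor=2"
```

With replicationFactor=2, each shard stays searchable if one of its hosting nodes goes down, independently of the HDFS block replication underneath.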
> In an N-node cluster without collection replicas, what is the behaviour if a node goes down? How can I read and write the index piece owned by the corresponding shard?
If a shard has only one replica, that shard becomes unsearchable once the node hosting the replica goes down.
> What is the best way to save storage without leveraging both HDFS replicas and shard replicas?
It depends on your requirements; it is a trade-off between saving storage and keeping the system fault tolerant and reliable.
> How can I achieve high availability with Cloudera Search?
This is covered by the following guide:
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/search_ha_proxy.html
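For reference, the load-balancer setup that guide describes can be sketched roughly like this; the hostnames and ports below are hypothetical placeholders, not a tested configuration:

```
# /etc/haproxy/haproxy.cfg (fragment): round-robin client requests
# across the Solr nodes so a single node failure does not take down
# the search endpoint.
frontend solr_frontend
    bind *:8983
    default_backend solr_nodes

backend solr_nodes
    balance roundrobin
    server solr1 solr1.example.com:8983 check
    server solr2 solr2.example.com:8983 check
    server solr3 solr3.example.com:8983 check
```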
Created ‎02-10-2015 05:42 AM
Thank you very much for your quick and precise answers.
I would also like to know:
- Is it possible to force no HDFS replication when I create a collection (if the HDFS replication factor has already been set to >= 2)?
- Are there best practices for adding replicas after I have created an n-sharded collection without replicas? Can you give me an official link?
Thank you,
Stefano
Created ‎02-12-2015 05:05 AM
Stefano,
> Is it possible to force no HDFS replication when I create a collection (if the HDFS replication factor has already been set to >= 2)?
I don't think Solr can control the HDFS block replication factor. If you just want to reduce the number of block replicas, "hdfs dfs -setrep" should work (please note that reliability decreases accordingly).
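For example, assuming the collection's index lives under a hypothetical HDFS path, the block replication for that directory could be lowered like this:

```shell
# Lower the HDFS block replication factor to 2 for the index directory
# of a hypothetical collection; -R applies the change recursively.
hdfs dfs -setrep -R 2 /solr/myColl

# Verify: the replication factor appears in the listing output.
hdfs dfs -ls /solr/myColl
```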
> Are there best practices for adding replicas after I have created an n-sharded collection without replicas? Can you give me an official link?
While we don't officially recommend this operation, I found the following Stack Overflow answer.
http://stackoverflow.com/questions/18441893/add-shard-replica-in-solrcloud
Please note that this can be dangerous if done incorrectly.
Created ‎02-12-2015 01:06 PM
I would suggest using a curl command to add the replica rather than the Solr UI. This is the wiki reference on how to do that using the Collections API:
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api_addreplica
You may want to do this during a quiet time, as it puts extra I/O load on the system; again, it depends on your cluster environment and index size.
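As a rough sketch of that API call (the collection, shard, and node names below are hypothetical):

```shell
# Add one replica of shard1 of the hypothetical collection "myColl".
# The optional "node" parameter pins the new replica to a specific
# node; omit it to let Solr choose a node itself.
curl "http://solr-host.example.com:8983/solr/admin/collections?action=ADDREPLICA&collection=myColl&shard=shard1&node=solr2.example.com:8983_solr"
```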
Thanks,
Nishan
Created ‎02-18-2015 05:55 AM
Thank you so much for your advice.
Stefano
