
When running Solr in clustered mode (SolrCloud), it has a runtime dependency on ZooKeeper, where it stores configs, coordinates leader election, tracks replica allocation, and so on. All in all, a whole tree of ZK nodes with sub-nodes gets created.

Deploying SolrCloud into a Hadoop cluster usually means re-using the centralized ZK quorum already maintained by HDP. Unfortunately, if not explicitly taken care of, SolrCloud will happily dump all of its ZK content into the ZK root, which really complicates things for an admin down the line.

If you need to clean up your ZK first, take a look at this how-to.

The solution is to put all SolrCloud ZK entries under a dedicated ZK node, a so-called chroot (e.g. /solr). Here's how to do it:
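The key convention, used in the start commands further down, is that the chroot suffix is appended once to the whole ZK connection string, after the last host:port pair, not to each host. A minimal sketch (the lake02-lake04 host names are just the examples used in this article):

```shell
# example ZK quorum from this article; substitute your own hosts
ZK_HOSTS="lake02:2181,lake03:2181,lake04:2181"
# append the chroot once, after the last host:port pair;
# it then applies to every host in the quorum
SOLR_ZK="${ZK_HOSTS}/solr"
echo "$SOLR_ZK"
```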

su - zookeeper
cd /usr/hdp/current/zookeeper-client/bin/
# point it at the ZK quorum (a single ZK server, e.g. localhost, is also fine)
./zkCli.sh -server lake02:2181,lake03:2181,lake04:2181
# in zk shell now
# note: this version of zkCli requires a data argument,
# so the empty brackets are used as placeholder data
create /solr []
# verify the ZK node has been created; ls must not complain that the node doesn't exist
ls /solr


# back in the OS shell
# start SolrCloud and tell it which ZK node to use
su - solr
cd /opt/lucidworks-hdpsearch/solr/bin/

# note how we append '/solr' to the ZK connection string.
# the chroot is added once, after the _last_ ZK host address,
# and applies to the whole quorum.
# this keeps things organized and doesn't pollute the root ZK tree with Solr artifacts
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr

# alternatively, if you have multiple IPs on your Hadoop nodes and have
# issues accessing Solr UI and dashboards, try binding it to an address explicitly:
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr -h $HOSTNAME
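Once Solr is up, a quick sanity check is to list the chroot in the ZK shell again. A sketch of the interactive session (the sub-node names shown are typical for SolrCloud; the exact set depends on the Solr version):

```
# back in the zkCli shell (./zkCli.sh -server lake02:2181)
ls /solr
# SolrCloud should now have populated its own sub-tree here,
# e.g. nodes like live_nodes, collections, overseer
```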

Version history: revision 1 of 1, last updated 12-17-2015 03:05 PM.