When running Solr in clustered mode (SolrCloud), it has a runtime dependency on ZooKeeper, where it stores configs, coordinates leader election, tracks replica allocation, etc. All in all, there's a whole tree of ZK nodes created, with sub-nodes.

Deploying SolrCloud into a Hadoop cluster usually means re-using the centralized ZK quorum already maintained by HDP. Unfortunately, unless explicitly told otherwise, SolrCloud will happily dump all its ZK content at the ZK root, which really complicates things for an admin down the line.
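
For example, here's roughly what an "ls /" can look like on a shared HDP ZooKeeper after a root-level Solr deployment (illustrative output only; the non-Solr znodes depend on which HDP services are installed):

# illustrative: Solr artifacts tangled up with HDP services at the ZK root
ls /
# [aliases.json, clusterstate.json, collections, configs, live_nodes,
#  overseer, overseer_elect, hbase-unsecure, hiveserver2, rmstore, zookeeper]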

If you need to clean up your ZK first, take a look at this how-to.
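
In short, the cleanup boils down to recursively deleting the stale Solr znodes from the ZK root. A minimal sketch, assuming ZooKeeper 3.4's zkCli.sh (where rmr is the recursive delete; newer ZK versions use deleteall instead) and a typical SolrCloud 5.x znode layout. Stop Solr first, and double-check nothing else is using these znodes:

# inside zkCli.sh: remove Solr leftovers from the ZK root
rmr /aliases.json
rmr /clusterstate.json
rmr /collections
rmr /configs
rmr /live_nodes
rmr /overseer
rmr /overseer_elect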

The solution is to put all SolrCloud ZK entries under a dedicated ZK node, a so-called chroot (e.g. /solr). Here's how one does it:

su - zookeeper
cd /usr/hdp/current/zookeeper-client/bin/
 
# point it at the ZK quorum (a single ZK server is OK too, e.g. localhost)
./zkCli.sh -server lake02:2181,lake03:2181,lake04:2181
 
# in zk shell now
# note: the empty brackets (the znode's initial data) are _required_ here
create /solr []
 
# verify the znode has been created; this must not complain that the node doesn't exist
ls /solr
# a fresh node has no children, so expect an empty list: []

quit
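
The same znode can also be created non-interactively with Solr's own ZK CLI, which is handy for automation. A minimal sketch, assuming the Lucidworks HDP Search layout (the script path may differ in your install):

# one-liner equivalent of the interactive session above
/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh \
  -zkhost lake02:2181 -cmd makepath /solr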


# back in the OS shell
# start SolrCloud and tell it which ZK node to use
su - solr
cd /opt/lucidworks-hdpsearch/solr/bin/

# note how we append '/solr' to the ZK quorum address:
# the chroot goes once at the very _end_ of the connection string,
# after the last host:port, and applies to the whole quorum.
# this keeps things organized and doesn't pollute the root ZK tree with Solr artifacts
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr

# alternatively, if your Hadoop nodes have multiple IPs and you have
# issues accessing the Solr UI and dashboards, try binding Solr to an address explicitly:
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr -h $HOSTNAME
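
Once Solr is up, it's worth confirming that everything landed under the chroot. A quick sanity check (the "test" collection below is just an example name):

# still as the solr user, in the same bin/ directory
./solr status

# optionally, create a throwaway collection...
./solr create -c test

# ...then, back in zkCli.sh, all Solr artifacts should now sit under /solr,
# leaving the ZK root clean:
ls /solr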
