
Solr configured with ZooKeeper giving an error when creating a collection.

Guru

I have set up Solr in cloud mode with three different ZooKeeper servers and am running three Solr instances on three different servers. I have configured Solr to store index data on HDFS only. After starting all three instances I want to create a simple collection with two replicas and two shards.

But I am getting the following error during collection creation. It seems that somewhere solr.hdfs.home is getting the wrong value for the HDFS location, but I have not been able to find where this property is set.
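For example, something like the following should show where the property is referenced in the configset on disk (the path is the configset used in the create command below):

# search the configset's solrconfig.xml for references to solr.hdfs.home
grep -n "solr.hdfs.home" /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf/solrconfig.xml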

Command to start Solr in Cloud mode on all three servers:

[solr@m1 solr]$ bin/solr start -c -z m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181 -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=hdfs://HDPTSTHA/user/solr

Started Solr server on port 8983 (pid=1495). Happy searching!

[solr@m1 solr]$ bin/solr status

Found 1 Solr nodes:

Solr process 1495 running on port 8983

{
  "solr_home":"/opt/lucidworks-hdpsearch/solr/server/solr/",
  "version":"5.2.1 1684708 - shalin - 2015-06-10 23:20:13",
  "startTime":"2016-07-19T09:21:03.245Z",
  "uptime":"0 days, 0 hours, 0 minutes, 6 seconds",
  "memory":"83.1 MB (%16.9) of 490.7 MB",
  "cloud":{
    "ZooKeeper":"m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181",
    "liveNodes":"3",
    "collections":"0"}}

Command to create collection :

[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c labs -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n labs -s 2 -rf 2

Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181

Re-using existing configuration directory labs

Creating new collection 'labs' using command:

http://192.168.56.41:8983/solr/admin/collections?action=CREATE&name=labs&numShards=1&replicationFact...

{
  "responseHeader":{
    "status":0,
    "QTime":13353},
  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://192.168.56.41:8983/solr: Error CREATEing SolrCore 'labs_shard1_replica1': Unable to create core [labs_shard1_replica1] Caused by: Uri without authority: hdfs:/m1.hdp22:8020/user/solr/labs/core_node1/data"}}

Dir structure :

[solr@m1 data_driven_schema_configs_hdfs]$ cd conf/

[solr@m1 conf]$ ll

total 148

-rw-r--r-- 1 solr hadoop 3974 Jul 14 01:34 currency.xml

-rw-r--r-- 1 solr hadoop 1348 Jul 14 01:34 elevate.xml

drwxr-xr-x 2 solr hadoop 4096 Jul 14 01:34 lang

-rw-r--r-- 1 solr hadoop 55543 Jul 14 01:34 managed-schema

-rw-r--r-- 1 solr hadoop 308 Jul 14 01:34 params.json

-rw-r--r-- 1 solr hadoop 873 Jul 14 01:34 protwords.txt

-rw-r--r-- 1 solr hadoop 62546 Jul 19 05:19 solrconfig.xml

-rw-r--r-- 1 solr hadoop 781 Jul 14 01:34 stopwords.txt

-rw-r--r-- 1 solr hadoop 1119 Jul 14 01:34 synonyms.txt

[solr@m1 conf]$ pwd

/opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf
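(A quick way to double-check the HDFS location passed as solr.hdfs.home, using the nameservice from the start command above, would be something like this:)

# confirm the intended Solr home directory exists under the HA nameservice
hdfs dfs -ls hdfs://HDPTSTHA/user/solr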

1 ACCEPTED SOLUTION

Super Collaborator

You have HDFS defined in two places: on the command line and also in solrconfig.xml. I don't understand the one on the command line, since it does not include a port and HDPTSTHA does not look like a hostname (though it could be one). You might try temporarily changing the one in your solrconfig.xml to something bogus to see if it affects the reported error.
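For reference, the HDFS location in solrconfig.xml is normally set on the directory factory; a minimal sketch (values here are illustrative, reusing the nameservice from your start command) looks roughly like this:

<!-- illustrative HdfsDirectoryFactory settings; real values must match your cluster -->
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://HDPTSTHA/user/solr</str>
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
</directoryFactory>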

Also, the create command says "Re-using existing configuration directory labs", which makes me wonder if it is reusing what's already in ZooKeeper, and perhaps that copy does not match the one on your local filesystem. The error reported has only one slash after "hdfs:". Use Solr's zkcli.sh tool (which is different from the one that comes with ZooKeeper) to get the contents of what's there; you can do a getfile to fetch a file or an upconfig to replace/update the config in ZooKeeper. Remember that Solr adds "/solr" to the root except in embedded ZK mode.
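For example, a sketch of pulling the stored copy out of ZooKeeper and comparing it with the one on disk (assuming the configset is named labs, matching the collection):

# fetch the solrconfig.xml actually stored in ZooKeeper for the labs configset
/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost m1.hdp22:2181 -cmd getfile /configs/labs/solrconfig.xml /tmp/solrconfig-in-zk.xml

# compare it with the copy on the local filesystem
diff /tmp/solrconfig-in-zk.xml /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf/solrconfig.xml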


6 REPLIES


Guru

Thanks a lot @james.jones. I tried to download the config before pushing the new one, but I am getting the error below. Can you please help me with how to download it?

[root@m1 ~]# /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181 -cmd downconfig -confdir /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -confname myconf

/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts

INFO - 2016-07-19 09:05:38.706; org.apache.solr.common.cloud.SolrZkClient; Using default ZkCredentialsProvider

INFO - 2016-07-19 09:05:38.732; org.apache.zookeeper.Environment; Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT

INFO - 2016-07-19 09:05:38.732; org.apache.zookeeper.Environment; Client environment:host.name=m1.hdp22

INFO - 2016-07-19 09:05:38.734; org.apache.zookeeper.Environment; Client environment:java.version=1.7.0_75

INFO - 2016-07-19 09:05:38.734; org.apache.zookeeper.Environment; Client environment:java.vendor=Oracle Corporation

s-hdpsearch/solr/server/scripts/cloud-scripts/../../lib/ext/slf4j-log4j12-1.7.7.jar:/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/../../lib/ext/jul-to-slf4j-1.7.7.jar

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:java.io.tmpdir=/tmp

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:java.compiler=<NA>

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:os.name=Linux

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:os.arch=amd64

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:os.version=2.6.32-504.el6.x86_64

INFO - 2016-07-19 09:05:38.737; org.apache.zookeeper.Environment; Client environment:user.name=root

INFO - 2016-07-19 09:05:38.738; org.apache.zookeeper.Environment; Client environment:user.home=/root

INFO - 2016-07-19 09:05:38.738; org.apache.zookeeper.Environment; Client environment:user.dir=/root

INFO - 2016-07-19 09:05:38.741; org.apache.zookeeper.ZooKeeper; Initiating client connection, connectString=m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181 sessionTimeout=30000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@259709b1

INFO - 2016-07-19 09:05:38.759; org.apache.zookeeper.ClientCnxn$SendThread; Opening socket connection to server w1.hdp22/192.168.56.51:2181. Will not attempt to authenticate using SASL (unknown error)

INFO - 2016-07-19 09:05:38.760; org.apache.solr.common.cloud.ConnectionManager; Waiting for client to connect to ZooKeeper

INFO - 2016-07-19 09:05:38.763; org.apache.zookeeper.ClientCnxn$SendThread; Socket connection established to w1.hdp22/192.168.56.51:2181, initiating session

INFO - 2016-07-19 09:05:38.771; org.apache.zookeeper.ClientCnxn$SendThread; Session establishment complete on server w1.hdp22/192.168.56.51:2181, sessionid = 0x356021b7ffb0128, negotiated timeout = 30000

INFO - 2016-07-19 09:05:38.778; org.apache.solr.common.cloud.ConnectionManager; Watcher org.apache.solr.common.cloud.ConnectionManager@ce124a7 name:ZooKeeperConnection Watcher:m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None

INFO - 2016-07-19 09:05:38.779; org.apache.solr.common.cloud.ConnectionManager; Client is connected to ZooKeeper

INFO - 2016-07-19 09:05:38.779; org.apache.solr.common.cloud.SolrZkClient; Using default ZkACLProvider

INFO - 2016-07-19 09:05:38.799; org.apache.zookeeper.ClientCnxn$EventThread; EventThread shut down

INFO - 2016-07-19 09:05:38.799; org.apache.zookeeper.ZooKeeper; Session: 0x356021b7ffb0128 closed

Exception in thread "main" java.io.IOException: Error downloading files from zookeeper path /configs/myconf to /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf

at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:107)

at org.apache.solr.common.cloud.ZkConfigManager.downloadConfigDir(ZkConfigManager.java:131)

at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:233)

Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /configs/myconf

at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)

at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)

at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)

at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:328)

at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:325)

at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)

at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:325)

at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:92)

... 2 more
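(The NoNode error above suggests there is simply no configset named myconf in ZooKeeper; since the collection was created with -n labs, the configset is probably stored under /configs/labs instead. A sketch of listing what is actually there:)

# list the znode tree and filter for configset paths
/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost m1.hdp22:2181 -cmd list | grep "/configs/"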

Guru

@james.jones: Can you also please let me know how to upload my new solrconfig.xml to all the ZooKeeper nodes?

Super Collaborator

@Saurabh Kumar You need to add "/solr" to the end of your ZooKeeper host:port, like this (you probably only need to list one of the ZooKeepers for the command):

./zkcli.sh -cmd upconfig -confdir /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -confname labs -z m1.hdp22:2181/solr

That command will upload the conf directory. I'd also suggest trying the list command ("-cmd list") to see what's in ZooKeeper. It has been a while since I have used it and I can't try it at the moment.

Super Collaborator

I may have been wrong about adding "/solr" to your ZooKeepers. I know I had to do that somewhere, but I guess it was when starting Solr from the command line without the "bin/solr start" command.

So, you can re-upload your config directory to a configset named "lab". It will create or overwrite the current configset (which is just a copy of your conf directory in ZooKeeper). The default configset name is the same as your collection name.

./zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -confname lab

If your configset is called "lab", then do this to show the contents of the solrconfig.xml in ZooKeeper:

./zkcli.sh -zkhost localhost:2181 -cmd get /configs/lab/solrconfig.xml

I recommend running the list command, which dumps everything in ZooKeeper: it doesn't just list the files, it prints their contents too. That's a bit much, so pipe it to "less" and then search for your collection name as you would in vi (with / and ? to search). Then you'll see the path to your configs.

./zkcli.sh -zkhost localhost:2181 -cmd list |less

You will see something like this (my collection is called testcoll in this example):

   /configs/testcoll/solrconfig.xml (0)
   DATA: ...supressed...
   /configs/testcoll/lang (38)
    /configs/testcoll/lang/contractions_ga.txt (0)
    DATA: ...supressed...
    /configs/testcoll/lang/stopwords_hi.txt (0)
    DATA: ...supressed...
    /configs/testcoll/lang/stopwords_eu.txt (0)
    DATA: ...supressed...
    /configs/testcoll/lang/stopwords_sv.txt (0)
    DATA: ...supressed...
    /configs/testcoll/lang/contractions_it.txt (0)

I hope that helps.

Guru

@james.jones Thanks a lot, it helped me a lot. I have successfully pushed the current config to ZooKeeper.