Support Questions

unable to create Solr collection / index

Explorer

Hi Team,

I've downloaded the CDH QuickStart VM and started to play with Hue (single node).

When I tried to create a Solr collection by running:

solrctl --zk quickstart.cloudera:2181/solr collection --create hue_solr -s 1

I got an error saying I should use --solr because the cluster can't detect the Solr URL automatically. I googled this error and found people talking about specifying the Solr URL as well as the ZooKeeper URL, so I ran:

solrctl --zk quickstart.cloudera:2181/solr --solr quickstart.cloudera:8983/solr collection --create hue_solr -s 1

and this time I got another error:

curl: option --negotiate: the installed libcurl version doesn't support this
curl: try 'curl --help' or 'curl --manual' for more information
Error: A call to SolrCloud WEB APIs failed:

I found this KB article https://www.cloudera.com/documentation/enterprise/5-7-x/topics/search_solrctl_examples.html talking about disabling the proxy, so I ran:

NO_PROXY='*' solrctl --zk quickstart.cloudera:2181/solr --solr quickstart.cloudera:8983/solr collection --create hue_solr -s 1

but I'm still getting the same error as above.

I come from a Windows background, so my Linux knowledge is a bit limited.

I tried to restart the Solr service, but this didn't help.

Any help would be appreciated.

Thanks in advance.


25 REPLIES

Expert Contributor

Hi,

CDH stores the Solr client configuration in /etc/solr/conf on nodes that are defined as Solr gateways (the QuickStart VM can be considered a Solr gateway node). With that client configuration available, you should not need to specify the --zk or --solr parameters.
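A quick way to confirm the client configuration is actually deployed (a sketch assuming the standard CDH layout, where the client config includes a solr-env.sh that tells solrctl which ZooKeeper ensemble to use):

  $ ls /etc/solr/conf

  $ cat /etc/solr/conf/solr-env.sh

The second command should show an exported SOLR_ZK_ENSEMBLE, e.g. quickstart.cloudera:2181/solr.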

 

I just did a quick test on the Quickstart VM and here's how I created my collection (collection1):

 

  $ solrctl collection --create collection1 -c predefinedTemplate

 

Note that I'm taking advantage of the predefinedTemplate instancedir that is already loaded into ZooKeeper. If you want to use your own instancedir, you can generate one locally and then upload it to ZooKeeper:

 

  $ solrctl instancedir --generate <collection>

  # Edit the <collection>/conf/... files as needed

  $ solrctl instancedir --create <collection> <path_to_local_instancedir>

 

Example:

  $ solrctl instancedir --generate collection1

  $ solrctl instancedir --create collection1 collection1

  $ solrctl instancedir --list
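With the instancedir uploaded, the collection itself is then created against it; one shard is enough for the single-node VM:

  $ solrctl collection --create collection1 -s 1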

 

Nick

Explorer

Hi Nick,

Thanks for your reply.

 

I ran the command:

solrctl collection --create collection1 -c predefinedTemplate

but got the error below:

Error: can't discover Solr URI. Please specify it explicitly via --solr.

 

I've checked the Solr service and it's running.

Is there anything else I need to check before running the command?

 

Thanks,

Osama

Expert Contributor

Hi Osama,

 

It seems like your Solr service may not be running.  What is the output of:

 

  $ sudo service solr-server status

 

If it is not running, start it with:

 

  $ sudo service solr-server start

 

If started, ensure that it stays running.
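If it does start, a quick way to check that the Solr web app is actually responding (assuming the default QuickStart host and port) is to hit the CoreAdmin status endpoint:

  $ curl 'http://quickstart.cloudera:8983/solr/admin/cores?action=STATUS'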

 

Also, can you verify the files in /etc/solr/conf:

 

  $ ls -l /etc/solr/conf

 

Thanks,

Nick

Explorer

Hi Nick,

 

I ran the command to check the service status and it reported FAILED; however, from the Cloudera Manager dashboard I see it's green and no failure events are recorded.

I ran the command to start the service and got the output below:

 

[cloudera@quickstart ~]$ sudo service solr-server start
Starting Solr server daemon:                               [  OK  ]
Using CATALINA_BASE:   /var/lib/solr/tomcat-deployment
Using CATALINA_HOME:   /usr/lib/solr/../bigtop-tomcat
Using CATALINA_TMPDIR: /var/lib/solr/
Using JRE_HOME:        /usr/java/jdk1.7.0_67-cloudera
Using CLASSPATH:       /usr/lib/solr/../bigtop-tomcat/bin/bootstrap.jar
Using CATALINA_PID:    /var/run/solr/solr.pid
Existing PID file found during start.
Removing/clearing stale PID file.

Then I ran the status command again and got the output below:

[cloudera@quickstart ~]$ sudo service solr-server status
Solr server daemon is dead and pid file exists             [FAILED]

Then I ran ls -l /etc/solr/conf and below is the output:

 

[cloudera@quickstart ~]$ ls -l /etc/solr/conf
lrwxrwxrwx 1 root root 27 Jun 19 16:10 /etc/solr/conf -> /etc/alternatives/solr-conf

and below is the output of the same command without -l:

[cloudera@quickstart ~]$ ls /etc/solr/conf
__cloudera_generation__  __cloudera_metadata__  log4j.properties  solr-env.sh

From the Cloudera Manager dashboard I can see the service status is green; I refreshed the page several times and it's still green, and I can browse to the Solr web UI via http://quickstart.cloudera:8983/solr/

 

I have a single VM with 2 cores and 8 GB of RAM.

Any recommendations, please?

 

Thanks,

Osama

 

Expert Contributor
Hi Osama,

Sorry, I didn't realize you were managing the cluster with Cloudera Manager (CM). In that case you should be using CM to start and stop the Solr service. I would try to get things reset by stopping Solr via CM, making sure it's stopped on the system, and then starting it with CM. If it doesn't stop via CM you may have to stop it manually this time. I'm not familiar with how the manual init scripts interact with the CM methods for service control, so you may be in an odd state.

* Stop Solr from CM
* Check from the command line "ps -ef | grep solr"
* If needed, stop Solr from the command line "service solr-server stop"
* Check from the command line "ps -ef | grep solr"
* Start Solr from CM
* Check from the command line "ps -ef | grep solr" (the same checks are sketched as commands below)
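Roughly, the command-line half of that sequence looks like this (a sketch assuming the QuickStart defaults; the CM steps themselves happen in the web UI, and the [s] in the grep just keeps grep from matching itself):

  # after stopping Solr via CM:

  $ ps -ef | grep [s]olr

  # if a stray daemon is still hanging around, stop it manually and re-check:

  $ sudo service solr-server stop

  $ ps -ef | grep [s]olr

  # after starting Solr via CM, confirm the CM-managed process is back up:

  $ ps -ef | grep [s]olr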

Once you're satisfied that Solr is actually up and running, test your collection creation step again. If it fails, check whether Solr is still running. I think you're just having some sort of stability problem with Solr; you may need to increase the heap for Solr a little.
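If it keeps dying, the Solr log is usually the quickest way to see why. Assuming the default CDH log location (and treating the filename as a placeholder, since the exact name varies):

  $ ls -lt /var/log/solr | head

  $ tail -n 200 /var/log/solr/<newest file from the listing above>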

Nick

Explorer

Hi Nick,

I tried to restart the service from CM and it restarted successfully; I can see the status is green on the dashboard. I then tried to create the collection and it failed again, then did the same from the command line and unfortunately got the same result.

Then I increased the heap size from CM > Solr > Configuration > Java Heap Size of Solr Server in Bytes; it was set to 256 MB and I set it to 500 MB.

I also increased the Java Direct Memory Size of Solr Server in Bytes from 256 MB to 500 MB.

Then I restarted the Solr service and tried again, and I'm still getting the same failure message. Checking the service status again, it still reports failed from the command line while it's green in CM.

Below is the output from running the commands:

 

[cloudera@quickstart ~]$ sudo service solr-server start
Starting Solr server daemon:                               [  OK  ]
Using CATALINA_BASE:   /var/lib/solr/tomcat-deployment
Using CATALINA_HOME:   /usr/lib/solr/../bigtop-tomcat
Using CATALINA_TMPDIR: /var/lib/solr/
Using JRE_HOME:        /usr/java/jdk1.7.0_67-cloudera
Using CLASSPATH:       /usr/lib/solr/../bigtop-tomcat/bin/bootstrap.jar
Using CATALINA_PID:    /var/run/solr/solr.pid
[cloudera@quickstart ~]$ sudo service solr-server status
Solr server daemon is dead and pid file exists             [FAILED]
[cloudera@quickstart ~]$ ps -ef | grep solr
solr      6118  5308  2 13:31 ?        00:00:10 /usr/java/jdk1.7.0_67-cloudera/bin/java -Djava.util.logging.config.file=/var/lib/solr/tomcat-deployment/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true -Dsolr.hdfs.blockcache.direct.memory.allocation=true -Dsolr.hdfs.blockcache.blocksperbank=16384 -Dsolr.hdfs.blockcache.slab.count=1 -DzkClientTimeout=15000 -Xms524288000 -Xmx524288000 -XX:MaxDirectMemorySize=524288000 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/solr_solr-SOLR_SERVER_pid6118.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -DzkHost=quickstart.cloudera:2181/solr -Dsolr.solrxml.location=zookeeper -Dsolr.hdfs.home=hdfs://quickstart.cloudera:8020/solr -Dsolr.hdfs.confdir=/var/run/cloudera-scm-agent/process/101-solr-SOLR_SERVER/hadoop-conf -Dsolr.authentication.simple.anonymous.allowed=true -Dsolr.security.proxyuser.hue.hosts=* -Dsolr.security.proxyuser.hue.groups=* -Dhost=quickstart.cloudera -Djetty.port=8983 -Dsolr.host=quickstart.cloudera -Dsolr.port=8983 -DuseCachedStatsBetweenGetMBeanInfoCalls=true -DdisableSolrFieldCacheMBeanEntryListJmx=true -Dlog4j.configuration=file:///var/run/cloudera-scm-agent/process/101-solr-SOLR_SERVER/log4j.properties -Dsolr.log=/var/log/solr -Dsolr.admin.port=8984 -Dsolr.tomcat.backlog=4096 -Dsolr.tomcat.connectionTimeout=180000 -Dsolr.tomcat.keepAliveTimeout=600000 -Dsolr.tomcat.maxKeepAliveRequests=-1 -Dsolr.max.connector.thread=10000 -Dsolr.tomcat.connectionLinger=300 -Dsolr.tomcat.bufferSize=131072 -Dsolr.solr.home=/var/lib/solr -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/var/lib/solr/tomcat-deployment -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/lib/solr/ org.apache.catalina.startup.Bootstrap start
solr      6125  6118  0 13:31 ?        00:00:00 python2.6 /usr/lib64/cmf/agent/build/env/bin/cmf-redactor /usr/lib64/cmf/service/solr/solr.sh
solr      6302     1  0 13:31 ?        00:00:00 bash /usr/lib/bigtop-utils/bigtop-monitor-service 70 6118
hbase     6572  5308  1 13:32 ?        00:00:05 /usr/java/jdk1.7.0_67-cloudera/bin/java -Dproc_server -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m -Djava.net.preferIPv4Stack=true -Xms52428800 -Xmx52428800 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ks_indexer_ks_indexer-HBASE_INDEXER_pid6572.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhbaseindexer.log.dir=/usr/lib/hbase-solr/bin/../logs -Dhbaseindexer.log.file=hbase-indexer.log -Dhbaseindexer.home.dir=/usr/lib/hbase-solr/bin/.. -Dhbaseindexer.id.str= -Dhbaseindexer.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native com.ngdata.hbaseindexer.Main
solr     11366  6302  0 13:39 ?        00:00:00 sleep 70
cloudera 11630 31453  0 13:40 pts/0    00:00:00 grep solr
spark    20887  5308  0 13:05 ?        00:00:11 /usr/java/jdk1.7.0_67-cloudera/bin/java -cp /var/run/cloudera-scm-agent/process/92-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/spark-conf/:/usr/lib/spark/lib/spark-assembly-1.6.0-cdh5.10.0-hadoop2.6.0-cdh5.10.0.jar:/var/run/cloudera-scm-agent/process/92-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/yarn-conf/:/etc/hive/conf/:/usr/lib/avro/avro-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-compiler-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-ipc-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-ipc-1.7.6-cdh5.10.0-tests.jar:/usr/lib/avro/avro-mapred-1.7.6-cdh5.10.0-hadoop2.jar:/usr/lib/avro/avro-maven-plugin-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-protobuf-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-service-archetype-1.7.6-cdh5.10.0.jar:/usr/lib/avro/avro-thrift-1.7.6-cdh5.10.0.jar:/usr/lib/avro/trevni-avro-1.7.6-cdh5.10.0-hadoop2.jar:/usr/lib/avro/trevni-avro-1.7.6-cdh5.10.0.jar:/usr/lib/avro/trevni-core-1.7.6-cdh5.10.0.jar:/usr/lib/flume-ng/lib/apache-log4j-extras-1.1.jar:/usr/lib/flume-ng/lib/async-1.4.0.jar:/usr/lib/flume-ng/lib/asynchbase-1.7.0.jar:/usr/lib/flume-ng/lib/commons-codec-1.8.jar:/usr/lib/flume-ng/lib/commons-jexl-2.1.1.jar:/usr/lib/flume-ng/lib/fastutil-6.3.jar:/usr/lib/flume-ng/lib/flume-avro-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-dataset-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-file-channel-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-hdfs-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-hive-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-irc-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-jdbc-channel-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-jms-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-kafka-channel-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-kafka-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-auth-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-configuration-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-core-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-elasticsearch-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-embedded-agent-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-hbase-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-kafka-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-log4jappender-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-morphline-solr-sink-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-node-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-ng-sdk-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-scribe-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-spillable-memory-channel-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-taildir-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-thrift-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-tools-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/flume-twitter-source-1.6.0-cdh5.10.0.jar:/usr/lib/flume-ng/lib/irclib-1.10.jar:/usr/lib/flume-ng/lib/joda-time-2.1.jar:/usr/lib/flume-ng/lib/jopt-simple-4.9.jar:/usr/lib/flume-ng/lib/jsr305-1.3.9.jar:/usr/lib/flume-ng/lib/kafka_2.10-0.9.0-kafka-2.0.2.jar:/usr/lib/flume-ng/lib/kafka-clients-0.9.0-kafka-2.0.2.jar:/usr/lib/flume-ng/lib/lz4-1.3.0.jar:/usr/lib/flume-ng/lib/mapdb-0.9.9.jar:/usr/lib/flume-ng/lib/metrics-core-2.2.0.jar:/usr/lib/flume-ng/lib/mina-core-2.0.4.jar:/usr/lib/flume-ng/lib/netty-3.9.4.Final.jar:/usr/lib/flume-ng/lib/scala-library-2.10.5.jar:/usr/lib/flume-ng/lib/serializer-2.7.2.jar:/usr/lib/flume-ng/lib/servlet-api-2.5-20110124.jar:/usr/lib/flume-ng/lib/spark-streaming-flume-sink_2.10-1.6.0-cdh5.10.0.ja
r:/usr/lib/flume-ng/lib/twitter4j-core-3.0.3.jar:/usr/lib/flume-ng/lib/twitter4j-media-support-3.0.3.jar:/usr/lib/flume-ng/lib/twitter4j-stream-3.0.3.jar:/usr/lib/flume-ng/lib/unused-1.0.0.jar:/usr/lib/flume-ng/lib/velocity-1.7.jar:/usr/lib/flume-ng/lib/xalan-2.7.2.jar:/usr/lib/flume-ng/lib/zkclient-0.7.jar:/usr/lib/hadoop/hadoop-annotations-2.6.0-cdh5.10.0.jar:/usr/lib/hadoop/hadoop-auth-2.6.0-cdh5.10.0.jar:/usr/lib/hadoop/hadoop-aws-2.6.0-cdh5.10.0.jar:/usr/lib/hadoop/hadoop-common-2.6.0-cdh5.10.0.jar:/usr/lib/hadoop/hadoop-common-2.6.0-cdh5.10.0-tests.jar:/usr/lib/hadoop/hadoop-n
[cloudera@quickstart ~]$ sudo service solr-server status
Solr server daemon is dead and pid file exists             [FAILED]

I've noticed there are some HBase out-of-memory errors; I'm not sure if this is related to Solr or if it's a different error. Are you able to shed some light, please?

Thanks for your continued support.

Osama

Explorer

Hi Nick,

Sorry for my late reply. After many trials, I kept adding more memory to the VM until it worked.

 

Thanks for your help.

Osama

Expert Contributor

Hi Osama,

 

Glad you finally got to the solution!

 

Nick

New Contributor

Hello Osama,

I am facing the same issue. Can you please share the commands or the process you used to resolve it? It would be really great if you could help me out with this.

 

I am trying solrctl collection --create <collection name>.

 

I did

 

solrctl instancedir --generate <name of directory>

solrctl instancedir --create <name of directory>

but when trying to create a collection I'm facing the same error:

"Error: can't discover Solr URI. Please specify it explicitly via --solr."

 

Please help.