
Upgrading and can't upgrade Solr

Explorer

Hi, I have a 4-node cluster and I can't seem to upgrade from CDH 5.5.2 to 5.9. It is getting stuck on upgrading Solr.

Here is the stderr file:

 

Sat Dec 24 05:42:00 UTC 2016
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in '`seq 1 $COUNT`'
+ SCRIPT=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
+ . /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-5.9.0-1.cdh5.9.0.p0.23
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-hdfs
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/zookeeper
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop
++ export CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/flume-ng
++ export CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/pig
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog
++ export CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sqoop2
++ export CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/llama
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/sentry
++ export TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/spark
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/avro
+ locate_cdh_java_home
+ '[' -z /usr/java/latest/jdk1.8.0_65 ']'
+ verify_java_home
+ '[' -z /usr/java/latest/jdk1.8.0_65 ']'
+ echo JAVA_HOME=/usr/java/latest/jdk1.8.0_65
+ set -x
+ echo 'using 5 as CDH_VERSION'
+ '[' 5 -ge 5 ']'
+ export BIGTOP_DEFAULTS_DIR=
+ BIGTOP_DEFAULTS_DIR=
+ JAAS_OPT=
+ '[' -n '' ']'
+ ZKCLI_TMPDIR=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp
+ mkdir /run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp
mkdir: cannot create directory ‘/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp’: File exists
+ export 'ZKCLI_JVM_FLAGS=-Djava.io.tmpdir=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp '
+ ZKCLI_JVM_FLAGS='-Djava.io.tmpdir=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp '
+ '[' solrinit = '' ']'
+ '[' 5 -ge 5 ']'
+ '[' -z /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/meta/cdh_env.sh ']'
+ TOMCAT_CONF=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/../../etc/solr/tomcat-conf.dist
+ export CATALINA_BASE=/var/lib/solr/tomcat-deployment
+ CATALINA_BASE=/var/lib/solr/tomcat-deployment
+ SOLR_PLUGIN_DIR=/var/lib/solr/lib
+ '[' stop '!=' '' ']'
+ env TOMCAT_CONF=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/../../etc/solr/tomcat-conf.dist TOMCAT_DEPLOYMENT=/var/lib/solr/tomcat-deployment SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr SOLR_PLUGIN_DIR=/var/lib/solr/lib bash /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/tomcat-deployment.sh
+ replace_conf_dir
+ find /run/cloudera-scm-agent/process/824-solr-SOLR_SERVER -type f '!' -path '/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/logs/*' '!' -name '*.log' '!' -name '*.keytab' '!' -name '*jceks' -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER#g' '{}' ';'
Can't open /run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/supervisor.conf: Permission denied.
++ replace_pid -Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true -Dsolr.hdfs.blockcache.direct.memory.allocation=true -Dsolr.hdfs.blockcache.blocksperbank=16384 -Dsolr.hdfs.blockcache.slab.count=1 -DzkClientTimeout=15000 -Xms2378170368 -Xmx2378170368 -XX:MaxDirectMemorySize=3089104896 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#28104#g'
++ echo -Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true -Dsolr.hdfs.blockcache.direct.memory.allocation=true -Dsolr.hdfs.blockcache.blocksperbank=16384 -Dsolr.hdfs.blockcache.slab.count=1 -DzkClientTimeout=15000 -Xms2378170368 -Xmx2378170368 -XX:MaxDirectMemorySize=3089104896 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib/cmf/service/common/killparent.sh
+ CATALINA_OPTS='-Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true -Dsolr.hdfs.blockcache.direct.memory.allocation=true -Dsolr.hdfs.blockcache.blocksperbank=16384 -Dsolr.hdfs.blockcache.slab.count=1 -DzkClientTimeout=15000 -Xms2378170368 -Xmx2378170368 -XX:MaxDirectMemorySize=3089104896 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib/cmf/service/common/killparent.sh'
+ export CATALINA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
+ CATALINA_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/bigtop-tomcat
+ export CATALINA_BASE=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/server
+ CATALINA_BASE=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/server
+ export CATALINA_TMPDIR=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp
+ CATALINA_TMPDIR=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/temp
+ export CATALINA_OUT=/var/log/solr/solr.out
+ CATALINA_OUT=/var/log/solr/solr.out
+ export SOLR_RUN=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER
+ SOLR_RUN=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER
+ export SOLR_LOG4J_CONFIG=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/log4j.properties
+ SOLR_LOG4J_CONFIG=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/log4j.properties
+ export SOLR_HDFS_CONFIG=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/hadoop-conf
+ SOLR_HDFS_CONFIG=/run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/hadoop-conf
+ '[' '' = true ']'
+ '[' '' = true ']'
+ '[' stop = '' ']'
+ '[' true = true ']'
+ SOLR_HOME_TMP=/var/lib/solr
+ export SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
+ SOLR_HOME=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr
+ '[' '' = true ']'
+ eval /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/bin/solrctl.sh --zk name-node-1:2181/solr cluster --set-property urlScheme http
++ /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/solr/bin/solrctl.sh --zk name-node-1:2181/solr cluster --set-property urlScheme http
+ '[' 0 == 1 ']'
+ echo 'Failed to configure urlScheme property for Solr cluster in Zookeeper'
+ exit 1

 

Here is the stdout file:

Sat Dec 24 05:41:51 UTC 2016
JAVA_HOME=/usr/java/latest/jdk1.8.0_65
using 5 as CDH_VERSION
Unable to set the cluster property due to following error : Error updating cluster property urlScheme
Error: Unable to set the cluster property in ZK solr
Failed to configure urlScheme property for Solr cluster in Zookeeper
Sat Dec 24 05:41:53 UTC 2016
JAVA_HOME=/usr/java/latest/jdk1.8.0_65
using 5 as CDH_VERSION
Unable to set the cluster property due to following error : Error updating cluster property urlScheme
Error: Unable to set the cluster property in ZK solr
Failed to configure urlScheme property for Solr cluster in Zookeeper
Sat Dec 24 05:41:56 UTC 2016
JAVA_HOME=/usr/java/latest/jdk1.8.0_65
using 5 as CDH_VERSION
Unable to set the cluster property due to following error : Error updating cluster property urlScheme
Error: Unable to set the cluster property in ZK solr
Failed to configure urlScheme property for Solr cluster in Zookeeper
Sat Dec 24 05:42:00 UTC 2016
JAVA_HOME=/usr/java/latest/jdk1.8.0_65
using 5 as CDH_VERSION
Unable to set the cluster property due to following error : Error updating cluster property urlScheme
Error: Unable to set the cluster property in ZK solr
Failed to configure urlScheme property for Solr cluster in Zookeeper

I have chmodded the parent directory /run/cloudera-scm-agent/process/ to 777, but it doesn't make sense that it can't read supervisor.conf anyway, because the process runs as solr and solr is the owner of the file.
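As a side note, a wide-open parent directory does not make a restrictive file readable: the file's own mode governs access. A minimal sketch with a scratch file (the cluster-side commands in the comments are assumptions based on the paths in the log above):

```shell
# Demonstrate that a 600 file stays unreadable to other users even if its
# parent directory is 777: the file's own mode is what matters.
scratch=$(mktemp)
chmod 600 "$scratch"
mode=$(stat -c '%a' "$scratch")
echo "file mode is $mode regardless of the parent directory's mode"
# On the cluster, the equivalent checks would be:
#   ls -l /run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/supervisor.conf
#   sudo -u solr cat /run/cloudera-scm-agent/process/824-solr-SOLR_SERVER/supervisor.conf
rm -f "$scratch"
```

In Cloudera Manager process directories, supervisor.conf is typically written by the agent with restrictive permissions, so the "Permission denied" from the perl rewrite is usually harmless noise rather than the real failure.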

1 ACCEPTED SOLUTION

Master Guru

Actually, this message may be showing us the cause. The interesting bit was off the page when I first looked at your ZooKeeper snippet, so I missed it at first:

 

2016-12-26 20:13:48,322 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1593c8f09d800f6 type:create cxid:0x2 zxid:0x1b2ea txntype:-1 reqpath:n/a Error Path:/solr Error:KeeperErrorCode = NoNode for /solr

 

ZooKeeper appears to have no znode for /solr.

To create one, in Cloudera Manager go to the Solr service, click the Actions drop-down on the far right, and choose "Initialize Solr". This will create the /solr znode.

After that, try starting Solr again.
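If you prefer the command line, the same initialization can be sketched with solrctl (this assumes the stock CDH wrapper is on the PATH and reuses the ZooKeeper string from the failing command; note that init (re)creates Solr's state in ZooKeeper, so on a cluster that already has collections the Cloudera Manager action is the safer route):

```shell
# Initialize Solr's ZooKeeper state (creates the /solr znode contents).
have_solrctl=$(command -v solrctl || true)
if [ -n "$have_solrctl" ]; then
  solrctl --zk name-node-1:2181/solr init
else
  echo "solrctl not on PATH; run this on a CDH gateway or Solr host"
fi
```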

 

Regards,

 

Ben


7 REPLIES

Master Guru

The problem occurs when trying to set the scheme to http for the Solr znode in Zookeeper:

 

name-node-1:2181/solr

 

Make sure that ZooKeeper is running and that "name-node-1" resolves to an IP. Check the ZooKeeper log to see if there were any errors; if there are none, it is more likely that a connection to ZooKeeper could not be made to update the scheme property.
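Those checks can be sketched from the shell on the Solr host. This assumes the host/port from the failing command above (name-node-1:2181) and uses ZooKeeper's four-letter "ruok" health command:

```shell
# Check name resolution and basic ZooKeeper reachability.
zk_host=name-node-1   # from the solrctl command in the log
zk_port=2181
resolved=$(getent hosts "$zk_host" || true)
if [ -n "$resolved" ]; then
  echo "$zk_host resolves to: $resolved"
else
  echo "$zk_host does not resolve - check DNS or /etc/hosts first"
fi
# A healthy ZooKeeper answers 'ruok' with 'imok':
reply=$( { echo ruok | nc -w 2 "$zk_host" "$zk_port"; } 2>/dev/null || true)
echo "ruok reply: ${reply:-<no response>}"
```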

 

Regards,

 

Ben

Explorer

Thanks Ben,

I checked the ZooKeeper logs; there is not much help there:

 

2016-12-26 20:13:44,298 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x1593c8f09d800f5 with negotiated timeout 30000 for client /10.128.0.2:60980
2016-12-26 20:13:44,314 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1593c8f09d800f5 type:create cxid:0x2 zxid:0x1b2e8 txntype:-1 reqpath:n/a Error Path:/solr Error:KeeperErrorCode = NoNode for /solr
2016-12-26 20:13:44,636 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1593c8f09d800f5, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:745)
2016-12-26 20:13:44,637 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.128.0.2:60980 which had sessionid 0x1593c8f09d800f5
2016-12-26 20:13:48,298 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.128.0.2:60986
2016-12-26 20:13:48,302 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.128.0.2:60986
2016-12-26 20:13:48,304 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x1593c8f09d800f6 with negotiated timeout 30000 for client /10.128.0.2:60986
2016-12-26 20:13:48,322 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1593c8f09d800f6 type:create cxid:0x2 zxid:0x1b2ea txntype:-1 reqpath:n/a Error Path:/solr Error:KeeperErrorCode = NoNode for /solr
2016-12-26 20:13:48,643 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1593c8f09d800f6, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:745)
2016-12-26 20:13:48,644 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.128.0.2:60986 which had sessionid 0x1593c8f09d800f6
2016-12-26 20:14:10,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x1593c8f09d800f3, timeout of 30000ms exceeded
2016-12-26 20:14:10,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x1593c8f09d800f3
2016-12-26 20:14:12,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x1593c8f09d800f4, timeout of 30000ms exceeded
2016-12-26 20:14:12,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x1593c8f09d800f4
2016-12-26 20:14:16,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x1593c8f09d800f5, timeout of 30000ms exceeded
2016-12-26 20:14:16,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x1593c8f09d800f5
2016-12-26 20:14:20,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x1593c8f09d800f6, timeout of 30000ms exceeded
2016-12-26 20:14:20,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x1593c8f09d800f6
2016-12-26 20:14:40,993 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.128.0.2:32886
2016-12-26 20:14:40,993 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.128.0.2:32886
2016-12-26 20:14:40,995 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x1593c8f09d800f7 with negotiated timeout 30000 for client /10.128.0.2:32886
2016-12-26 20:14:41,019 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.128.0.2:32892
2016-12-26 20:14:41,020 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /10.128.0.2:32892

 

ZooKeeper is running, name-node-1 resolves to an IP, and ZooKeeper and the Solr server are on the same node anyway. What could be going on?
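One quick way to narrow this down is to list the ZooKeeper root and see whether a /solr znode exists at all. The zkCli.sh path below is an assumption based on the parcel layout in the trace above (on CM-managed hosts a plain zookeeper-client on the PATH works too):

```shell
# List top-level znodes; if /solr is absent, the NoNode errors are explained.
zkcli=/opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh  # assumed parcel symlink path
if [ -x "$zkcli" ]; then
  "$zkcli" -server name-node-1:2181 ls /
else
  echo "zkCli.sh not found at $zkcli (adjust for your parcel path, or use zookeeper-client)"
fi
```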

Master Guru

I would verify resolution of the host to an IP and make sure you can connect to ZooKeeper from the host on which you are trying to start Solr. If you are not seeing any error messages in the ZooKeeper logs, that is a basic indicator that a connection to ZooKeeper could not be established. The first thing to check is whether you can connect to the server from the same host, using the same IP/port.

 

 

Explorer

Ben, the services are on the same host, and yes, I already verified those things.

Explorer

That did the trick, thanks a lot!


Explorer

OK, I will try the "Initialize Solr" action. Thanks for the idea, Ben! I will try it as soon as I get to another office. I can't do it from the current client site, as their firewalls are pretty tight and the cluster is in GCE, but I'm excited to try this.