
Solr is Stopped in Ambari but Log says running

Expert Contributor

I installed Solr, but its status shows as Stopped (red) in the Ambari console. When I attempt to restart the service, which I have already done repeatedly, the log shows that it is already running:

stderr: /var/lib/ambari-agent/data/errors-253.txt

/usr/lib/python2.6/site-packages/resource_management/core/environment.py:165: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  Logger.info("Skipping failure of {0} due to ignore_failures. Failure reason: {1}".format(resource, ex.message))
2016-11-14 18:58:11,634 - Solr is running, it cannot be started again

stdout: /var/lib/ambari-agent/data/output-253.txt

2016-11-14 18:58:06,871 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:06,871 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:06,871 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:06,923 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:06,924 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:06,950 - checked_call returned (0, '')
2016-11-14 18:58:06,951 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:06,951 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,151 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:07,151 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:07,151 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:07,177 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:07,178 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:07,200 - checked_call returned (0, '')
2016-11-14 18:58:07,201 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:07,201 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,205 - Group['hadoop'] {}
2016-11-14 18:58:07,209 - Group['users'] {}
2016-11-14 18:58:07,209 - Group['zeppelin'] {}
2016-11-14 18:58:07,209 - Group['solr'] {}
2016-11-14 18:58:07,210 - Group['knox'] {}
2016-11-14 18:58:07,210 - Group['ranger'] {}
2016-11-14 18:58:07,210 - Group['spark'] {}
2016-11-14 18:58:07,210 - Group['livy'] {}
2016-11-14 18:58:07,211 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,212 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,212 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,213 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-11-14 18:58:07,214 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,215 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,215 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,216 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,216 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,217 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,218 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,218 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,219 - User['solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,219 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,220 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,220 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,221 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,222 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,223 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,223 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,224 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,226 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,226 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,227 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,228 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,229 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,230 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-14 18:58:07,274 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-14 18:58:07,285 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-14 18:58:07,286 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-14 18:58:07,287 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-14 18:58:07,289 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-14 18:58:07,294 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-14 18:58:07,294 - Group['hdfs'] {}
2016-11-14 18:58:07,295 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-11-14 18:58:07,295 - FS Type: 
2016-11-14 18:58:07,296 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-14 18:58:07,316 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,318 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-14 18:58:07,335 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-14 18:58:07,347 - Skipping Execute[('setenforce', '0')] due to not_if
2016-11-14 18:58:07,347 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-14 18:58:07,356 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-14 18:58:07,356 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-14 18:58:07,374 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-11-14 18:58:07,380 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-11-14 18:58:07,381 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-14 18:58:07,397 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,398 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-14 18:58:07,407 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,426 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-14 18:58:07,430 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-14 18:58:07,700 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:07,700 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:07,701 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:07,724 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:07,724 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:07,747 - checked_call returned (0, '')
2016-11-14 18:58:07,747 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:07,748 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,749 - Execute['/opt/lucidworks-hdpsearch/solr/bin/solr stop -all >> /var/log/service_solr/solr-service.log 2>&1'] {'environment': {'JAVA_HOME': '/usr/lib/jvm/java-1.7.0-oracle'}, 'user': 'solr'}
2016-11-14 18:58:08,041 - File['/var/run/solr/solr-8983.pid'] {'action': ['delete']}
2016-11-14 18:58:08,041 - Pid file /var/run/solr/solr-8983.pid is empty or does not exist
2016-11-14 18:58:08,042 - Directory['/opt/lucidworks-hdpsearch/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,044 - Directory['/var/log/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,044 - Directory['/var/log/service_solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,045 - Directory['/var/run/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,045 - Directory['/etc/solr/conf'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,046 - Directory['/etc/solr/data_dir'] {'owner': 'solr', 'group': 'solr', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,047 - Execute[('chmod', '-R', '777', '/opt/lucidworks-hdpsearch/solr/server/solr-webapp')] {'sudo': True}
2016-11-14 18:58:08,104 - File['/opt/lucidworks-hdpsearch/solr/bin/solr.in.sh'] {'owner': 'solr', 'content': InlineTemplate(...)}
2016-11-14 18:58:08,106 - File['/etc/solr/conf/log4j.properties'] {'owner': 'solr', 'content': InlineTemplate(...)}
2016-11-14 18:58:08,128 - File['/etc/solr/data_dir/solr.xml'] {'owner': 'solr', 'content': Template('solr.xml.j2')}
2016-11-14 18:58:08,129 - HdfsResource['/user/solr'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://<host>:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'solr', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon']}
2016-11-14 18:58:08,133 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://<host>:50070/webhdfs/v1/user/solr?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp7MLiQD 2>/tmp/tmp1gMY3t''] {'logoutput': None, 'quiet': False}
2016-11-14 18:58:08,186 - call returned (0, '')
2016-11-14 18:58:08,186 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/clusterstate.json'] {'timeout': 60}
2016-11-14 18:58:08,884 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/clusterstate.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:08,884 - Execute['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr'] {'ignore_failures': True, 'user': 'solr'}
2016-11-14 18:58:09,302 - Skipping failure of Execute['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr'] due to ignore_failures. Failure reason: Execution of 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr' returned 1. Exception in thread "main" org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /solr
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
	at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:501)
	at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
	at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:498)
	at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:455)
	at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:442)
	at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:398)
	at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:258)
2016-11-14 18:58:09,302 - HdfsResource['/solr'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://<host>:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'solr', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon']}
2016-11-14 18:58:09,303 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://<host>:50070/webhdfs/v1/solr?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUGvE9U 2>/tmp/tmp9AYBHf''] {'logoutput': None, 'quiet': False}
2016-11-14 18:58:09,341 - call returned (0, '')
2016-11-14 18:58:09,341 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/clusterprops.json'] {'timeout': 60}
2016-11-14 18:58:09,743 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/clusterprops.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:09,743 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/security.json'] {'timeout': 60}
2016-11-14 18:58:10,260 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/security.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:10,260 - call['netstat -lnt | awk -v v1=8983 '$6 == "LISTEN" && $4 ~ ":"+v1''] {'timeout': 60}
2016-11-14 18:58:10,338 - call returned (0, '')
2016-11-14 18:58:10,339 - Solr port validation output: 
2016-11-14 18:58:10,339 - call['/opt/lucidworks-hdpsearch/solr/bin/solr status'] {'timeout': 60}
2016-11-14 18:58:11,633 - call returned (0, 'Found 1 Solr nodes: \n\nSolr process 10244 running on port 8886\n{\n  "solr_home":"/opt/ambari_infra_solr/data",\n  "version":"5.5.2 8e5d40b22a3968df065dfc078ef81cbb031f0e4a - sarowe - 2016-06-21 11:44:11",\n  "startTime":"2016-11-14T09:03:22.462Z",\n  "uptime":"0 days, 1 hours, 54 minutes, 49 seconds",\n  "memory":"150.5 MB (%7.7) of 981.4 MB",\n  "cloud":{\n    "ZooKeeper":"<host>:2181/infra-solr",\n    "liveNodes":"1",\n    "collections":"4"}}')
2016-11-14 18:58:11,634 - Solr status output: Found 1 Solr nodes: 


Solr process 10244 running on port 8886
{
  "solr_home":"/opt/ambari_infra_solr/data",
  "version":"5.5.2 8e5d40b22a3968df065dfc078ef81cbb031f0e4a - sarowe - 2016-06-21 11:44:11",
  "startTime":"2016-11-14T09:03:22.462Z",
  "uptime":"0 days, 1 hours, 54 minutes, 49 seconds",
  "memory":"150.5 MB (%7.7) of 981.4 MB",
  "cloud":{
    "ZooKeeper":"<host>:2181/infra-solr",
    "liveNodes":"1",
    "collections":"4"}}
2016-11-14 18:58:11,634 - Solr is running, it cannot be started again


Command failed after 1 tries

I am using the latest version, HDP 2.5.
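For reference, the two checks the start script runs can be rerun by hand; this is a quick sketch using the commands and paths from the log above. Nothing listens on the expected port 8983, while the status call finds a Solr on port 8886 whose solr_home is /opt/ambari_infra_solr/data, i.e. the Ambari Infra instance:

# nothing is listening on the port the start script expects
netstat -lnt | grep ':8983'

# but "solr status" scans all running Solr processes and finds the Ambari Infra one on 8886
/opt/lucidworks-hdpsearch/solr/bin/solr status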

1 ACCEPTED SOLUTION

avatar
Super Collaborator

@J. D. Bacolod, make sure your Ambari Infra service is stopped when you start Solr. Ambari Infra is the embedded Solr instance that other services depend on. Start your newly installed Solr first; later on, you can start Ambari Infra again.
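A minimal sketch of that ordering via the Ambari REST API; the admin credentials, host, and cluster name are placeholders, and AMBARI_INFRA/SOLR are assumed to be the service names in your stack (a desired state of INSTALLED means stopped, STARTED means running):

# stop Ambari Infra first
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Ambari Infra"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/AMBARI_INFRA

# then start the newly installed Solr
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Solr"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/SOLR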


3 REPLIES

Super Guru

@J. D. Bacolod

Try to stop the service using the Ambari API; see the sketch after this link:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=41812517
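A hedged example of stopping the SOLR service through the REST API (credentials, host, and cluster name are placeholders; setting the desired state to INSTALLED stops the service):

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Solr"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/SOLR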

OR

Check the service state directly in the Ambari database:

[root@thakur1 ~]# psql -U ambari
Password for user ambari: [the default password is 'bigdata']
ambari=> select * from servicecomponentdesiredstate;

====

Check the status of Solr in the query output.

You can set the state to stopped as follows:

ambari=> update servicecomponentdesiredstate set desired_state='INSTALLED' where component_name='SOLR';
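For completeness, a sketch of the full sequence (assuming a default PostgreSQL-backed Ambari and the table above; Ambari caches state, so a direct database edit is generally only picked up after restarting ambari-server):

ambari=> select component_name, desired_state from servicecomponentdesiredstate where component_name='SOLR';
ambari=> update servicecomponentdesiredstate set desired_state='INSTALLED' where component_name='SOLR';
ambari=> \q
[root@thakur1 ~]# ambari-server restart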

Super Collaborator

@J. D. Bacolod, make sure your Ambari Infra service is stopped when you start Solr. Ambari Infra is the embedded Solr instance that other services depend on. Start your newly installed Solr first; later on, you can start Ambari Infra again.


Hey @J. D. Bacolod,

Did you solve your issue? I have the same problem with Solr when I try to start it. When I hit Restart All, the Solr symbol turns green and I can enter the UI, but there I get the next error:

  • collection1_shard1_replica1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node2/data/index/' of core 'collection1_shard1_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
  • tweets_shard1_replica1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/tweets/core_node1/data/index/' of core 'tweets_shard1_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
  • collection1_shard2_replica1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node1/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
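For reference, this is how I would inspect the lock; just a sketch assuming the paths from the errors above and that the cores use the standard write.lock file of the HDFS lock factory. I am not sure it is safe to delete the file while anything is still running:

# look for a leftover write.lock in the index dir named in the error
hdfs dfs -ls hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node2/data/index/

# with every Solr instance stopped, a stale lock file could be removed
hdfs dfs -rm hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node2/data/index/write.lock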

Does anyone have an idea?

Best Regards,

Martin