Support Questions


Solr 5.5 Solr Exception Error opening new searcher (NiFi Flow)

Explorer

Hi - Solr 5.5 was working fine with NiFi until I restarted my VM. Now when I start NiFi and Solr, I get the following message in Solr:

tweets_shard1_replica1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Error opening new searcher

I'm attaching the complete Solr log. Can someone please help?

Thank you.

[Attachment: tweets-shard1-replica1.txt]

1 ACCEPTED SOLUTION

Super Collaborator

@Raf Mohammed I reproduced your error and tested a fix, although I was not using HDFS, which should not make any difference. To delete the corrupt collection, you can follow these steps.

I recommend shutting down Solr and HDFS gracefully to avoid index corruption. You can also create backup copies if you want to recover the data in the future. Having replicas would probably also help in this situation.
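For reference, the delete-and-recreate flow can be sketched from the shell. The host, port, and collection name below are taken from commands quoted elsewhere in this thread; the actual `curl` and `solr create` calls require a running Solr, so they are left commented out:

```shell
# Build the Collections API delete URL (host/port as used elsewhere in this thread)
SOLR_HOST="localhost:8983"
COLLECTION="tweets"
DELETE_URL="http://${SOLR_HOST}/solr/admin/collections?action=DELETE&name=${COLLECTION}"
echo "$DELETE_URL"

# With Solr running, issue the delete, then recreate the collection:
#   curl "$DELETE_URL"
#   /opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d tweet_configs -s 1 -rf 1 -p 8983
```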


23 REPLIES


The attached file doesn't look like a normal Solr log. Did that come from solr_home/server/logs/solr.log?

Explorer

@Bryan Bende I got the info from Solr UI, under 'Logging'.

Explorer

@Bryan Bende any ideas?

Explorer

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
  <arr name="collections">
    <str>collection1</str>
    <str>tweets</str>
  </arr>
</response>

[root@sandbox ~]# su solr

[solr@sandbox root]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d tweet_configs -s 1 -rf 1 -p 8983

Connecting to ZooKeeper at sandbox.hortonworks.com:2181/solr ...

Re-using existing configuration directory tweets

ERROR:

Collection 'tweets' already exists!

Checked collection existence using Collections API command:

http://sandbox.hortonworks.com:8983/solr/admin/collections?action=list


@Raf Mohammed It looks like your transaction log has an issue that is preventing Solr from opening a new searcher. Have you tried restarting Solr or issuing a manual commit? Is your HDFS healthy?

You could remove the most recent transaction logs, which should resolve this issue; however, you will lose the most recently ingested data.

Got a checksum exception for /solr/tweets/core_node1/data/tlog/tlog.0000000000000000338 at BP-1464254149-172.17.0.2-1477381671113:blk_1073759482_18761:1024 from xxxxxxxxx:50010

Explorer

@Jonas Straub HDFS is healthy and Solr has been restarted, however I'm not sure how to go about doing a manual commit. Is it this?

http://localhost:8983/solr/update?commit=true

Please can you tell me how to remove the recent transaction logs? I don't mind losing previously ingested data as this is a test environment. Thanks!

Explorer

@Jonas Straub @Neeraj Sabharwal Hi, some guidance on this would be highly appreciated.

Super Collaborator

@Raf Mohammed - This is a transaction log file: /solr/tweets/core_node1/data/tlog/tlog.0000000000000000338, so just delete the tlog files. (I always prefer to move files I'm going to delete into another directory called "TODELETE" in case I actually need them; once things look good, I delete them.) You may need to restart Solr if you delete these files out from under it.
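A minimal local-filesystem sketch of that "park before delete" habit (the directory names here are invented for the demo; the tlogs in this thread actually live in HDFS under /solr/tweets/core_node1/data/tlog):

```shell
# Demo of parking files in a TODELETE directory instead of removing them outright
DEMO=$(mktemp -d)
mkdir -p "$DEMO/tlog" "$DEMO/TODELETE"
touch "$DEMO/tlog/tlog.0000000000000000338"

# Park the transaction logs rather than deleting them immediately
mv "$DEMO/tlog"/tlog.* "$DEMO/TODELETE/"

ls "$DEMO/TODELETE"
# Once the core is healthy again, clean up for real:
#   rm -rf "$DEMO/TODELETE"
```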

A manual commit should look like this, assuming your collection is named "tweets":

http://localhost:8983/solr/tweets/update?commit=true
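From a shell on the Solr host, that commit could be issued roughly like this (host and port match those used elsewhere in the thread; the `curl` call itself needs a running Solr, so it is commented out):

```shell
# The update endpoint must include the collection name in the path
COLLECTION="tweets"
COMMIT_URL="http://localhost:8983/solr/${COLLECTION}/update?commit=true"
echo "$COMMIT_URL"
#   curl "$COMMIT_URL"
```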

Explorer

@james.jones Sorry, how can I access and remove these logs please? *newbie* I'm not concerned about losing any previously ingested data, as it was just a collection of tweets from that particular day.

Super Collaborator

@Raf Mohammed - Assuming your index is stored in HDFS rather than the local file system where Solr is running:

hdfs dfs -mkdir /solr/DELETEME_core_node1 
hdfs dfs -mv /solr/tweets/core_node1/data/tlog/tlog.* /solr/DELETEME_core_node1

When you're ready to delete the files, run this command:

hdfs dfs -rm -r /solr/DELETEME_core_node1

Explorer

@james.jones I managed to run those commands without any issues, however when restarting Solr, I'm still getting the same error 😞

29/11/2016, 22:16:42 WARN NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
29/11/2016, 22:16:44 WARN BlockReaderFactory I/O error constructing remote block reader.
29/11/2016, 22:16:44 WARN DFSClient Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671: java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33680, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671
29/11/2016, 22:16:44 WARN DFSClient DFS chooseDataNode: got # 1 IOException, will wait for 673.6703523877359 msec.
29/11/2016, 22:16:45 WARN BlockReaderFactory I/O error constructing remote block reader.
29/11/2016, 22:16:45 WARN DFSClient Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671: java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33684, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671
29/11/2016, 22:16:45 WARN DFSClient DFS chooseDataNode: got # 2 IOException, will wait for 6501.220629185886 msec.
29/11/2016, 22:16:52 WARN BlockReaderFactory I/O error constructing remote block reader.
29/11/2016, 22:16:52 WARN DFSClient Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671: java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33706, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671
29/11/2016, 22:16:52 WARN DFSClient DFS chooseDataNode: got # 3 IOException, will wait for 6947.415060544425 msec.
29/11/2016, 22:16:59 WARN BlockReaderFactory I/O error constructing remote block reader.
29/11/2016, 22:16:59 WARN DFSClient Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671: java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33726, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671
29/11/2016, 22:16:59 WARN DFSClient Could not obtain block: BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671 file=/solr/tweets/core_node1/data/index/segments_9g No live nodes contain current block. Block locations: 172.17.0.2:50010 Dead nodes: 172.17.0.2:50010. Throwing a BlockMissingException
29/11/2016, 22:17:00 ERROR CoreContainer Error creating core [tweets_shard1_replica1]: Error opening new searcher
29/11/2016, 22:17:00 ERROR CoreContainer Error waiting for SolrCore to be created

Super Collaborator

@Raf Mohammed - Also, if you really don't care about the data, you can delete the collection and then recreate it:

http://localhost:8983/solr/admin/collections?action=DELETE&name=tweets

Explorer

@james.jones Thanks, I ran that command but didn't get a confirmation message saying it was deleted, and when I re-enter the command it doesn't tell me that it no longer exists. Am I missing something?

Super Collaborator

@Raf Mohammed - it looks like something is badly wrong with your HDFS.

As for deleting the collection, I don't recall the exact response, but you should be able to look at the collections in Solr and it should be gone. Also, if you try to recreate the collection, it should tell you that it already exists if it was not deleted. I don't recall whether the data is removed when the collection is deleted, but I think it is.
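One way to double-check is the Collections API LIST action, assuming the same host and port as earlier in the thread (the `curl` call needs a running Solr, so it is commented out):

```shell
# The LIST action returns every collection ZooKeeper knows about
LIST_URL="http://localhost:8983/solr/admin/collections?action=LIST&wt=json"
echo "$LIST_URL"
#   curl "$LIST_URL"    # "tweets" should be absent from the "collections" array
```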

Explorer

@james.jones - Not sure what else to try. Maybe I'm not deleting it from the correct path?

[solr@sandbox root]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d tweet_configs -s 1 -rf 1 -p 8983

Connecting to ZooKeeper at sandbox.hortonworks.com:2181/solr ...

Re-using existing configuration directory tweets

ERROR:

Collection 'tweets' already exists!

Checked collection existence using Collections API command:

http://sandbox.hortonworks.com:8983/solr/admin/collections?action=list

Explorer

[Screenshot: screen-shot-2016-11-29-at-224034.png] Here's what my Solr UI looks like right now. There must be a way to force Solr to overlook this...


Explorer

@james.jones I got up to step four, but can't proceed due to: [root@sandbox ~]# -bash: http://localhost:8984/solr/admin/collections?action=DELETE: No such file or directory

Would I be ok to recreate the collection?

Super Collaborator

@Raf Mohammed - Click on the "Cloud" icon in the Admin console and your collection should not appear in the graph (or there will be no graph if there are no collections at all). If that's the case, then yes, you can recreate the collection now. It will create the directories it needs.
