Created 11-29-2016 01:27 AM
Hi - Solr 5.5 was working fine with NiFi until I restarted my VM. Now when I start NiFi and Solr, I get the following message in Solr:
tweets_shard1_replica1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Error opening new searcher
I'm attaching the complete solr log. Please can someone help?
Thank you. (Attached: tweets-shard1-replica1.txt)
Created 11-29-2016 09:44 PM
@Raf Mohammed - Assuming your index is stored in HDFS rather than the local file system where Solr is running:
hdfs dfs -mkdir /solr/DELETEME_core_node1
hdfs dfs -mv /solr/tweets/core_node1/data/tlog/tlog.* /solr/DELETEME_core_node1
When you're ready to delete the files, run this command:
hdfs dfs -rm -r /solr/DELETEME_core_node1
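Before running the delete, it may be worth double-checking that the tlog files were actually moved; a quick sanity check (a sketch, assuming the same paths as above) would be:
hdfs dfs -ls /solr/DELETEME_core_node1           # the moved tlog.* files should show up here
hdfs dfs -ls /solr/tweets/core_node1/data/tlog   # this directory should now be empty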
Created 11-29-2016 10:20 PM
@james.jones I managed to run those commands without any issues; however, when restarting Solr I'm still getting the same error 😞
29/11/2016, 22:16:42 | WARN | null | NativeCodeLoader | Unable to load native-hadoop library for your platform... using builtin-java classes where applicable |
29/11/2016, 22:16:44 | WARN | null | BlockReaderFactory | I/O error constructing remote block reader. |
29/11/2016, 22:16:44 | WARN | null | DFSClient | Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671:java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33680, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671 |
29/11/2016, 22:16:44 | WARN | null | DFSClient | DFS chooseDataNode: got # 1 IOException, will wait for 673.6703523877359 msec. |
29/11/2016, 22:16:45 | WARN | null | BlockReaderFactory | I/O error constructing remote block reader. |
29/11/2016, 22:16:45 | WARN | null | DFSClient | Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671:java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33684, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671 |
29/11/2016, 22:16:45 | WARN | null | DFSClient | DFS chooseDataNode: got # 2 IOException, will wait for 6501.220629185886 msec. |
29/11/2016, 22:16:52 | WARN | null | BlockReaderFactory | I/O error constructing remote block reader. |
29/11/2016, 22:16:52 | WARN | null | DFSClient | Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671:java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33706, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671 |
29/11/2016, 22:16:52 | WARN | null | DFSClient | DFS chooseDataNode: got # 3 IOException, will wait for 6947.415060544425 msec. |
29/11/2016, 22:16:59 | WARN | null | BlockReaderFactory | I/O error constructing remote block reader. |
29/11/2016, 22:16:59 | WARN | null | DFSClient | Connection failure: Failed to connect to /172.17.0.2:50010 for file /solr/tweets/core_node1/data/index/segments_9g for block BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671:java.io.IOException: Got error for OP_READ_BLOCK, self=/172.17.0.2:33726, remote=/172.17.0.2:50010, for file /solr/tweets/core_node1/data/index/segments_9g, for pool BP-1464254149-172.17.0.2-1477381671113 block 1073759485_18671 |
29/11/2016, 22:16:59 | WARN | null | DFSClient | Could not obtain block: BP-1464254149-172.17.0.2-1477381671113:blk_1073759485_18671 file=/solr/tweets/core_node1/data/index/segments_9g No live nodes contain current block Block locations: 172.17.0.2:50010 Dead nodes: 172.17.0.2:50010. Throwing a BlockMissingException |
29/11/2016, 22:17:00 | ERROR | null | CoreContainer | Error creating core [tweets_shard1_replica1]: Error opening new searcher |
29/11/2016, 22:17:00 | ERROR | null | CoreContainer | Error waiting for SolrCore to be created |
Created 11-29-2016 09:48 PM
@Raf Mohammed - Also, if you really don't care about the data, you can delete the collection and then recreate it:
http://localhost:8983/solr/admin/collections?action=DELETE&name=tweets
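Note that this is an HTTP request rather than a shell command, so it needs to be opened in a browser or sent with something like curl; a minimal sketch, assuming Solr is listening on localhost:8983:
curl "http://localhost:8983/solr/admin/collections?action=DELETE&name=tweets"   # quotes keep the shell from splitting on &
If the delete succeeds, the response should include a responseHeader with status 0.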
Created 11-29-2016 10:26 PM
@james.jones Thanks, I ran that command but didn't get a confirmation message to say it's deleted, and when I re-enter the command it doesn't tell me that it no longer exists. Am I missing something?
Created 11-29-2016 10:45 PM
[root@sandbox ~]# http://localhost:8983/solr/tweets/update?commit=true
bash: http://localhost:8983/solr/tweets/update?commit=true: No such file or directory
Created 11-29-2016 10:30 PM
@Raf Mohammed - it looks like something is badly wrong with your HDFS.
About deleting the collection, I don't recall exactly, but you should be able to look at the collections in the Solr UI and it should be gone. Also, if you try to recreate the collection, it should tell you that it already exists if it was not deleted. I don't recall whether the data is removed when the collection is deleted, but I think it is.
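Given the BlockMissingException and the dead datanode in the log above, it may also be worth checking HDFS itself before touching Solr again. One way to do that (a sketch using standard HDFS commands, with the /solr/tweets path taken from the log):
hdfs dfsadmin -report                               # shows whether the datanode at 172.17.0.2:50010 is live or dead
hdfs fsck /solr/tweets -files -blocks -locations    # reports missing or corrupt blocks under the Solr index path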
Created 11-29-2016 10:39 PM
@james.jones - Not sure what else to try; maybe I'm not deleting it from the correct path?
[solr@sandbox root]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d tweet_configs -s 1 -rf 1 -p 8983
Connecting to ZooKeeper at sandbox.hortonworks.com:2181/solr ...
Re-using existing configuration directory tweets
ERROR:
Collection 'tweets' already exists!
Checked collection existence using Collections API command:
http://sandbox.hortonworks.com:8983/solr/admin/collections?action=list
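If the collection still shows up because the earlier DELETE request never went through, one possible alternative (a sketch, reusing the same install path and port as the create command above) is the bin/solr delete command, after which the create can be rerun:
/opt/lucidworks-hdpsearch/solr/bin/solr delete -c tweets -p 8983    # remove the existing collection
/opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d tweet_configs -s 1 -rf 1 -p 8983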
Created 11-29-2016 10:43 PM
Here's what my Solr UI looks like right now (attached: screen-shot-2016-11-29-at-224034.png). There must be a way to force Solr to overlook this...
Created 11-30-2016 03:55 PM
@Raf Mohammed I recreated your error and tested this, although I was not using HDFS, which should not make any difference. To delete the corrupt collection, you can follow these steps.
I recommend shutting down Solr and HDFS gracefully to avoid index corruption. Also, you can create backup copies if you want to recover in the future. Having replicas will also probably help with this situation.
Created 11-30-2016 05:59 PM
@james.jones I got up to step four, but can't proceed due to:
[root@sandbox ~]# -bash: http://localhost:8984/solr/admin/collections?action=DELETE: No such file or directory
Would I be OK to recreate the collection?