Member since: 09-01-2016 · Posts: 20 · Kudos Received: 7 · Solutions: 0
10-05-2017
11:29 PM
Thanks @Cassandra Targett. I've reverted to 6.6.0 and will await V7.0.1. Regards, Tony
10-05-2017
03:42 AM
I upgraded an existing Solr 6.6.0 instance to V7.0.0. On startup, only empty cores have come up cleanly; all cores that contain any data fail with the error "Error Opening New Searcher". All of these cores store their indexes on HDFS. This instance is a single-node SolrCloud using an external ZooKeeper. The HDFS platform is HDP 2.4.2.

Just for starters, here is an example from solr.log for one such core:

2017-09-27 16:31:23.197 ERROR (coreContainerWorkExecutor-2-thread-1-processing-n:SolrServer:9031_solr) [ ] o.a.s.c.CoreContainer Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Unable to create core [CoreName_shard1_replica1]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer.lambda$load$118(CoreContainer.java:647)
at org.apache.solr.core.CoreContainer$Lambda$132/1829217853.run(Unknown Source)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$128(ExecutorUtil.java:188)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$Lambda$15/991515462.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core [CoreName_shard1_replica1]
at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:996)
at org.apache.solr.core.CoreContainer.lambda$load$117(CoreContainer.java:619)
at org.apache.solr.core.CoreContainer$Lambda$131/1622458036.call(Unknown Source)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
... 6 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:988)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:843)
at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:980)
... 9 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2066)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2186)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:960)
... 11 more
Caused by: java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at java.util.Optional.<init>(Optional.java:96)
at java.util.Optional.of(Optional.java:108)
at java.util.stream.ReduceOps$2ReducingSink.get(ReduceOps.java:129)
at java.util.stream.ReduceOps$2ReducingSink.get(ReduceOps.java:107)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:479)
at org.apache.solr.index.SlowCompositeReaderWrapper.<init>(SlowCompositeReaderWrapper.java:76)
at org.apache.solr.index.SlowCompositeReaderWrapper.wrap(SlowCompositeReaderWrapper.java:57)
at org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:252)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2034)
... 14 more

Any ideas?
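For what it's worth, the NullPointerException at the bottom of the trace matches what `Stream.reduce` does when its accumulator produces null: the result is wrapped with `Optional.of`, which rejects null (the `ReduceOps$2ReducingSink.get` and `Optional.of` frames above). A minimal sketch of that mechanism — my own illustration, not Solr code:

```java
import java.util.stream.Stream;

public class NpeDemo {
    // Returns true if a reduce whose accumulator yields null throws NPE,
    // mirroring the Optional.of frame in the stack trace above.
    static boolean reduceWithNullThrowsNpe() {
        try {
            // Accumulator returns null; the stream wraps the result in
            // Optional.of(result), which throws on null.
            Stream.of("a", "b").reduce((x, y) -> null);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("reduce yielding null throws NPE: " + reduceWithNullThrowsNpe());
    }
}
```

This suggests something in the index being reduced over in SlowCompositeReaderWrapper came back null, though the trace alone doesn't show what.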
Labels:
- Apache Hadoop
- Apache Solr
06-19-2017
09:12 PM
Thanks @Cassandra Targett. I didn't think it sounded quite right. We'll give it a try. Tony
06-19-2017
12:54 AM
The release notes for Solr 6.6.0 include the following upgrade note: "ZooKeeper dependency has been upgraded from 3.4.6 to 3.4.10." On the face of it, this looks like a problem, since even HDP 2.6 only includes ZooKeeper 3.4.6. Does this 'dependency' mean that Solr will not work with ZooKeeper 3.4.6, or is it just documenting the fact that the Solr embedded ZooKeeper instance is now 3.4.10?
Labels:
- Apache Solr
03-02-2017
01:58 AM
Thanks @Cassandra Targett
I am happy to wait for the 6.4.2 release. The team here are impressed with how fast this issue was resolved. Thanks for following this up for us.
02-28-2017
10:02 PM
Thanks @james.jones. There are a series of symlinks involved, but all the permissions look OK. All the directories and files are, at least, readable by all.
02-28-2017
09:43 PM
Thanks @Cassandra Targett. Very happy for you to include the link. I'm also happy to supply extra info and/or test a fix.
02-23-2017
05:52 AM
1 Kudo
Following an upgrade to Solr 6.4.1, it appears that access to HDFS via a nameservice name (High Availability) is no longer working. We have a solrconfig.xml which defines an HdfsDirectoryFactory as follows:

<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.HdfsDirectoryFactory}">
<str name="solr.hdfs.home">hdfs://XXXXHDPDEV1/data/DEV/solr</str>
<bool name="solr.hdfs.blockcache.enabled">true</bool>
<int name="solr.hdfs.blockcache.slab.count">32</int>
<bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
<int name="solr.hdfs.blockcache.blocksperbank">16384</int>
<bool name="solr.hdfs.blockcache.read.enabled">true</bool>
<bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
<int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
<int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
<str name="solr.hdfs.confdir">/etc/hadoop/conf/</str>
</directoryFactory>

In this definition the value of solr.hdfs.home is hdfs://XXXXHDPDEV1/data/DEV/solr, where XXXXHDPDEV1 is the nameservice name for a Hadoop cluster. To enable this form of reference to the Hadoop cluster, we also include solr.hdfs.confdir, which identifies a local directory containing the Hadoop config files such as hdfs-site.xml. These files map the nameservice name to multiple NameNodes and should allow the HDFS client to discover the active NameNode. Using this nameservice name works fine when using command-line hdfs commands from the same Solr server.

Under V6.4.1, when we try to create a collection based on the config that contains this solrconfig.xml file, the HDFS objects are successfully created, but the CREATE COLLECTION fails because it cannot instantiate the update handler, solr.DirectUpdateHandler2. We get the following traceback:

2017-02-23 11:42:16.419 ERROR (qtp225493257-77) [c:aircargo s:shard1 x:aircargo_shard1_replica1] o.a.s.c.CoreContainer Error creating core [aircargo_shard1_replica1]: SolrCore 'aircargo_shard1_replica1' is not available due to init failure: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
org.apache.solr.common.SolrException: SolrCore 'aircargo_shard1_replica1' is not available due to init failure: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1151)
at org.apache.solr.cloud.ZkController.publish(ZkController.java:1198)
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1372)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:885)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:827)
at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:88)
at org.apache.solr.handler.admin.CoreAdminOperation$$Lambda$28/50699452.execute(Unknown Source)
at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:377)
at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:379)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:165)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:959)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
... 36 more
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:767)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)
... 38 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
... 41 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: DIBPHDPDEV1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:145)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
... 46 more
Caused by: java.net.UnknownHostException: XXXXHDPDEV1
... 58 more
Right at the end of the above log you will see UnknownHostException: XXXXHDPDEV1. It appears that the instantiation of the update handler treats the Hadoop nameservice name as a host name. We can avoid this error by hard-coding the address and port of the active NameNode, e.g.:

<str name="solr.hdfs.home">hdfs://XXXXn1:8020/data/DEV/solr</str>

However, in the event of a NameNode switch, the collection becomes inaccessible. Is this a bug in V6.4.1? (Note: this approach worked fine in V5.3.0.) There is a very similar problem reported in "HDPSearch - failed to create collection - UnknownHostException". However, that was from an earlier version and was solved by fixing a problem in uploading the config to ZooKeeper. (The fact that we can get our config to work by hard-coding the server name suggests that we have our ZooKeeper update process under control.)
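For context, resolving a nameservice like XXXXHDPDEV1 depends on the HA entries in the hdfs-site.xml under solr.hdfs.confdir being visible to the client, along these lines (a sketch — the host names and values here are illustrative placeholders, not our actual config):

```
<!-- Illustrative hdfs-site.xml HA mapping; host names are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>XXXXHDPDEV1</value>
</property>
<property>
  <name>dfs.ha.namenodes.XXXXHDPDEV1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.XXXXHDPDEV1.nn1</name>
  <value>XXXXn1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.XXXXHDPDEV1.nn2</name>
  <value>XXXXn2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.XXXXHDPDEV1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

When the HDFS client cannot see a failover proxy provider for the nameservice, it falls back to treating the URI authority as a plain host name, which seems consistent with the createNonHAProxy frame in the trace above.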
Labels:
- Apache Hadoop
- Apache Solr
02-22-2017
11:11 PM
@Jon Maestas I'm still having the same problem. I've tried clearing the config and upconfig-ing multiple times; in every instance the solrconfig.xml looks fine from the Solr UI. The HDFS side seems to be working OK, i.e. when I create the collection, the expected directories and files are created in HDFS. It is only after that, when Solr tries to instantiate the update handler, that we get the UnknownHostException referring to our HDFS nameservice name.

Unfortunately we changed multiple things going in here. Everything was working fine on Solr version 5.3.1 and the embedded ZooKeeper. This problem arose when we went to Solr 6.4.1, but we simultaneously switched to using the Hadoop cluster's existing ZooKeeper quorum. We have the /solr chroot set up in ZooKeeper and it is referenced consistently across all the Solr config files and commands. Our next step is to start backing out our changes (which is a pain because we want some of the security enhancements in 6.4.1).

In your examples (above) you use $zk_quorum. Is that set to the name of a single ZooKeeper node, or is it a list of all the nodes? I've tried both approaches but it doesn't make any difference. Thanks, Tony
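On the $zk_quorum question: as I understand the Solr docs, the -z connection string can carry the whole ensemble as a comma-separated list, with the chroot appended once after the last host, along these lines (host names here are placeholders):

```
# All three ZooKeeper nodes plus the /solr chroot in one -z string
bin/solr start -c -z "zk1:2181,zk2:2181,zk3:2181/solr"
```

A single-node string should also work for testing, but listing the full quorum is what you'd want in production so Solr can survive the loss of one ZooKeeper node.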
02-21-2017
05:35 AM
@Jon Maestas I have hit this problem too, but simply re-upconfig-ing did not fix the issue: I get an UnknownHostException on the nameservice name specified in solr.hdfs.home.
Did you get any further insight into what was going wrong, and why re-executing the upconfig did the trick? (I have to say that the version of solrconfig.xml in ZooKeeper looks identical to my source version.) Regards, Tony