
SOLR Cloud & HDFS - problems with write.lock

Contributor

I am trying to create a multi-node SOLR Cloud configuration using HDFS for index storage.

I keep running into problems with a 'write.lock' when I try to create a multi-shard index.

For example, I have 2 SOLR nodes running on ports 34010 and 34011. If I try to create a 2-shard collection, one shard gets created successfully but the other fails to create because of an existing write.lock. It appears that the 2 nodes are getting in each other's way.

The following CREATE command:

http://xxxxxx:34010/solr/admin/collections?action=CREATE&name=car_cloud&numShards=2&replicationFacto...

Returns the following error. It shows that one shard is successfully created but the other fails because of an existing write.lock. (Note: before running this command I deleted all the relevant directories from HDFS (hdfs dfs -rm -r /solr/car_cloud), so the lock file did not exist prior to running the command.)

<response>
<lst name="responseHeader"> 
<int name="status">0</int> 
<int name="QTime">34958</int>
</lst> 
<lst name="failure"> 
<str name="99.9.99.9:34010_solr">
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://99.9.99.9:34010/solr: Error CREATEing SolrCore 'car_cloud_shard2_replica1': Unable to create core [car_cloud_shard2_replica1]
Caused by: /solr/car_cloud/core_node1/data/index/write.lock for client 99.9.99.9 already exists
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2606)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2493)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2377)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:708)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
</str> 
</lst>
<lst name="success"> 
<lst name="99.9.99.9:34011_solr"> 
<lst name="responseHeader"> 
<int name="status">0</int> 
<int name="QTime">4031</int>
</lst> 
<str name="core">car_cloud_shard1_replica1</str>
</lst>
</lst>
</response>

Any suggestions?

1 ACCEPTED SOLUTION

Super Guru

@Tony Bolt

maxShardsPerNode is a setting specifically for replicas: https://cwiki.apache.org/confluence/display/solr/Collections+API. It defines how many replicas per node will be allowed for that collection. If you tried to increase the replica count to 2, I think you would see the error you are expecting.

Along the same lines as my previous comment on Dec 2, I think the problem may be that you are starting all of the Solr nodes from the same instance directory on the server. Typically you would create an instance directory for each instance of Solr on a server, for example /opt/solr/solr_server1 and /opt/solr/solr_server2. Normally, when you start 2 instances of SolrCloud on the same server, you would "cd" into the respective Solr instance directories and then run the start commands from there.

I think that "solr.home" is defined by the current working directory by default. If you start all 4 instances from the same location, the instances are treated as the same "nodes" and this is likely creating some confusion for Solr.


9 REPLIES


Hi @Tony Bolt

What does the <directoryFactory> element in your solrconfig.xml look like?

Just to make sure: have you uploaded the configuration to the Solr ZooKeeper znode?

Did you try removing the collection and all of its core folders and then trying again? Maybe the write.lock is from a previous test and the folders haven't been removed properly.

Contributor

Thanks @JonasStraub

Your post convinced me that this should work, so I cleaned everything up, restarted the nodes and I was able to create a multi-shard collection.

However, this process only works if I start the nodes first and then add the collection. As soon as I stop the nodes, I can't bring them back up again because of the 'write.lock' issue.

I am using Solr V6.3.0 and HDP 2.4.

The following process works fine:

I start 4 nodes (all on the same physical Linux system); the first node starts an embedded ZooKeeper on port 35010 (the Solr port plus 1000), which the other three nodes connect to:

bin/solr start -c -p 34010

bin/solr start -c -p 34011 -z localhost:35010

bin/solr start -c -p 34012 -z localhost:35010

bin/solr start -c -p 34013 -z localhost:35010

The <directoryFactory> element is as follows (with some obfuscation):

<directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.HdfsDirectoryFactory}">

  <str name="solr.hdfs.home">hdfs://DEV1/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
  <int name="solr.hdfs.blockcache.slab.count">1</int>
  <bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
  <int name="solr.hdfs.blockcache.blocksperbank">16384</int>
  <bool name="solr.hdfs.blockcache.read.enabled">true</bool>
  <bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
  <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
  <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>

  <bool name="solr.hdfs.security.kerberos.enabled">true</bool>
  <str name="solr.hdfs.security.kerberos.keytabfile">/etc/security/keytabs/hdfs.headless.keytab</str>
  <str name="solr.hdfs.security.kerberos.principal">hdfs-DEV1@DHD1.XXXXX.XXXXXXX</str>
</directoryFactory>

I also have a lock element:

<lockType>${solr.lock.type:hdfs}</lockType>

The solrconfig.xml has been uploaded to the ZooKeeper instance.

With the 4 nodes active, I create a collection as follows:

http://XXXXX:34010/solr/admin/collections?action=CREATE&name=car_cloud&numShards=4&replicationFactor...

This works fine and the 4 shards are distributed across the 4 nodes started above. Loading and querying the collection performs very well. I can see from solr.log that all 4 nodes are active with their respective shards.
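
For illustration (a hypothetical URL, not my exact queries), queries of this form work against any of the nodes:

http://XXXXX:34010/solr/car_cloud/select?q=*:*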

However, if I stop the nodes, I cannot successfully restart them and access the collection.

The first node comes up fine. The solr admin console shows the 4 shards assigned to this node.

When I bring up the second node, I run into the locking problem:

2016-12-01 22:28:09.599 ERROR (coreLoadExecutor-6-thread-1-processing-n:39.7.48.7:34011_solr) [c:car_cloud s:shard1 r:core_node2 x:car_cloud_shard1_replica1] o.a.s.c.CoreContainer Error creating core [car_cloud_shard1_replica1]: Index dir 'hdfs://DEV1/solr/car_cloud/core_node2/data/index/' of core 'car_cloud_shard1_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
org.apache.solr.common.SolrException: Index dir 'hdfs://DEV1/solr/car_cloud/core_node2/data/index/' of core 'car_cloud_shard1_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:903)

Any ideas?


Hi @Tony Bolt

Did you increase the "sleep time"?

Increase the sleep time from 5 to 30 seconds in /opt/lucidworks-hdpsearch/solr/bin/solr

sed -i 's/(sleep 5)/(sleep 30)/g' /opt/lucidworks-hdpsearch/solr/bin/solr

Since all Solr data is stored in the Hadoop filesystem, it is important to adjust the time Solr waits before it "kills" the Solr process (whenever you execute "service solr stop/restart"). If this setting is not adjusted, Solr will try to shut down gracefully, but because shutdown takes a bit longer when using HDFS, it will simply kill the process and, most of the time, leave the indexes of your collections locked. If the index of a collection is locked, the following exception is shown after the startup routine: "org.apache.solr.common.SolrException: Index locked for write".

Original article can be found here => https://community.hortonworks.com/articles/15159/securing-solr-collections-with-ranger-kerberos.html
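
As a quick check after stopping the nodes (a sketch, using the /solr HDFS root and the car_cloud collection paths from earlier in this thread; adjust to your collection), you can list any leftover lock files:

hdfs dfs -ls /solr/car_cloud/*/data/index/write.lock

If that still shows write.lock files after what should have been a clean shutdown, the stop timeout above is a likely suspect.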

Contributor

Thanks @Jonas Straub, @Michael Young

I have been distracted for a few days but now I'm back to this issue again.

I tried playing around with the sleep time in the stop process, but it didn't seem to have any impact. Now I'm trying to understand the maxShardsPerNode parameter. I have it set to 1 for my collection: http://xxxxxx:34010/solr/admin/collections?action=CREATE&name=aircargo_cloud&numShards=4&replication...

As I mentioned earlier, if I create this collection when my 4 nodes are active, all is well. Each node gets one shard and everything works very well. Remember, I am using HDFS for my index storage.

If I shut down the 4 nodes and then bring the first one back up, it grabs all 4 shards, which seems contrary to the maxShardsPerNode setting. I expected the first node to grab one shard and the other 3 to remain unassigned. (At this stage, with just a single node running, the collection is accessible and queries run successfully.)

Because the first node acquires all 4 shards, it is not surprising that when I bring up the second node it recognises that its shard is already locked. Once the second node comes up (and reports the locking error), the collection becomes unusable. An attempted query returns:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 404 Not Found</title>
</head>
<body>
<h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/aircargo_cloud/select. Reason: <pre> Not Found</pre></p>
</body>
</html>

Am I misunderstanding the purpose/operation of the maxShardsPerNode parameter?

Am I confusing things by trying to run all 4 nodes on the same server? I have a chunky server available (it has 40 cores and 256GB of memory). It has a very fast connection to the Hadoop cluster where I'm storing the Solr indexes. So it seems like a reasonable configuration from a resource availability viewpoint.

Each node has the same 'HOST_NAME' (as defined in solr.in.sh). Is this confusing things?

When I bring up just the first node (and queries work OK), the state.json file (available in ZooKeeper) looks as follows (with some obfuscation). You can see that the node_name is the same for all 4 shards, even though, right at the bottom, we see "maxShardsPerNode":"1".

{"aircargo_cloud":{ "replicationFactor":"1",

"shards": { "shard1":{ "range":"80000000-bfffffff", "state":"active", "replicas":{"core_node1":{ "core":"aircargo_cloud_shard1_replica1",

"dataDir":"hdfs://HDP/solr/aircargo_cloud/core_node1/data/",

"base_url":"http://XXXXXX.YYYYYY.ZZZZZZ:34010/solr",

"node_name":"XXXXXX.YYYYYY.ZZZZZZ:34010_solr",

"state":"active",

"ulogDir":"hdfs://HDP/solr/aircargo_cloud/core_node1/data/tlog",

"leader":"true"}}}, "shard2":{ "range":"c0000000-ffffffff", "state":"active", "replicas":{"core_node3":{ "core":"aircargo_cloud_shard2_replica1",

"dataDir":"hdfs://HDP/solr/aircargo_cloud/core_node3/data/",

"base_url":"http://XXXXXX.YYYYYY.ZZZZZZ:34010/solr",

"node_name":"XXXXXX.YYYYYY.ZZZZZZ:34010_solr",

"state":"active", "ulogDir":"hdfs://HDP/solr/aircargo_cloud/core_node3/data/tlog",

"leader":"true"}}}, "shard3":{ "range":"0-3fffffff", "state":"active", "replicas":{"core_node4":{ "core":"aircargo_cloud_shard3_replica1",

"dataDir":"hdfs://HDP/solr/aircargo_cloud/core_node4/data/",

"base_url":"http://XXXXXX.YYYYYY.ZZZZZZ:34010/solr",

"node_name":"XXXXXX.YYYYYY.ZZZZZZ:34010_solr",

"state":"active", "ulogDir":"hdfs://HDP/solr/aircargo_cloud/core_node4/data/tlog", "leader":"true"}}}, "shard4":{ "range":"40000000-7fffffff", "state":"active", "replicas":{"core_node2":{

"core":"aircargo_cloud_shard4_replica1",

"dataDir":"hdfs://HDP/solr/aircargo_cloud/core_node2/data/",

"base_url":"http://XXXXXX.YYYYYY.ZZZZZZ:34010/solr",

"node_name":"XXXXXX.YYYYYY.ZZZZZZ:34010_solr",

"state":"active", "ulogDir":"hdfs://HDP/solr/aircargo_cloud/core_node2/data/tlog", "leader":"true"}}}

}, "router":{"name":"compositeId"},

"maxShardsPerNode":"1",

"autoAddReplicas":"false"}}

Regards Tony

Super Guru

@Tony Bolt

I wonder if you are running into something like this: https://issues.apache.org/jira/browse/SOLR-8335? Is Solr being shut down cleanly? What process are you using to stop the nodes? You can remove the write.lock file on HDFS before starting the nodes.
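
For example (a sketch, using the path from your earlier error message; repeat for the other core_node directories, or use a wildcard, as needed):

hdfs dfs -rm /solr/car_cloud/core_node2/data/index/write.lock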

I think the issue is likely related to running multiple instances of Solr on the same server where they are using the same Solr home directory. This is causing each of the Solr cores to attempt to create a write.lock file in the same location on HDFS. You may want to try using different Solr homes for each core to see if that solves the issue.

The start approach you are using works OK when you are not using HDFS, because the -c option will typically create a node-specific directory on the local disk. I wonder if that is where your problem lies.

Contributor

Thanks @Michael Young

Yes, I saw the JIRA you mentioned, but I don't think that is my problem. When I stop, say, just one node, I see that one of the write.lock files disappears (I'm stopping them with the "solr stop" command).

I tried your idea of changing the solr home - but this seemed to confuse the issue. I got new errors because the node was looking for a solr.xml file in the specified directory instead of getting it from ZK. So, I abandoned that approach.

I haven't worked out what is going on yet. When I stop one of my 4 nodes, I see that the lock file disappears from one of the 4 shards (in HDFS). When I re-start the same node, the lock file re-appears (which looks good) but then that node complains about one of the other shards already being locked.

I think there is more to the hdfs lock type than just the file that gets created in the /index directory in HDFS. The path of the lock file is identical for each shard except for one element, which looks like core_node1, core_node2, etc. It is not clear how these literals tie back to the separate nodes.

I have to leave this for the weekend. I'll post any new findings next week.

Regards

Tony

Super Guru

@Tony Bolt

maxShardsPerNode is a setting specifically for replicas: https://cwiki.apache.org/confluence/display/solr/Collections+API. It defines how many replicas per node will be allowed for that collection. If you tried to increase the replica count to 2, I think you would see the error you are expecting.

Along the same lines as my previous comment on Dec 2, I think the problem may be that you are starting all of the Solr nodes from the same instance directory on the server. Typically you would create an instance directory for each instance of Solr on a server, for example /opt/solr/solr_server1 and /opt/solr/solr_server2. Normally, when you start 2 instances of SolrCloud on the same server, you would "cd" into the respective Solr instance directories and then run the start commands from there.

I think that "solr.home" is defined by the current working directory by default. If you start all 4 instances from the same location, the instances are treated as the same "nodes" and this is likely creating some confusion for Solr.

Contributor

Thanks @Michael Young

Since this is a SolrCloud configuration, all the nodes share the same set of configuration files held in ZooKeeper. So even if I invoke the nodes from different Linux directory hierarchies, they will all share the same solrconfig.xml and managed_schema (from ZK).

Running multiple Solr instances from the same Linux directories does seem to 'work'. For example, my solrconfig.xml sets solr.hdfs.home to the directory /solr in HDFS:

<str name="solr.hdfs.home">hdfs://HDP/solr</str> 

But, when I create my 4 shard collection it automatically creates node-specific subdirectories in HDFS:

drwxrwx---   - hdfs dctbo          0 2016-12-14 09:06 /solr/aircargo_cloud/core_node1
drwxrwx---   - hdfs dctbo          0 2016-12-14 09:06 /solr/aircargo_cloud/core_node2
drwxrwx---   - hdfs dctbo          0 2016-12-14 09:06 /solr/aircargo_cloud/core_node3
drwxrwx---   - hdfs dctbo          0 2016-12-14 09:06 /solr/aircargo_cloud/core_node4

So, to me, this all looks fine.

Nevertheless, I gave your suggestion a try - and it worked!

When I replicated my whole Solr directory structure and started 2 nodes from the separate directories, I could see from a 'ps' command that solr.solr.home was being set to a directory in the respective hierarchies. I had a look in the respective solr.home directories and saw that a new sub-directory had been created in each one, relating to the shard that that node 'owns'. These directories look like <collectionName>_shard1_replica1 and <collectionName>_shard2_replica1 respectively. Each of these directories contains just a single file, 'core.properties'.
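
For reference, a core.properties created by the Collections API typically contains just a handful of entries along these lines (illustrative only; the exact keys can vary by Solr version, and the names below are taken from my earlier 4-shard collection):

name=aircargo_cloud_shard1_replica1
shard=shard1
collection=aircargo_cloud
coreNodeName=core_node1

Core discovery appears to walk solr.home looking for exactly these files, which would explain the behaviour I describe next.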

When I was running all my nodes from the same Linux directories, these shard directories were all in the same solr.home directory. Presumably, Solr was using these sub-directories to determine which shards should be acquired by each node instance, and my multiple nodes were clobbering each other.

I've been experimenting with a 2-shard collection, but I'll ramp back up to my 4-shard collection and make sure that everything stays good. It looks very promising.

There is a side-benefit to separating the directories out like this: the node logs get separated as well. (When I was running multiple nodes from the same directories, they were sharing the same log directory and it was a bit hard to see which log belonged to which node.)

I suppose I could achieve much the same outcome by explicitly providing additional parameters to my 'bin/solr start' commands to specify a different solr.solr.home directory (and maybe log directory) for each node. I'm not sure that this would be a better solution.

Thanks for sticking with me on this.

Regards

Tony

Super Guru

@Tony Bolt

I'm glad it worked! That is fantastic news. I'm sorry it took so long to ferret out the cause. It seems like a small detail, but it sure does matter. To your point, having separate instance directories will keep your logs nice and clean as well.

While your solrconfig.xml has "solr.hdfs.home" defined, that is not the same thing as "solr.home". You could specify solr.home when you start Solr; that's probably what I would do if I were using an init.d script to run 4 different instances on the same host.
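
Something along these lines should work (a sketch; -s sets solr.solr.home for that instance, and the instance paths are just examples following the layout suggested earlier):

bin/solr start -c -p 34010 -s /opt/solr/solr_server1/server/solr
bin/solr start -c -p 34011 -z localhost:35010 -s /opt/solr/solr_server2/server/solr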

Please accept my answer if you feel it was useful. 🙂