
As we know, many services such as Atlas (lineage), Ranger (audit logs), Log Search, and so on use Ambari Infra (Solr) for indexing data. So keeping Ambari Infra stable and available once it is in production is really important. These are the key points I came up with to make this happen.

Hardware –

Try to have a minimum of 3 Ambari Infra nodes with at least 1-2TB of disk for Solr data storage, though this mainly depends on how many components (Ranger, Atlas, Log Search, etc.) and how much data will be fed into Solr for indexing. A major driving factor for Solr performance is RAM. Solr requires sufficient memory for two separate things: one is the Java heap, the other is free memory for the OS disk cache. Let's say that you have a Solr index size of 8GB. If your OS, Solr's Java heap, and all other running programs require 4GB of memory, then an ideal memory size for that server is at least 12GB. So how much memory do I need for Ambari Infra? This is one of those questions that has no generic answer. You want a heap that's large enough that you don't hit OOM exceptions and constant garbage collection, but small enough that you're not wasting memory or running into huge garbage collection pauses. So ideally we can start with 8GB of total memory (leaving 4GB for the disk cache) initially, but even that might NOT be enough. The really important thing is to ensure that there is a high cache hit ratio on the OS disk cache.
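The sizing arithmetic above can be sketched in a few lines; the index size and overhead figures here are just the example numbers from the text, not measurements from a real cluster:

```shell
# Rough memory-sizing arithmetic for a Solr node (example figures from the text).
INDEX_GB=8        # on-disk Solr index size, ideally fully held in the OS disk cache
OVERHEAD_GB=4     # OS + Solr Java heap + all other running programs
# Ideal total RAM: enough for the heap/OS overhead plus the whole index in cache.
IDEAL_RAM_GB=$((INDEX_GB + OVERHEAD_GB))
echo "Ideal minimum RAM: ${IDEAL_RAM_GB}GB"
```

In practice you would plug in your own measured index size and heap requirements, then compare the result against the physical RAM on each Ambari Infra node.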

GC -

GC pauses are usually caused by full garbage collections, i.e. collections that pause all program execution to clean up memory. GC tuning is an art form, and what works for one person may not work for you.

Using the ConcurrentMarkSweep (CMS) collector with tuning parameters is a very good option for Solr, but with the latest Java 7 releases (7u72 at the time of this writing), G1 is looking like a better option, provided the -XX:+ParallelRefProcEnabled option is used. Information from Oracle engineers who specialize in GC indicates that the latest Java 8 will noticeably improve G1 performance over Java 7, but that has not been confirmed. Here are some ideas that hopefully you will find helpful:

  • The "MaxNewSize" should not be set low: because the applications use caches, setting it to a low value causes temporary cache data to be moved to the Old Generation prematurely. Once objects are moved to the Old Generation, they are only cleared during a full GC phase, and until then they occupy heap space. In general we should set "MaxNewSize" (the young generation heap size) to at least 1/6 (recommended) or 1/8 of the max heap. If the application creates a lot of short-lived temporary objects, MaxNewSize can be increased further. Example: -Xmx8192m -Xms8192m -XX:MaxNewSize=1365m
  • Because the throughput collector normally starts a GC cycle only when the heap is full (or reaches its max), the CMS collector needs to start a GC cycle much earlier in order to finish before the application runs out of memory. This is done by setting -XX:CMSInitiatingOccupancyFraction=65 -XX:+UseCMSInitiatingOccupancyOnly
  • This helps reduce long GC pauses, because it lets the JVM clean the heap more proactively when it reaches 65% occupancy instead of waiting until it is 90% full or more.
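Putting the ideas above together, here is a hypothetical set of CMS-based JVM options combining the example heap size, young-generation ratio, and occupancy settings. Treat this as a sketch: the variable name and exact values are illustrative, and the right place to set them depends on your Ambari Infra configuration.

```shell
# Sketch of CMS GC options combining the settings discussed above:
# 8GB fixed heap, young generation ~1/6 of heap, CMS starting at 65% occupancy.
GC_TUNE="-Xms8192m -Xmx8192m -XX:MaxNewSize=1365m \
-XX:+UseConcMarkSweepGC \
-XX:CMSInitiatingOccupancyFraction=65 -XX:+UseCMSInitiatingOccupancyOnly"
echo "$GC_TUNE"
```

Measure GC behavior (e.g. with GC logging enabled) before and after applying such options; as the text says, what works for one cluster may not work for yours.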

Zookeeper –

As we know, Solr uses Zookeeper to manage configs and coordination. Solr doesn't use Zookeeper that intensively compared to other services (Kafka, service HA, etc.). But since SolrCloud relies on Zookeeper, it can be very unstable if you have underlying performance issues that result in operations taking longer than the zkClientTimeout. Increasing that timeout can help, but addressing the underlying performance issues will yield better results. The default timeout of 30 seconds should be more than enough for a well-tuned SolrCloud. We always strongly recommend storing the Zookeeper data on physical disks separate from other services and the OS. Having dedicated machines when multiple services use ZK is even better, but not a requirement.
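If you do decide to raise the timeout, Solr reads zkClientTimeout (in milliseconds) as a system property referenced from solr.xml. A hypothetical override, passed through the JVM options, might look like this; the variable name here is illustrative:

```shell
# Hypothetical override: raise zkClientTimeout from the 30s default to 60s.
# Solr's solr.xml substitutes the zkClientTimeout system property (milliseconds).
ZK_CLIENT_TIMEOUT_MS=60000
SOLR_OPTS="-DzkClientTimeout=${ZK_CLIENT_TIMEOUT_MS}"
echo "$SOLR_OPTS"
```

Again, prefer fixing the underlying disk or GC issue over masking it with a larger timeout.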

Availability -

Having multiple shards with replication helps keep the Solr collections available in most cases, such as nodes going down. By default most of the collections are created with 1 shard and 1 replica. We can use the following commands to split a shard or recreate a collection with multiple shards.

For example, for the Ranger audit log we can split the existing shard or recreate the collection. If it is a new install or you are in the initial stages, I would delete and recreate the collection.

To delete the ranger_audits collection

If you don't have the Solr UI enabled or accessible, you can use the spnego principal and run the below command from the command line

curl -i --negotiate -u : ""

To create a new ranger_audits collection: numShards=3&replicationFactor=2&collection.configName=ranger_audits

Or from the command line

curl -i --negotiate -u : " numShards=3&replicationFactor=2&collection.configName=ranger_audits"

You can also specify the Solr nodes where your shards should land: numShard=3&replicationFactor=2&collection.configName=ranger_audits&

NOTE: Since we are using the same collection.configName, we don't need to provide the configs for the collection again.
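With Kerberos enabled, the delete and create calls above look roughly like the following sketch. The hostname is a placeholder; Ambari Infra's Solr commonly listens on port 8886, but verify the port on your cluster.

```shell
# Placeholder base URL for an Ambari Infra Solr instance (host is hypothetical).
SOLR_URL="http://infra-solr-host.example.com:8886/solr"

# Collections API calls: delete the default collection, then recreate it
# with 3 shards and 2 replicas, reusing the existing ranger_audits config set.
DELETE_URL="${SOLR_URL}/admin/collections?action=DELETE&name=ranger_audits"
CREATE_URL="${SOLR_URL}/admin/collections?action=CREATE&name=ranger_audits&numShards=3&replicationFactor=2&collection.configName=ranger_audits"

# With a valid Kerberos ticket (kinit as the spnego principal), the calls would be:
echo curl -i --negotiate -u : "\"${DELETE_URL}\""
echo curl -i --negotiate -u : "\"${CREATE_URL}\""
```

The sketch only prints the commands; drop the echo to actually run them against your cluster.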

Split Shard

The below command splits shard1 into 2 shards, shard1_0 and shard1_1
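A sketch of that call using Solr's Collections API SPLITSHARD action (same placeholder host and port as before):

```shell
# Placeholder base URL for an Ambari Infra Solr instance (host is hypothetical).
SOLR_URL="http://infra-solr-host.example.com:8886/solr"
# SPLITSHARD divides shard1 of ranger_audits into shard1_0 and shard1_1.
SPLIT_URL="${SOLR_URL}/admin/collections?action=SPLITSHARD&collection=ranger_audits&shard=shard1"
echo curl -i --negotiate -u : "\"${SPLIT_URL}\""
```

Note that splitting leaves the original shard in place until you clean it up, and the operation needs enough free disk to hold the new sub-shards.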

Disk Space

Sometimes having a high expiration for documents can fill up the disk in the case of heavy traffic, so configuring the right TTL can eliminate these disk space alerts. For example, by default ranger_audits has a 90-day TTL; this can be changed if needed.

If you haven't used Solr audits before and haven't enabled Ranger audits to Solr via Ambari yet, it is easy to adjust the TTL configuration. By default Ranger has its solrconfig.xml in /usr/hdp/

So you can directly edit the solrconfig.xml file and change +90DAYS to the desired value.


<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  ...
    <str name="fieldName">_ttl_</str>
    <str name="value">+60DAYS</str>
  ...
    <int name="autoDeletePeriodSeconds">86400</int>
    <str name="ttlFieldName">_ttl_</str>
    <str name="expirationFieldName">_expire_at_</str>
  ...
    <str name="fieldName">_expire_at_</str>
  ...
</updateRequestProcessorChain>
Afterwards, you can go to Ambari and enable Ranger Solr audits; the collection that is then created will use the new setting.

If you have already configured Ranger audits to Solr

Go to one of the Ambari Infra nodes that hosts a Solr instance. You can download the solrconfig.xml or change the existing one for the component you have.

To download

/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/ -cmd getfile /infra-solr/configs/ranger_audits/solrconfig.xml solrconfig.xml -z

Edit the downloaded solrconfig.xml and change the ttl
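The edit itself is a one-line substitution, e.g. with sed. The printf line below just creates a stand-in file containing the relevant fragment so the command can be demonstrated; on a real node you would run the sed command against the solrconfig.xml you downloaded.

```shell
# Stand-in for the downloaded solrconfig.xml fragment (illustration only).
printf '<str name="value">+90DAYS</str>\n' > solrconfig.xml
# Change the TTL from the 90-day default to 60 days, in place.
sed -i 's/+90DAYS/+60DAYS/' solrconfig.xml
cat solrconfig.xml
```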

Upload the config back to Zookeeper

/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/ -cmd putfile /infra-solr/configs/ranger_audits/solrconfig.xml solrconfig.xml -z

Reload the config

Or from the command line

curl -v --negotiate -u : ""
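A sketch of the reload call via the Collections API RELOAD action (placeholder host, port 8886 as assumed earlier):

```shell
# Placeholder base URL for an Ambari Infra Solr instance (host is hypothetical).
SOLR_URL="http://infra-solr-host.example.com:8886/solr"
# RELOAD makes the collection pick up the updated config from Zookeeper.
RELOAD_URL="${SOLR_URL}/admin/collections?action=RELOAD&name=ranger_audits"
echo curl -v --negotiate -u : "\"${RELOAD_URL}\""
```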

Example of a doc after changing the TTL from +90DAYS to +60DAYS; you can verify with

curl -i --negotiate -u : "" or, from the Solr query UI, set q to _ttl_:"+60DAYS"
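The verification query can be sketched as follows; the quote and plus characters in the _ttl_ value must be URL-encoded when passed through curl (host and port are placeholders, as before):

```shell
# Placeholder base URL for an Ambari Infra Solr instance (host is hypothetical).
SOLR_URL="http://infra-solr-host.example.com:8886/solr"
# URL-encoded form of q=_ttl_:"+60DAYS"  (%22 = double quote, %2B = plus sign).
QUERY_URL="${SOLR_URL}/ranger_audits/select?q=_ttl_:%22%2B60DAYS%22&rows=1"
echo curl -i --negotiate -u : "\"${QUERY_URL}\""
```

If newly indexed audit documents come back with _ttl_ set to +60DAYS and an _expire_at_ roughly 60 days out, the change took effect.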


If you use the $ZK_HOST defined in you should not need to include the /infra-solr prefix when getting the solrconfig.xml:

source /etc/ambari-infra-solr/conf/
/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/ -z $ZK_HOST \
-cmd getfile /configs/ranger_audits/solrconfig.xml solrconfig.xml  

 The same applies when uploading the edited config.

Last update: ‎02-13-2017 09:01 PM