<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Solr installation in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125744#M34498</link>
    <description>&lt;P&gt;Hi Saurabh, here is a partial response in case it's helpful: HDP Search (which includes Solr) should be deployed on all nodes that run HDFS. Ambari is not supported quite yet. The &lt;A href="http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_hdp_search/content/ch_hdp-search.html"&gt;HDP Search Guide&lt;/A&gt; contains basic information and links to additional documentation.  &lt;/P&gt;</description>
    <pubDate>Wed, 13 Jul 2016 01:12:00 GMT</pubDate>
    <dc:creator>lgeorge</dc:creator>
    <dc:date>2016-07-13T01:12:00Z</dc:date>
    <item>
      <title>Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125743#M34497</link>
      <description>&lt;P&gt;Team,&lt;/P&gt;&lt;P&gt;I am new to Solr and want to install it in my 5-node cluster. Before I go ahead, I have a few questions; can someone please help me with them?&lt;/P&gt;&lt;P&gt;1. Do I need to install Solr on all nodes, including masters and workers?&lt;/P&gt;&lt;P&gt;2. Can we monitor it via Ambari?&lt;/P&gt;&lt;P&gt;3. How will we configure Ranger security on top of Solr?&lt;/P&gt;&lt;P&gt;Note: I want to install Solr in cloud mode (SolrCloud).&lt;/P&gt;</description>
      <pubDate>Tue, 12 Jul 2016 19:24:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125743#M34497</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-12T19:24:27Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125744#M34498</link>
      <description>&lt;P&gt;Hi Saurabh, here is a partial response in case it's helpful: HDP Search (which includes Solr) should be deployed on all nodes that run HDFS. Ambari is not supported quite yet. The &lt;A href="http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_hdp_search/content/ch_hdp-search.html"&gt;HDP Search Guide&lt;/A&gt; contains basic information and links to additional documentation.  &lt;/P&gt;</description>
      <pubDate>Wed, 13 Jul 2016 01:12:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125744#M34498</guid>
      <dc:creator>lgeorge</dc:creator>
      <dc:date>2016-07-13T01:12:00Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125745#M34499</link>
      <description>&lt;P&gt;Thanks a lot &lt;A rel="user" href="https://community.cloudera.com/users/36/lgeorge.html" nodeid="36"&gt;@lgeorge&lt;/A&gt;. So do you mean that if I have a 5-node cluster (2 masters + 3 workers), I should install Solr on the 3 worker nodes only, or on 1 master and all 3 worker nodes as part of server &amp;amp; client?&lt;/P&gt;&lt;P&gt;I would also be very thankful if you could help me with the following questions.&lt;/P&gt;&lt;P&gt;1. If I install HDP Search, which includes Banana, will Banana run as the root user or as a banana user?&lt;/P&gt;&lt;P&gt;2. Can we do LDAP integration for both the Solr and Banana UIs?&lt;/P&gt;&lt;P&gt;3. How many resources do we need for HDP Search (heap, RAM, CPU)?&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 13 Jul 2016 19:04:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125745#M34499</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-13T19:04:26Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125746#M34500</link>
      <description>&lt;P&gt;Good questions. Not sure, but I'm checking. If I find answers I'll post them (or send the Solr expert this way) &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Jul 2016 22:25:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125746#M34500</guid>
      <dc:creator>lgeorge</dc:creator>
      <dc:date>2016-07-15T22:25:12Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125747#M34501</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/2273/saurabhmcakiet.html" nodeid="2273"&gt;@Saurabh Kumar&lt;/A&gt;&lt;P&gt;1. Solr does not follow Master - Slave model, rather its Leader - Follower model. &lt;/P&gt;&lt;P&gt;Each Solr node therefore will be used for Indexing/Query, in SolrCloud. &lt;/P&gt;&lt;P&gt;Considering that you have 5 nodes, the Solr Collection creation therefore, can be done with 2 Shards and RF (Replication Factor ) of 2. This will allow to use 4 nodes for Solr. &lt;/P&gt;&lt;P&gt;2. Each node which is supposed to be used for Solr, need to be installed with "lucidworks-hdpsearch".&lt;/P&gt;&lt;P&gt;3. Resource usage depends on the Size of Index ( present and estimated growth of index ). Refer following for further understanding on resource usage:&lt;/P&gt;&lt;P&gt;&lt;A href="https://wiki.apache.org/solr/SolrPerformanceProblems" target="_blank"&gt;https://wiki.apache.org/solr/SolrPerformanceProblems&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jul 2016 02:46:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125747#M34501</guid>
      <dc:creator>PARTOMIA</dc:creator>
      <dc:date>2016-07-18T02:46:45Z</dc:date>
    </item>
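Editor's note: the sizing arithmetic in the reply above can be sketched as follows. The numbers (2 shards, replication factor 2, 5 nodes) come from the post; everything else is a minimal illustration, not Solr's actual replica-placement logic.

```python
# Minimal sketch of the SolrCloud sizing arithmetic from the reply above.
# With numShards=2 and replicationFactor=2 the collection has 4 cores
# (one per shard replica); placing one core per node uses 4 of the 5 nodes.
num_shards = 2
replication_factor = 2
num_nodes = 5

total_cores = num_shards * replication_factor  # one core per shard replica
nodes_used = min(total_cores, num_nodes)       # at most one core per node here

print(total_cores, nodes_used)  # 4 cores spread across 4 of the 5 nodes
```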
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125748#M34502</link>
      <description>&lt;P&gt;Thanks &lt;A rel="user" href="https://community.cloudera.com/users/331/rsingh.html" nodeid="331"&gt;@Ravi&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 14:08:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125748#M34502</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-20T14:08:06Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125749#M34503</link>
      <description>&lt;P&gt;Thanks &lt;A rel="user" href="https://community.cloudera.com/users/36/lgeorge.html" nodeid="36"&gt;@lgeorge&lt;/A&gt;. &lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 14:08:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125749#M34503</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-20T14:08:24Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125750#M34504</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/331/rsingh.html" nodeid="331"&gt;@Ravi&lt;/A&gt; Can you please help me set up buffer memory for my Solr cluster? I am getting the following error.&lt;/P&gt;&lt;P&gt;[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2&lt;/P&gt;&lt;P&gt;Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Creating new collection 'test' using command:&lt;/P&gt;&lt;P&gt;&lt;A href="http://192.168.56.42:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test" target="_blank"&gt;http://192.168.56.42:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test&lt;/A&gt;&lt;/P&gt;&lt;P&gt;{&lt;/P&gt;&lt;P&gt;  "responseHeader":{&lt;/P&gt;&lt;P&gt;    "status":0,&lt;/P&gt;&lt;P&gt;    "QTime":4812},&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at &lt;A href="http://192.168.56.41:8983/solr:" target="_blank"&gt;http://192.168.56.41:8983/solr:&lt;/A&gt; Error CREATEing SolrCore 'test_shard1_replica1': Unable to create core [test_shard1_replica1] Caused by: Direct buffer memory"},&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;  "success":{"":{&lt;/P&gt;&lt;P&gt;      "responseHeader":{&lt;/P&gt;&lt;P&gt;        "status":0,&lt;/P&gt;&lt;P&gt;        "QTime":4659},&lt;/P&gt;&lt;P&gt;      "core":"test_shard2_replica1"}}}&lt;/P&gt;</description>
      <pubDate>Wed, 20 Jul 2016 17:01:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125750#M34504</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-20T17:01:57Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125751#M34505</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2273/saurabhmcakiet.html" nodeid="2273"&gt;@Saurabh Kumar&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The error which are you getting is :&lt;/P&gt;&lt;P&gt;"&lt;STRONG&gt;Unable to create core [test_shard1_replica1] Caused by: Direct buffer memory"} "&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Looks to me that you have set up the Direct Memory ( to enable Block Cache ) as true in the "solrconfig.xml" file i.e.&lt;/P&gt;&lt;PRE&gt;&amp;lt;bool name="solr.hdfs.blockcache.direct.memory.allocation"&amp;gt;true&amp;lt;/bool&amp;gt;&lt;/PRE&gt;&lt;P&gt;From your "solrconfig.xml", I see the config as:&lt;/P&gt;&lt;PRE&gt;&amp;lt;directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory"&amp;gt;
&amp;lt;str name="solr.hdfs.home"&amp;gt;hdfs://m1.hdp22:8020/user/solr&amp;lt;/str&amp;gt;
&amp;lt;str name="solr.hdfs.confdir"&amp;gt;/etc/hadoop/conf&amp;lt;/str&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.enabled"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.blockcache.slab.count"&amp;gt;1&amp;lt;/int&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.direct.memory.allocation"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.blockcache.blocksperbank"&amp;gt;16384&amp;lt;/int&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.read.enabled"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;bool name="solr.hdfs.nrtcachingdirectory.enable"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb"&amp;gt;16&amp;lt;/int&amp;gt;
&amp;lt;int name="solr.hdfs.nrtcachingdirectory.maxcachedmb"&amp;gt;192&amp;lt;/int&amp;gt;
&amp;lt;/directoryFactory&amp;gt;&lt;/PRE&gt;&lt;P&gt;I suggest turning off direct memory allocation if you do not plan to use it for now, and then retrying the collection creation.&lt;/P&gt;&lt;P&gt;To disable it, edit "solrconfig.xml" and look for the property "solr.hdfs.blockcache.direct.memory.allocation".&lt;/P&gt;&lt;P&gt;Set the value of this property to "false", i.e.&lt;/P&gt;&lt;PRE&gt;&amp;lt;bool name="solr.hdfs.blockcache.direct.memory.allocation"&amp;gt;false&amp;lt;/bool&amp;gt;&lt;/PRE&gt;&lt;P&gt;The final "solrconfig.xml" will therefore look like:&lt;/P&gt;&lt;PRE&gt;&amp;lt;directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory"&amp;gt;
&amp;lt;str name="solr.hdfs.home"&amp;gt;hdfs://m1.hdp22:8020/user/solr&amp;lt;/str&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.enabled"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.blockcache.slab.count"&amp;gt;1&amp;lt;/int&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.direct.memory.allocation"&amp;gt;false&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.blockcache.blocksperbank"&amp;gt;16384&amp;lt;/int&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.read.enabled"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;bool name="solr.hdfs.blockcache.write.enabled"&amp;gt;false&amp;lt;/bool&amp;gt;
&amp;lt;bool name="solr.hdfs.nrtcachingdirectory.enable"&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb"&amp;gt;16&amp;lt;/int&amp;gt;
&amp;lt;int name="solr.hdfs.nrtcachingdirectory.maxcachedmb"&amp;gt;192&amp;lt;/int&amp;gt;
&amp;lt;/directoryFactory&amp;gt;&lt;/PRE&gt;</description>
      <pubDate>Thu, 21 Jul 2016 02:28:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125751#M34505</guid>
      <dc:creator>PARTOMIA</dc:creator>
      <dc:date>2016-07-21T02:28:23Z</dc:date>
    </item>
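Editor's note: for readers wondering why direct buffer memory runs out with the config above, here is a rough sketch of the off-heap memory the HDFS block cache requests when direct memory allocation is true. The 8192-byte cache block size is the HdfsDirectoryFactory default and is stated here as an assumption; with slab.count=1 and blocksperbank=16384, that is one 128 MB slab, which must fit in the JVM's direct memory limit.

```python
# Rough sketch of the off-heap memory the HDFS block cache requests when
# solr.hdfs.blockcache.direct.memory.allocation is true.
# Assumes the default cache block size of 8192 bytes (8 KB).
BLOCK_SIZE_BYTES = 8192

def blockcache_direct_memory_bytes(slab_count, blocks_per_bank):
    """Direct memory requested: slab_count slabs, each holding
    blocks_per_bank cache blocks of BLOCK_SIZE_BYTES each."""
    return slab_count * blocks_per_bank * BLOCK_SIZE_BYTES

# Values from the solrconfig.xml in the post above:
needed = blockcache_direct_memory_bytes(slab_count=1, blocks_per_bank=16384)
print(needed // (1024 * 1024), "MB")  # 128 MB of direct (off-heap) memory
```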
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125752#M34506</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/331/rsingh.html" nodeid="331"&gt;@Ravi&lt;/A&gt;: Thanks a lot, It helped me to avoid direct memory issue but now I encountered another issue, so can you please help me on this also. &lt;/P&gt;&lt;P&gt;[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2&lt;/P&gt;&lt;P&gt;Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Creating new collection 'test' using command:&lt;/P&gt;&lt;P&gt;&lt;A href="http://192.168.56.41:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test" target="_blank"&gt;http://192.168.56.41:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test&lt;/A&gt;&lt;/P&gt;&lt;P&gt;{&lt;/P&gt;&lt;P&gt;  "responseHeader":{&lt;/P&gt;&lt;P&gt;    "status":0,&lt;/P&gt;&lt;P&gt;    "QTime":6299},&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;  "failure":{"":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at &lt;A href="http://192.168.56.41:8983/solr:" target="_blank"&gt;http://192.168.56.41:8983/solr:&lt;/A&gt; Error CREATEing SolrCore 'test_shard1_replica1': Unable to create core [test_shard1_replica1] Caused by: Java heap space"},&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;  "success":{"":{&lt;/P&gt;&lt;P&gt;      "responseHeader":{&lt;/P&gt;&lt;P&gt;        "status":0,&lt;/P&gt;&lt;P&gt;        "QTime":5221},&lt;/P&gt;&lt;P&gt;      
"core":"test_shard2_replica1"}}}&lt;/P&gt;</description>
      <pubDate>Thu, 21 Jul 2016 14:32:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125752#M34506</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-21T14:32:19Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125753#M34507</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/331/rsingh.html" nodeid="331"&gt;@Ravi&lt;/A&gt;&lt;P&gt;Hey Ravi, thanks I have solved it by changing value of SOLR_HEAP to 1024 MB in /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh. Thanks once again for all your help. &lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;SOLR_HEAP="1024m"&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;[solr@m1 solr]$ /opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2&lt;/P&gt;&lt;P&gt;Connecting to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Uploading /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf for config test to ZooKeeper at m1.hdp22:2181,m2.hdp22:2181,w1.hdp22:2181&lt;/P&gt;&lt;P&gt;Creating new collection 'test' using command:&lt;/P&gt;&lt;P&gt;&lt;A href="http://192.168.56.42:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test"&gt;http://192.168.56.42:8983/solr/admin/collections?action=CREATE&amp;amp;name=test&amp;amp;numShards=2&amp;amp;replicationFactor=2&amp;amp;maxShardsPerNode=2&amp;amp;collection.configName=test&lt;/A&gt;&lt;/P&gt;&lt;P&gt;{&lt;/P&gt;&lt;P&gt;  "responseHeader":{&lt;/P&gt;&lt;P&gt;    "status":0,&lt;/P&gt;&lt;P&gt;    "QTime":8494},&lt;/P&gt;&lt;P&gt;  "success":{"":{&lt;/P&gt;&lt;P&gt;      "responseHeader":{&lt;/P&gt;&lt;P&gt;        "status":0,&lt;/P&gt;&lt;P&gt;        "QTime":8338},&lt;/P&gt;&lt;P&gt;      "core":"test_shard1_replica1"}}}&lt;/P&gt;</description>
      <pubDate>Thu, 21 Jul 2016 14:46:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125753#M34507</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-21T14:46:18Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125754#M34508</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/2273/saurabhmcakiet.html" nodeid="2273"&gt;@Saurabh Kumar&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You are welcome.&lt;/P&gt;&lt;P&gt;For the issue with &lt;STRONG&gt;Java heap space &lt;/STRONG&gt;, its due to Java_Heap for Solr Process. By default Solr process is started with only 512MB. We can increase this by editing the Solr config files or via solr command line options as:&lt;/P&gt;&lt;PRE&gt;/opt/lucidworks-hdpsearch/solr/bin/solr -m 2g create -c test -d /opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf -n test -s 2 -rf 2&lt;/PRE&gt;&lt;P&gt;This will resolve the &lt;STRONG&gt;Java heap space&lt;/STRONG&gt; issue.&lt;/P&gt;</description>
      <pubDate>Thu, 21 Jul 2016 15:13:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125754#M34508</guid>
      <dc:creator>PARTOMIA</dc:creator>
      <dc:date>2016-07-21T15:13:20Z</dc:date>
    </item>
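Editor's note: the two fixes reported in this thread (SOLR_HEAP="1024m" in solr.in.sh, and running bin/solr with -m 2g) both set the JVM heap for the Solr process. A hypothetical helper, not part of Solr, that converts such JVM-style size strings to bytes for comparison:

```python
# Hypothetical helper (not part of Solr) that converts JVM-style size
# strings such as the "1024m" / "2g" seen in this thread into bytes.
def jvm_size_to_bytes(size: str) -> int:
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    size = size.strip().lower()
    if size[-1] in units:
        return int(size[:-1]) * units[size[-1]]
    return int(size)  # plain byte count, no suffix

print(jvm_size_to_bytes("1024m"))  # 1073741824 (the SOLR_HEAP value above)
print(jvm_size_to_bytes("2g"))     # 2147483648 (the "-m 2g" value)
```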
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125755#M34509</link>
      <description>&lt;P&gt;Thanks &lt;A rel="user" href="https://community.cloudera.com/users/331/rsingh.html" nodeid="331"&gt;@Ravi&lt;/A&gt;. I have solved it by changing value of SOLR_HEAP to
1024 MB in /opt/lucidworks-hdpsearch/solr/bin/solr.in.sh. Thanks once again for
all your help.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;SOLR_HEAP="1024m"&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;[solr@m1 solr]$
/opt/lucidworks-hdpsearch/solr/bin/solr create -c test -d
/opt/lucidworks-hdpsearch/solr/server/solr/configsets/data_driven_schema_configs_hdfs/conf
-n test -s 2 -rf 2&lt;/P&gt;</description>
      <pubDate>Fri, 22 Jul 2016 12:27:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125755#M34509</guid>
      <dc:creator>SK1</dc:creator>
      <dc:date>2016-07-22T12:27:25Z</dc:date>
    </item>
    <item>
      <title>Re: Solr installation</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125756#M34510</link>
      <description>&lt;P&gt;
	Hi Saurabh,&lt;/P&gt;&lt;P&gt;
	I know this is an older question, but if you (or anyone else) are still looking to monitor Solr Cloud via Ambari, layering a custom service on top of your existing installation might be useful.  The following will allow you to integrate Solr Cloud into Ambari, complete with alerts and the ability to start, stop, and monitor status.&lt;/P&gt;&lt;P&gt;
	This setup assumes an existing, standard Solr Cloud installation, with the Solr Cloud UI available on port 8983.&lt;/P&gt;&lt;HR /&gt;
&lt;P&gt;On the Ambari node, create &lt;STRONG&gt;/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SOLR/package/scripts&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;In &lt;STRONG&gt;/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SOLR&lt;/STRONG&gt;, create &lt;STRONG&gt;alerts.json&lt;/STRONG&gt; and &lt;STRONG&gt;metainfo.xml&lt;/STRONG&gt;, as follows (you can, of course, change the version to whatever version of Solr you have installed):&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;alerts.json&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;{
  "SOLR": {
    "service": [],
    "SOLR_CLOUD": [
      {
        "name" : "solr_cloud_ui",
        "label" : "Solr Cloud UI",
        "description" : "This host-level alert is triggered if the Solr Cloud Web UI is unreachable.",
        "interval" : 1,
        "scope" : "ANY",
        "source" : {
          "type" : "WEB",
          "uri" : {
            "http" : "http://0.0.0.0:8983",
            "connection_timeout" : 5.0
          },
          "reporting" : {
            "ok" : {
              "text" : "HTTP {0} response in {2:.3f}s"
            },
            "warning" : {
              "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
            },
            "critical" : {
              "text" : "Connection failed to {1} ({3})"
            }
          }
        }
      }
    ]
  }
}
&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;metainfo.xml&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;&amp;lt;?xml version="1.0"?&amp;gt;
&amp;lt;metainfo&amp;gt;
  &amp;lt;schemaVersion&amp;gt;2.0&amp;lt;/schemaVersion&amp;gt;
  &amp;lt;services&amp;gt;
    &amp;lt;service&amp;gt;
      &amp;lt;name&amp;gt;SOLR&amp;lt;/name&amp;gt;
      &amp;lt;displayName&amp;gt;Solr&amp;lt;/displayName&amp;gt;
      &amp;lt;comment&amp;gt;Solr is an open source enterprise search platform, written in Java, from the Apache Lucene project.&amp;lt;/comment&amp;gt;
      &amp;lt;version&amp;gt;5.2.1&amp;lt;/version&amp;gt;
      &amp;lt;components&amp;gt;
        &amp;lt;component&amp;gt;
          &amp;lt;name&amp;gt;SOLR_CLOUD&amp;lt;/name&amp;gt;
          &amp;lt;displayName&amp;gt;Solr Cloud Server&amp;lt;/displayName&amp;gt;
          &amp;lt;category&amp;gt;MASTER&amp;lt;/category&amp;gt;
          &amp;lt;cardinality&amp;gt;1+&amp;lt;/cardinality&amp;gt;
          &amp;lt;commandScript&amp;gt;
            &amp;lt;script&amp;gt;scripts/solrcloud.py&amp;lt;/script&amp;gt;
            &amp;lt;scriptType&amp;gt;PYTHON&amp;lt;/scriptType&amp;gt;
            &amp;lt;timeout&amp;gt;600&amp;lt;/timeout&amp;gt;
          &amp;lt;/commandScript&amp;gt;
        &amp;lt;/component&amp;gt;
      &amp;lt;/components&amp;gt;
    &amp;lt;/service&amp;gt;
  &amp;lt;/services&amp;gt;
&amp;lt;/metainfo&amp;gt;
&lt;/PRE&gt;&lt;P&gt;In &lt;STRONG&gt;/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SOLR/package/scripts&lt;/STRONG&gt;, create &lt;STRONG&gt;params.py&lt;/STRONG&gt; and &lt;STRONG&gt;solrcloud.py&lt;/STRONG&gt;, as follows:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;params.py&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;cloud_stop = ('/sbin/service', 'solr', 'stop')
cloud_start = ('/sbin/service', 'solr', 'start')
cloud_pid_file = '/opt/lucidworks-hdpsearch/solr/bin/solr-8983.pid'
&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;solrcloud.py&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;from resource_management import *
from resource_management.core.resources.system import Execute

class Master(Script):
  def install(self, env):
    print 'Installing Solr Cloud'

  def stop(self, env):
    import params
    env.set_params(params)

    Execute((params.cloud_stop), sudo=True)

  def start(self, env):
    import params
    env.set_params(params)

    Execute((params.cloud_start), sudo=True)

  def status(self, env):
    import params
    env.set_params(params)

    from resource_management.libraries.functions import check_process_status

    check_process_status(params.cloud_pid_file)

  def configure(self, env):
    print 'Configuring Solr Cloud'

if __name__ == "__main__":
  Master().execute()
&lt;/PRE&gt;&lt;P&gt;At this point, after restarting Ambari, you will be able to "install" Solr Cloud via the Ambari Add Service wizard, specifying a Solr Cloud Server on whichever hosts Solr is already installed.  As you might note from solrcloud.py, the installation doesn't do anything other than configure Ambari to be aware that the components exist on the hosts.&lt;/P&gt;&lt;P&gt;Once the installation is complete, Solr will be listed as an Ambari Service, with each Solr Cloud server listed as an individual Master component.&lt;/P&gt;&lt;P&gt;Hope this helps.&lt;/P&gt;&lt;P&gt;Joe&lt;/P&gt;</description>
      <pubDate>Fri, 30 Dec 2016 22:48:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Solr-installation/m-p/125756#M34510</guid>
      <dc:creator>josephmontenaro</dc:creator>
      <dc:date>2016-12-30T22:48:28Z</dc:date>
    </item>
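Editor's note: the status() method in the post above delegates to Ambari's check_process_status, which reads the pid file and probes the process. A rough, standalone Python 3 equivalent for illustration (assumes a POSIX system; this is not the actual Ambari implementation):

```python
import os

def process_is_running(pid_file: str) -> bool:
    """Rough stand-in for Ambari's check_process_status: read the pid
    from pid_file and probe it with signal 0 (delivers no signal; POSIX)."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False  # missing or malformed pid file
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False  # no such process
    except PermissionError:
        return True   # process exists but is owned by another user
    return True
```

For the custom service above, pid_file would be the cloud_pid_file from params.py (/opt/lucidworks-hdpsearch/solr/bin/solr-8983.pid).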
  </channel>
</rss>

