
How to add NiFi and Solr using Ambari

Contributor

Hi Team,

I want to add Solr and NiFi to my existing Ambari cluster. Please help.

Best regards

~Kishore

1 ACCEPTED SOLUTION

Master Mentor

@Kishore Kumar

Additionally, please refer to:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_solr-search-installation/content/ch_hdp-...

=> Download the Ambari management pack to the Ambari Server host. In this example, /tmp is a temporary directory that stores the management pack before it is installed.

cd /tmp
wget http://public-repo-1.hortonworks.com/HDP-SOLR/hdp-solr-ambari-mp/solr-service-mpack-2.2.8.tar.gz

=> Install the management pack on the Ambari Server host, using the following command:

# ambari-server install-mpack --mpack=/tmp/solr-service-mpack-2.2.8.tar.gz

You should see the following output:

Using python /usr/bin/python
Installing management pack
Ambari Server 'install-mpack' completed successfully.

The management pack has now been added to Ambari.

=> Add the Solr service, either during initial cluster installation using the Ambari installation wizard or after cluster deployment.
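The steps above can be collected into a single script. This is a minimal sketch, not a definitive procedure: the mpack version and URL are taken from the post above, and it assumes you run it as root on the Ambari Server host.

```shell
#!/bin/bash
# Sketch: download and install the HDP Search (Solr) management pack,
# using the mpack URL quoted in this thread. Run on the Ambari Server host.
set -euo pipefail

MPACK_URL="http://public-repo-1.hortonworks.com/HDP-SOLR/hdp-solr-ambari-mp/solr-service-mpack-2.2.8.tar.gz"
MPACK_FILE="/tmp/$(basename "$MPACK_URL")"

# 1. Stage the mpack in a temporary directory (/tmp here).
wget -O "$MPACK_FILE" "$MPACK_URL"

# 2. Register the mpack with Ambari Server.
ambari-server install-mpack --mpack="$MPACK_FILE"

# 3. Restart Ambari Server so the new service definition is picked up.
ambari-server restart
```

After the restart, Solr should appear in the "Add Service" wizard in the Ambari UI.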



Master Mentor

@Kishore Kumar

You can use the HDF management pack and Ambari to add HDF services to an HDP cluster.

You can install the HDF "mpack" to include the HDF components in the HDP cluster, as described in the following link:

https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_installing-hdf-on-hdp/content/ch_insta...


After installing the mpack you should be able to add HDF components such as NiFi to the Ambari cluster:

https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_installing-hdf-on-hdp/content/ch_add-h...
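In command form, the HDF mpack install follows the same pattern as the Solr one. A sketch only: the mpack tarball URL is a placeholder — take the real one for your HDF version from the installation guide linked above.

```shell
#!/bin/bash
# Sketch: add HDF services (e.g. NiFi) to an existing HDP cluster by
# installing the HDF management pack. Run as root on the Ambari Server host.
set -euo pipefail

# Placeholder: substitute the hdf-ambari-mpack tarball URL for your
# HDF version, from the HDF installation documentation.
HDF_MPACK_URL="<hdf-ambari-mpack tarball URL>"

# Stage and register the mpack, then restart Ambari Server so the
# HDF services show up in the "Add Service" wizard.
wget -O /tmp/hdf-ambari-mpack.tar.gz "$HDF_MPACK_URL"
ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack.tar.gz --verbose
ambari-server restart
```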


Contributor

Thanks @Jay SenSharma

With this I am able to get NiFi.

Solr still didn't show up in "Choose Services", though.


Master Mentor

@Kishore Kumar

You can use "Ambari Infra": https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-operations/content/ch_ambari_infr...

The Ambari Infra service has only one component: the Infra Solr Instance. The Infra Solr Instance is a fully managed Apache Solr installation. By default, a single-node SolrCloud installation is deployed when the Ambari Infra service is chosen for installation; however, you should install multiple Infra Solr Instances so that you have distributed indexing and search for Atlas, Ranger, and LogSearch (Technical Preview).

To install multiple Infra Solr Instances, you simply add them to existing cluster hosts through Ambari's +Add Service capability. The number of Infra Solr Instances you deploy depends on the number of nodes in the cluster and the services deployed.

Note that the Infra Solr Instance is intended for use only by HDP components; use by third-party components or applications is not supported.



Contributor

@Jay SenSharma While installing Solr from Ambari, I am getting the following exception:

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install lucidworks-hdpsearch' returned 1.
Error: failure: repodata/a2e514aba11a5426bbdbb7744c3d68f3ca97510b-primary.sqlite.bz2 from Delivery-SharedHosting-RedHat-6.5Server-x86_64: [Errno 256] No more mirrors to try.
stdout: .

Master Mentor

@Kishore Kumar

Similar to another thread: https://community.hortonworks.com/questions/122908/having-issue-while-setting-up-the-cluster-using-a...

Execution of '/usr/bin/yum -d 0 -e 0 -y install lucidworks-hdpsearch' returned 1. Error: failure: 
repodata/a2e514aba11a5426bbdbb7744c3d68f3ca97510b-primary.sqlite.bz2

Please try a yum cleanup:

yum clean all
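If `yum clean all` alone does not help, rebuilding the metadata cache end to end sometimes does. A minimal sketch, run as root:

```shell
# Remove all cached repo metadata and packages.
yum clean all

# "yum clean all" can leave stale files behind; clearing the cache
# directory forces a full re-download of repodata on the next run.
rm -rf /var/cache/yum

# Rebuild the metadata cache and confirm every enabled repo answers.
yum makecache
yum repolist enabled
```

If `yum repolist enabled` errors out on one specific repo, that repo's metadata (or its baseurl) is the problem, not the HDP Search package itself.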


Contributor

@Jay SenSharma Tried that option; it didn't work.

Master Mentor

@Kishore Kumar

Your repodata does not seem to be right. Either you are using a private offline repo which has a corrupted entry, or you are using an incorrect repo.

Can you please check if you are able to do a wget as follows against your repo?

Example:

# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/repodata/7fcacadb44b1a860fb704...

# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/repodata/repomd.xml


In your case the following path seems to be incorrect, or your "/etc/yum.repos.d/xxxx.repo" is not correct:

repodata/a2e514aba11a5426bbdbb7744c3d68f3ca97510b-primary.sqlite.bz2
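One way to narrow this down is to test whether the failing repo can serve its own metadata, and to rule it out of the transaction. A sketch, using the repo id from the error message above; the wget line is a template you fill in with the baseurl from your .repo file:

```shell
# The error names the repo: Delivery-SharedHosting-RedHat-6.5Server-x86_64.
# Find which .repo file defines it and inspect its baseurl.
grep -rl "Delivery-SharedHosting" /etc/yum.repos.d/

# repomd.xml is the index of the repodata directory; if this fetch
# fails, the baseurl (or the mirror behind it) is broken.
# wget <baseurl-from-your-repo-file>/repodata/repomd.xml

# If that repo is unrelated to HDP Search, exclude it from the install:
yum --disablerepo="Delivery-SharedHosting-RedHat-6.5Server-x86_64" \
    install lucidworks-hdpsearch
```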


New Contributor

What is the difference between installing Solr using the MPACK and installing Solr using the Ambari Infra service for Ranger? Is Ambari Infra specific to Ranger, or can I use either of them (MPACK or Ambari Infra) as alternatives to each other? Or is one of them used for older versions of Hadoop? @Jay SenSharma