Support Questions
Find answers, ask questions, and share your expertise

Make Solr use HDFS

Expert Contributor

Hello ,

 

I have installed Cloudera Manager 5, and using it I installed the Solr, ZooKeeper, HDFS, and YARN services.

I am trying to do the following:

 

1. Load data into HDFS

2. Access the data in HDFS using Solr
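For step 1, loading data into HDFS is usually done with the HDFS CLI. A minimal sketch, assuming placeholder paths and file names:

```shell
# Hypothetical paths: copy local documents into HDFS for later indexing.
hdfs dfs -mkdir -p /user/bala/indir
hdfs dfs -put ./docs/sample.pdf /user/bala/indir/
hdfs dfs -ls /user/bala/indir
```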

 

Please suggest the steps to achieve this.

 

Thanks

Bala


Explorer

Hi Bala,

 

  I have found that data is very rarely truly unstructured. What kind of data is it? Typically, there is some form of structure to it. Can you send me a sample file? kevin@cloudera.com

Expert Contributor
Kevin, the data consists of rich documents (txt, pdf, doc files). It does not follow any particular structure. Is it possible to extract the data out of this format?
Thanks
Bala

Explorer

Bala,

 

  It absolutely is. I was just giving you a sample set of instructions so you could play with a CSV file ingest. You will be looking to use Apache Tika. The good news is there is a morphline to help you with that. The bad news is you will have to write that morphline. I would recommend starting here: https://github.com/cloudera/search#cdk-morphlines-solr-cell
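For rich documents, such a morphline typically chains Tika extraction (the solrCell command) with loadSolr. A minimal sketch, assuming a SOLR_LOCATOR variable defined elsewhere in the file; note the importCommands package prefix differs between CDK-era releases ("com.cloudera.**") and later Kite releases ("org.kitesdk.**"):

```
# Sketch of a Tika-based morphline for rich documents; the SOLR_LOCATOR
# and import packages are placeholders that depend on your CDK/Kite version.
morphlines : [
  {
    id : morphline1
    importCommands : ["com.cloudera.**", "org.apache.solr.**"]
    commands : [
      # Guess the MIME type so Tika picks the right parser
      { detectMimeType { includeDefaultMimeTypes : true } }
      # Run Apache Tika (SolrCell) to extract text and metadata
      {
        solrCell {
          solrLocator : ${SOLR_LOCATOR}
          captureAttr : true          # map Tika metadata to Solr fields
          fmap : { content : text }   # extracted body goes to the "text" field
        }
      }
      # Give every document a unique id if the input has none
      { generateUUID { field : id } }
      # Drop fields that are not in the Solr schema
      { sanitizeUnknownSolrFields { solrLocator : ${SOLR_LOCATOR} } }
      # Send the record to Solr
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
    ]
  }
]
```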

Expert Contributor
Kevin, in your earlier reply you mentioned the morphline. So should I proceed with the earlier steps you asked me to follow, or should I go through this first? https://github.com/cloudera/search#cdk-morphlines-solr-cell
Thanks
Bala

Explorer

You can follow the same steps I sent you, but you will need to switch to the cdk-morphlines-solr-cell morphline (https://github.com/cloudera/search#cdk-morphlines-solr-cell) instead of the CSV one in the example.

Expert Contributor
Kevin, how do I use the CDK?
Thanks
Bala

Expert Contributor
Hello Kevin ,

I am still not able to figure out how to use the CDK you mentioned 😞 .. I need help ..

Thanks
Bala

Expert Contributor

Kevin, I followed the steps. It works as expected in a dry run, but when I run without the --dry-run argument, it stops at this step 😞 😞

 

770  [main] INFO  org.apache.solr.cloud.ZkController  – Write file /tmp/1404354031741-0/velocity/facet_fields.vm
771  [main] INFO  org.apache.solr.cloud.ZkController  – Write file /tmp/1404354031741-0/elevate.xml
773  [main] INFO  org.apache.solr.cloud.ZkController  – Write file /tmp/1404354031741-0/admin-extra.menu-bottom.html
774  [main] INFO  org.apache.solr.cloud.ZkController  – Write file /tmp/1404354031741-0/schema.xml
897  [main] INFO  org.apache.solr.hadoop.MapReduceIndexerTool  – Indexing 1 files using 1 real mappers into 1 reducers

 

It stops at line 897 every time. I restarted and tried again; still the same.

 

Any help would be appreciated.
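For reference, a typical invocation looks something like the sketch below; every jar path, host, collection name, and HDFS path is a placeholder. When the tool stops right after the "Indexing N files using N real mappers" line, the MapReduce job has usually been submitted but is not being scheduled, so it is worth checking its state in YARN:

```shell
# Hypothetical invocation; all paths and hosts below are placeholders.
hadoop jar /usr/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file morphline.conf \
  --output-dir hdfs://namenode:8020/tmp/outdir \
  --zk-host zk01.example.com:2181/solr \
  --collection collection1 \
  hdfs://namenode:8020/user/bala/indir

# If it hangs after the "Indexing ..." log line, check whether the submitted
# MapReduce job is running or stuck in ACCEPTED (e.g. no free YARN containers):
yarn application -list
```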

 

Thanks

Bala

New Contributor

Is an incremental load into Solr possible? Meaning: if the dataset being loaded contains unique keys (with or without updates in other fields of the record) that are already present in the Solr collection, I want the existing records to be updated and the new records to be inserted. Could you please let me know whether this is possible in Solr? If yes, please advise on how to achieve it.
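In Solr, adding a document whose uniqueKey already exists replaces the stored document, so a plain re-add behaves as an upsert; partial field changes are also possible via atomic updates when the fields are stored. A minimal sketch of the overwrite-by-key behavior in plain Python (not a Solr client; the field and document values are illustrative):

```python
# Models Solr's default uniqueKey semantics: re-adding a document whose "id"
# already exists replaces the old document, so an ordinary add is an upsert.

def upsert(index, docs, key="id"):
    """Each incoming doc replaces any existing doc sharing the same
    uniqueKey; otherwise it is inserted."""
    for doc in docs:
        index[doc[key]] = doc  # overwrite-by-key, like Solr's default add
    return index

index = {}
upsert(index, [{"id": "1", "title": "old title"}, {"id": "2", "title": "b"}])
upsert(index, [{"id": "1", "title": "new title"}, {"id": "3", "title": "c"}])

print(len(index))           # 3 documents: id "1" was replaced, not duplicated
print(index["1"]["title"])  # "new title"
```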