
Solr collection functionality

Hi Guys,

I am doing a POC for a customer to show both (1) transaction data and (2) log data in a Banana visualization on top of Solr.

1. Transaction data: we built the pipeline GoldenGate -> Kafka -> Spark Streaming -> HDFS -> Solr -> Banana.

Once the transaction data is stored in HDFS, we index it into Solr with:

hadoop jar hadoop-lws-job-2.0.1-0-0-hadoop2.jar com.lucidworks.hadoop.ingest.IngestJob -DcsvDelimiter="," -Dlww.commit.on.close=true -cls com.lucidworks.hadoop.ingest.CSVIngestMapper -c test1 -i /user/csv/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat -s http://hostname:8984/solr

We did not change the schema file, so the fields created in Solr are field_1_s, id, _version_, field_2_s, field_3_s, field_4_s. Our data looks like "157,2900,15,1,26-02-18 07:06:54.000000000".
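Because the schema was left untouched, the ingest job fell back on generic dynamic field names (field_1_s, field_2_s, ...). One option, assuming a Solr 5+ managed schema on collection test1, is to register real field names through the Schema API before re-indexing. This is only a sketch that builds the JSON payload you would POST to http://hostname:8984/solr/test1/schema; the field names (txn_id, amount, etc.) are illustrative guesses for the sample row, not the customer's actual columns, and the p* point types require Solr 7+ (use trie types such as tlong/tint on older versions).

```python
import json

# Illustrative field names for the sample row
# "157,2900,15,1,26-02-18 07:06:54.000000000" -- replace with the real columns.
schema_payload = {
    "add-field": [
        {"name": "txn_id",   "type": "string", "stored": True},
        {"name": "amount",   "type": "plong",  "stored": True},
        {"name": "item_qty", "type": "pint",   "stored": True},
        {"name": "status",   "type": "pint",   "stored": True},
        {"name": "txn_time", "type": "string", "stored": True},
    ]
}

# POST this to the managed-schema endpoint, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        http://hostname:8984/solr/test1/schema -d @payload.json
print(json.dumps(schema_payload, indent=2))
```

After the fields exist, re-running the same IngestJob should index the columns under the named fields instead of the autogenerated *_s ones.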

2. Log data: when we ran the same command against the log file, it did not create useful indexes; it only created fields like file path, content type, and content-length. So we configured Logstash to send the log data to Solr instead.

Now we want to show both in one Banana dashboard, so once the log data is stored in Solr, we also send the transaction data to the same collection used for the log data. Here is the problem:

a. For the log data, an id is generated automatically in the id field. When we send the transaction data to that collection, the transaction documents' ids also land in that same id field, so we cannot tell the two kinds of documents apart.

So how can we visualize log and transaction data using a single Solr collection? Your guidance would be really helpful.
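A common pattern for mixing two document types in one collection is to stamp every document with a discriminator field (say doc_type) and prefix the ids so the two sources can never collide; each Banana panel can then be scoped with a filter query such as fq=doc_type:transaction. A minimal sketch, assuming this convention (the field name doc_type and the id prefixes are my choices, not anything Solr requires):

```python
def tag_doc(doc, doc_type):
    """Add a discriminator field and a type-prefixed id (assumed convention)."""
    tagged = dict(doc)
    tagged["doc_type"] = doc_type
    tagged["id"] = f"{doc_type}-{doc['id']}"
    return tagged

# One document from each pipeline, before sending to the shared collection.
txn = tag_doc({"id": "157", "amount": 2900}, "transaction")
log = tag_doc({"id": "9f3a", "message": "GET /index 200"}, "log")

# A Banana panel filtered with fq=doc_type:transaction would see only:
docs = [txn, log]
transactions = [d for d in docs if d["doc_type"] == "transaction"]
print(transactions)
```

With this in place, a single collection can back the whole dashboard: one set of panels filters on doc_type:transaction, the other on doc_type:log.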

Alternatively, is there a way to create separate indexes (collections) in Solr for the two data sets?
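If separate collections are preferable (one for transactions, one for logs), SolrCloud's Collections API can create them, and a Banana dashboard can still point different panels at different collections. The snippet below only assembles the CREATE URLs; the hostname comes from the post, while the collection names, numShards, and replicationFactor are placeholder assumptions, and SolrCloud mode is assumed.

```python
from urllib.parse import urlencode

SOLR = "http://hostname:8984/solr"

def create_collection_url(name, num_shards=1, replication_factor=1):
    """Build a Collections API CREATE call (SolrCloud mode assumed)."""
    params = urlencode({
        "action": "CREATE",
        "name": name,
        "numShards": num_shards,
        "replicationFactor": replication_factor,
    })
    return f"{SOLR}/admin/collections?{params}"

# One collection per data set, e.g. issued with curl or a browser.
for coll in ("transactions", "logs"):
    print(create_collection_url(coll))
```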

Thanks in advance