
Solr - Cloudera Manager Logs?

Contributor

Hello,

 

Has anyone had a Solr (Cloudera Search) configuration running that indexes all of the Cloudera logs? Is this possible?

I would like better log analysis and alerting than manually checking in Cloudera Manager...

 

Regards

Oliver

1 ACCEPTED SOLUTION

Contributor

Hello Oliver,

 

Yes, I think it is possible. 

You can set up a pipeline, for example with Flume: create a TailDirSource to ingest data from the log directory, then channel it to a MorphlineSolrSink where you transform it into Solr records.
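
To make this concrete, here is a minimal (untested) agent sketch; the agent/channel names, file pattern, and paths are placeholders you would adapt to your environment:

# Tail a log directory and index the events into Solr via a morphline
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Taildir source watching a log directory (placeholder pattern)
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /var/log/hadoop-hdfs/.*\\.log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

# The sink loads its transformation logic from the given morphline file
a1.sinks.k1.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
a1.sinks.k1.morphlineFile = /etc/flume-ng/conf/morphline.conf
a1.sinks.k1.channel = c1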

 

In your morphline script you can use grok commands to parse the log entries. I'm not aware of any out-of-the-box scripts for parsing CDH log files, but we have a blog post describing an example of processing syslog files, and the Grok Constructor app (https://grokconstructor.appspot.com) is very helpful for building the required grok expressions.
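
As an illustrative (untested) sketch, a grok command for a typical log4j-style line could look like this; the dictionary path, field names, and pattern are placeholders to adapt:

# Parses lines like: 2019-06-01 12:34:56,789 INFO org.example.SomeClass: message text
{
  grok {
    # Placeholder: point this at the grok-dictionaries directory shipped with Kite Morphlines
    dictionaryFiles : [/path/to/grok-dictionaries]
    expressions : {
      message : """%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVACLASS:class}: %{GREEDYDATA:body}"""
    }
  }
}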

 

Please note that Flume sources like TailDirSource usually do not support multiline input (which would be handy for stack traces). The source processes each line of the input file as a separate Flume event, and the morphline is invoked separately for each of them. Even though Morphlines has a readMultiLine command, it is not applicable here, since one invocation gets only a single line as input.

 

I found this GitHub repo which implements a multi-line Flume source: https://github.com/qwurey/flume-source-multiline

 

I have not tried this recently, but as an example you could try something like this in your Flume config:

 

# MultiLineExecSource comes from the repo above; lineStartRegex marks the start
# of a new event by a leading timestamp such as 2019-06-01 12:34:56,789
a3.sources.r3.type = com.urey.flume.MultiLineExecSource
a3.sources.r3.lineStartRegex = \\s?\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d:\\d\\d,\\d\\d\\d
a3.sources.r3.command = tail -F /tmp/testtaildir/mylog.log

And, for example, this command in your morphline:

 

{
  # Append stack-trace continuation lines to the previous record
  readMultiLine {
    regex : "(^.+Exception: .+)|(^\\s+at .+)|(^\\s+\\.\\.\\. \\d+ more)|(^\\s*Caused by:.+)"
    negate : false
    what : previous
    charset : UTF-8
  }
}

 

If you want batch indexing instead of near-real-time indexing, you can use the MapReduceIndexerTool or the Spark Crunch Indexer instead of Flume; both of them also work with Morphlines.
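
For reference, a MapReduceIndexerTool run looks roughly like this (the jar path, ZooKeeper ensemble, collection name, and HDFS paths are placeholders for your environment):

hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file morphline.conf \
  --output-dir hdfs://nameservice1/tmp/logs-outdir \
  --zk-host zk01.example.com:2181/solr \
  --collection logs \
  --go-live \
  hdfs://nameservice1/incoming/logs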

 

Best Regards,

Istvan

  

 

 

 
