
do not see snort index in elasticsearch


Hi

I transferred alert.csv (Snort alerts) to Metron via NiFi site-to-site. I see warnings in /var/log/storm/snort/worker.log:

2018-07-25 13:00:17.308 o.a.s.k.s.i.OffsetManager [WARN] topic-partition [snort-0] has unexpected offset [4280]. Current committed Offset [25313]

2018-07-25 13:00:17.307 o.a.s.k.s.i.OffsetManager [WARN] topic-partition [snort-0] has unexpected offset [4255]. Current committed Offset [25313]
2018-07-25 13:00:17.307 o.a.s.k.s.i.OffsetManager [WARN] topic-partition [snort-0] has unexpected offset [4256]. Current committed Offset [25313]
2018-07-25 13:00:17.307 o.a.s.k.s.i.OffsetManager [WARN] topic-partition [snort-0] has unexpected offset [4257]. Current committed Offset [25313]
...

I see data in the topology stats:

storm.png

and I don't see any Snort index in Elasticsearch. I see the following

[2018-07-25 11:57:55,237][DEBUG][action.search            ] [localhost.localdomain] [squid_index_2018.07.10.17][3], node[fG1aq-n_TMGrsJVyYHd-8g], [P], v[22], s[STARTED], a[id=Is2mCJjFStyQgtfTcbXUoA]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7074af68] lastShard [true]
RemoteTransportException[[localhost.localdomain][10.0.1.68:9300][indices:data/read/search[phase/query]]]; nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@76a8b7ca on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@311a0a69[Running, pool size = 13, active threads = 13, queued tasks = 999, completed tasks = 9134]]];
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@76a8b7ca on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@311a0a69[Running, pool size = 13, active threads = 13, queued tasks = 999, completed tasks = 9134]]]
	at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:50)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:85)
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:372)
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:327)
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:299)
	at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:142)
	at org.elasticsearch.action.search.SearchQueryThenFetchAsyncAction.sendExecuteFirstPhase(SearchQueryThenFetchAsyncAction.java:66)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.performFirstPhase(AbstractSearchAsyncAction.java:144)
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:126)
	at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:115)
	at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:47)
	at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
	at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
	at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
	at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
	at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:582)
	at org.elasticsearch.rest.action.search.RestSearchAction.handleRequest(RestSearchAction.java:85)
	at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
	at org.elasticsearch.rest.RestController.executeHandler(RestController.java:205)
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
	at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
	at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
	at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:449)
	at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:61)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
	at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
...

in /var/log/elasticsearch/metron.log.

Is this normal? If not, how can I solve it?

Thanks in advance for any help with this problem.


Re: do not see snort index in elasticsearch

What does your architecture look like, with respect to Metron, Storm, Elasticsearch, and NiFi? I don't know much about Storm or Metron, but the relevant part of the Elasticsearch error appears to be:

EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@311a0a69[Running, pool size = 13, active threads = 13, queued tasks = 999, completed tasks = 9134]]];

It shows the queue capacity at 1000 and the queued tasks at 999, so I'm guessing any further tasks are being rejected until the queue has room for them. You may need to reduce the throughput so that ES can keep up with the data. Also, if there is a Site-to-Site listener in one of those systems (e.g., Metron or Storm), it should recognize when no more data can be handled at the moment and return the appropriate response code indicating that backpressure is being applied.
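
To confirm whether the search thread pool is really the bottleneck, you can watch its counters while the topology is running. Here's a minimal sketch (assuming Python 3 and Elasticsearch listening on localhost:9200; the exact column names can vary slightly between ES versions) that polls the _cat/thread_pool API every few seconds:

# Poll the Elasticsearch _cat/thread_pool API and print the search pool's
# active/queue/rejected counters. Adjust the host/port for your cluster.
import time
import urllib.request

ES_URL = ("http://localhost:9200/_cat/thread_pool"
          "?v&h=host,search.active,search.queue,search.rejected")

for _ in range(12):  # watch for about a minute
    with urllib.request.urlopen(ES_URL) as resp:
        # The _cat API returns plain text, so just print it as-is.
        print(resp.read().decode("utf-8"))
    time.sleep(5)

If the queue stays pinned near its capacity and the rejected count keeps climbing, the cluster simply can't keep up with the combined indexing and search load, and throttling the writers (or scaling out, or raising the search queue size in elasticsearch.yml at the cost of more memory) would be the next thing to try.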