
Storm - missing messages in pipeline

Explorer

Hi all. We are noticing that some messages get lost during Storm processing. Below is a brief outline of our pipeline.

[Pipeline diagram attachment: 13712-smouw.png]

We have messages coming into Kafka, which are then consumed by two different Kafka spouts in Storm. One spout writes the messages to a raw stream, and the other feeds the processing path. We need to store the output of Bolt 2 in HDFS and also send it downstream for further processing, which eventually ends up in ADLS as well.

All three HDFS bolts are configured to write to different folder structures in ADLS. In an ideal scenario I should see all three messages in ADLS (raw, the output of Bolt 2, and the output of Bolt 3). But we are noticing that the raw message always gets written, while sometimes only one of the outputs (Bolt 2 or Bolt 3) reaches ADLS. It is inconsistent which one is missing; sometimes both get written. There aren't any errors or exceptions in the logs.
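For context, an HDFS bolt writing to ADLS is typically configured roughly like the sketch below (storm-hdfs 1.0.x API; the store URL, path, and policy values here are illustrative assumptions, not our actual settings):

```java
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;

// Illustrative only: one of the three HDFS bolts, pointed at its own ADLS folder.
HdfsBolt rawHdfsBolt = new HdfsBolt()
        .withFsUrl("adl://example.azuredatalakestore.net")            // placeholder store URL
        .withFileNameFormat(new DefaultFileNameFormat().withPath("/data/raw/"))
        .withRecordFormat(new DelimitedRecordFormat().withFieldDelimiter("|"))
        .withRotationPolicy(new FileSizeRotationPolicy(64.0f, Units.MB)) // rotate at 64 MB
        .withSyncPolicy(new CountSyncPolicy(1000));                      // sync every 1000 tuples
```

Each of the three bolts would get its own `withFileNameFormat(...).withPath(...)` pointing at its folder.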

Has anyone run into such issues? Any insight will be appreciated. Are there any good monitoring tools other than Storm UI that give insight into what is going on? We are on HDInsight, hosted on Azure, using Storm 1.0.1.

Thanks.

1 ACCEPTED SOLUTION

Expert Contributor

@Laxmi Chary thanks for your question. Do you know if there's ever a case where the message from Bolt 2 doesn't get written but the one from Bolt 3 does? Are you anchoring tuples in your topology? For example, `collector.emit(tuple, new Values(...))` [the first argument is the anchor tuple].

Are you doing any microbatching in your topology?


11 REPLIES


Explorer

@Ambud Sharma Yes. There are cases where the message from Bolt 2 doesn't get written but the one from Bolt 3 does. However, if the Bolt 2 output is written, the Bolt 3 output is always there; the reverse is not true. Is that a problem?

We are not anchoring tuples. We are extending BaseBasicBolt, and from what I understand we need to anchor tuples only if we extend BaseRichBolt. Is that incorrect?
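For reference, our bolts follow the standard BaseBasicBolt shape, roughly like this sketch (the class name and method bodies are illustrative, not our actual code):

```java
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Illustrative sketch of the BaseBasicBolt pattern: execute() receives a
// BasicOutputCollector, and there is no explicit anchor argument or ack() call.
public class Bolt2 extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String transformed = input.getString(0).toUpperCase(); // placeholder logic
        collector.emit(new Values(transformed));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}
```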

No, we are not doing any microbatching.

Explorer

@Ambud Sharma wondering if you have more insight. Let me know if you need more details. TIA.

Expert Contributor
@Laxmi Chary

You should be anchoring. Without anchoring, Storm doesn't guarantee at-least-once semantics, which means delivery is best effort.

Anchoring is a factor in your delivery semantics. You should be using BaseRichBolt; otherwise you don't have a collector.
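A minimal sketch of what explicit anchoring and acking look like with BaseRichBolt (the class name and transformation logic are illustrative):

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class AnchoredBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String transformed = input.getString(0).toUpperCase(); // placeholder logic
            collector.emit(input, new Values(transformed)); // first argument anchors to the input tuple
            collector.ack(input);   // mark the input tuple fully processed
        } catch (Exception e) {
            collector.fail(input);  // fail the tuple so the spout can replay it
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}
```

With anchoring in place, a downstream failure propagates back to the spout, which can replay the message instead of silently dropping it.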

Explorer

@Ambud Sharma Doesn't BaseBasicBolt do that for you?

Explorer

This is what is mentioned in the Storm Applied book:

"The beauty of using BaseBasicBolt as our base class is that it automatically provides anchoring and acking for us." And we are using BaseBasicBolt. Are you saying that this is incorrect?

Expert Contributor

Yes, that is incorrect: https://github.com/apache/storm/blob/master/storm-core/src/jvm/org/apache/storm/topology/base/BaseBa... This bolt class doesn't even have a collector to acknowledge messages.

Explorer

OK. Do you know if there is any documentation on this?

Expert Contributor

Here's some example code to show you how explicit anchoring and acking can be done: https://github.com/Symantec/hendrix/blob/current/hendrix-storm/src/main/java/io/symcpe/hendrix/storm...