Support Questions
Find answers, ask questions, and share your expertise

Custom Storm processor creates a bottleneck


Hello community,

We created a custom Storm processor that decodes records from ASN.1 to XML (ASN12XLTE), which is very CPU-intensive.

Currently we have a single server where everything is installed (Ambari, Kafka, Storm, etc.). Which parameters can I check and tune so that this processor does not become a bottleneck?

Thank you very much for your help



That pseudo-cluster itself is a scalability bottleneck. 😉 Storm likes to scale by running many Supervisor (worker) processes. As for the specific stats on your component, you can drill into your topology in the Storm UI, then drill into your bolt, to gauge for yourself whether it is keeping up. You'll get rolling statistics on how long it takes to process tuples (execute latency, capacity, and so on). You could also scale up the number of executors the bolt has, but again, the single-server pseudo-cluster is likely going to be your first bottleneck.
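Not from the original answer, but as a rough guide to reading those Storm UI numbers: the UI's "capacity" figure for a bolt is approximately executed count times execute latency divided by the measurement window, and values near 1.0 mean the bolt is saturated. A minimal sketch, with hypothetical metric values (not taken from the thread's screenshots):

```python
# Sketch: estimate how saturated a Storm bolt is from its UI metrics.
# All numbers below are made-up placeholders for illustration.

def bolt_capacity(executed, execute_latency_ms, window_ms):
    """Approximation of Storm UI's 'capacity': the fraction of the
    window the bolt spent executing tuples. Near (or above) 1.0
    means the bolt is a bottleneck."""
    return (executed * execute_latency_ms) / window_ms

# e.g. 500,000 tuples at 1.5 ms each over a 10-minute (600,000 ms) window
cap = bolt_capacity(executed=500_000, execute_latency_ms=1.5, window_ms=600_000)
print(round(cap, 2))  # 1.25 -> over capacity; add executors or workers
```

If the value stays well above 1.0 after adding executors, the machine itself (cores, GC, disk) is the limit rather than the topology configuration.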



Hi, Lester.

I appreciate very much that you have responded.

You are right, a single server is very little :), but it is what I have for now; if the project goes ahead, our manager will provide us with more workers.

Below are screenshots of the topology processing.

Currently the server has 56 cores and 256 GB of RAM.



The idea is to decode 1,600 files containing about 4,000 records each on average, that is 6,400,000 records in total; these must be processed and sent through NiFi to another SFTP server in less than 10 minutes.
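Putting that target in numbers (a back-of-the-envelope sketch; the per-record decode time is an assumed placeholder that should be measured from the Storm UI): 1,600 files of 4,000 records is 6,400,000 records, so a 10-minute window means roughly 10,667 records per second, and the decoder needs enough parallelism to cover that rate:

```python
# Back-of-the-envelope sizing for the ASN.1 -> XML decode stage.
# decode_ms_per_record is an assumption, not a measured figure.
import math

files = 1_600
records_per_file = 4_000
window_s = 10 * 60                                # 10-minute deadline

total_records = files * records_per_file          # 6,400,000
required_rate = total_records / window_s          # records per second

decode_ms_per_record = 2.0                        # assumed per-record latency
per_executor_rate = 1000 / decode_ms_per_record   # records/second per executor
executors_needed = math.ceil(required_rate / per_executor_rate)

print(total_records, round(required_rate), executors_needed)
# 6400000 10667 22
```

At an assumed 2 ms per record, roughly 22 decoder executors are needed just for the sustained rate, before any headroom for bursts, acking overhead, or the NiFi/SFTP leg.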

Thanks for the help 😉
