
Storm not starting all executors, tasks and workers


I'm trying to increase the performance of a Storm topology by adding workers, executors and tasks. The problem is that no matter what I configure, I never get more than 9 workers, 8 executors and 8 tasks. I could not find anything in the logs that helps me identify the problem.

I run Storm on 8 nodes; each node still has enough free memory to host additional workers, and I see no errors for the topology in the Storm UI, such as out-of-memory errors.

What can limit the number of workers, executors and tasks of a Storm topology?

The log of the Storm deployment shows the correct parameters (e.g. 16 workers, 64 executors and 64 tasks), but the Storm UI only ever shows 9 workers, 8 executors and 8 tasks. I've tried even higher numbers with the same result.
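In case it helps, the parallelism is set roughly as sketched below (a minimal sketch; the component names and the spout/bolt classes are illustrative placeholders, not my actual topology, and the numbers match the 16/64 example above):

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class MyTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Third argument is the parallelism hint = initial executor count.
        // setNumTasks allows more tasks than executors (placeholder classes).
        builder.setSpout("kafka-spout", new MyKafkaSpout(), 16)
               .setNumTasks(64);
        builder.setBolt("process-bolt", new MyProcessBolt(), 16)
               .setNumTasks(64)
               .shuffleGrouping("kafka-spout");

        Config conf = new Config();
        conf.setNumWorkers(16);  // number of worker JVMs across the cluster

        StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
    }
}
```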

One potential cause might be that the Kafka topic feeding the Storm spout is not partitioned; this is a limitation of Kafka Connect, which does not (yet) support partitioning. Could this be the only cause?

Thanks in advance,

Egbert Westerveld
