While experimenting with Kafka-Storm streaming analysis, I often ran into this error:
java.lang.RuntimeException: Could not find leader nimbus from seed hosts
When it happened, I could not even kill the affected topology through the Storm UI or the command line.
I believe the cause is stale topology data left behind in Storm's local directories and in ZooKeeper. My workaround is to clean up those directories manually, an idea borrowed from the bash scripts in our amazing "trucking demo".
Step 1: Clean up files in the Storm local directory
# paths from an Ambari install may start with /mnt
if [ -d "/mnt/hadoop/storm" ]; then
    rm -rf /mnt/hadoop/storm/supervisor/isupervisor/*
    rm -rf /mnt/hadoop/storm/supervisor/localstate/*
    rm -rf /mnt/hadoop/storm/supervisor/stormdist/*
    rm -rf /mnt/hadoop/storm/supervisor/tmp/*
    rm -rf /mnt/hadoop/storm/workers/*
    rm -rf /mnt/hadoop/storm/workers-users/*
fi
# paths on the sandbox may start with /hadoop
if [ -d "/hadoop/storm" ]; then
    rm -rf /hadoop/storm/supervisor/isupervisor/*
    rm -rf /hadoop/storm/supervisor/localstate/*
    rm -rf /hadoop/storm/supervisor/stormdist/*
    rm -rf /hadoop/storm/supervisor/tmp/*
    rm -rf /hadoop/storm/workers/*
    rm -rf /hadoop/storm/workers-users/*
    rm -rf /hadoop/storm/nimbus/stormdist/*
fi
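The two near-identical blocks above can be collapsed into one loop over the candidate base directories. This is only a sketch of the same cleanup; the subdirectory list is taken from the post, and you should adjust the base paths to whatever your storm.local.dir actually points at:

```shell
#!/bin/sh
# clean_storm_state: remove stale Storm local state under each base dir given.
# Directories that do not exist are skipped; the subdirectories themselves
# are kept, only their contents are removed.
clean_storm_state() {
    for base in "$@"; do
        [ -d "$base" ] || continue
        rm -rf "$base"/supervisor/isupervisor/* \
               "$base"/supervisor/localstate/* \
               "$base"/supervisor/stormdist/* \
               "$base"/supervisor/tmp/* \
               "$base"/workers/* \
               "$base"/workers-users/* \
               "$base"/nimbus/stormdist/*
    done
}

# Ambari installs often use /mnt/hadoop/storm; the sandbox uses /hadoop/storm.
clean_storm_state /mnt/hadoop/storm /hadoop/storm
```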
Step 2: Clear the Storm state in ZooKeeper.
1) Stop Storm.
2) Delete Storm's znodes with the ZooKeeper command-line client.
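A sketch of step 2, assuming the default storm.zookeeper.root of /storm, an HDP-style zkCli.sh location, and a ZooKeeper server on localhost:2181 (adjust all three for your cluster):

```shell
# Remove Storm's znode tree from ZooKeeper while Storm is stopped.
# On ZooKeeper 3.4.x (HDP-era) the recursive delete command is "rmr";
# newer ZooKeeper releases call it "deleteall".
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 <<'EOF'
rmr /storm
quit
EOF
```

Storm will recreate the /storm znode with fresh state when it starts back up.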
Step 3: Restart Storm.
This solution has fixed most of the nimbus failures I have run into.
Thanks to Ali Bajwa for the kind help!
Is the choice between "/mnt/hadoop/storm" and "/hadoop/storm" determined by the value of the "storm.local.dir" parameter?
@Hajime Were you able to find the answer?