While experimenting with Kafka-Storm streaming analysis, I often ran into this error:

java.lang.RuntimeException: Could not find leader nimbus from seed hosts

When it happened, I could not even kill the affected topology through the Storm UI or the command line.
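For context, this is the standard command-line way to kill a topology; with the Nimbus leader lost, it fails with the same error (the topology name below is a placeholder):

```shell
# Ask Nimbus to kill the topology, waiting 0 seconds between
# deactivation and shutdown. "mytopology" is a placeholder name;
# with no Nimbus leader this fails with the same
# "Could not find leader nimbus" error.
storm kill mytopology -w 0
```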
I believe this happens because stale topology data is left behind in Storm's local state and in ZooKeeper. My solution is to clean up the relevant directories manually. The idea is borrowed from the bash files in our amazing "trucking demo".
Step 1: Clean up files in the Storm local directory
# Paths from an Ambari-managed install may start with /mnt
if [ -d "/mnt/hadoop/storm" ]; then
    rm -rf /mnt/hadoop/storm/supervisor/isupervisor/*
    rm -rf /mnt/hadoop/storm/supervisor/localstate/*
    rm -rf /mnt/hadoop/storm/supervisor/stormdist/*
    rm -rf /mnt/hadoop/storm/supervisor/tmp/*
    rm -rf /mnt/hadoop/storm/workers/*
    rm -rf /mnt/hadoop/storm/workers-users/*
    # If this node also runs Nimbus, its stormdist cache likely needs
    # cleaning too (as in the /hadoop branch below)
    rm -rf /mnt/hadoop/storm/nimbus/stormdist/*
fi
# Paths in the sandbox may start with /hadoop
if [ -d "/hadoop/storm" ]; then
    rm -rf /hadoop/storm/supervisor/isupervisor/*
    rm -rf /hadoop/storm/supervisor/localstate/*
    rm -rf /hadoop/storm/supervisor/stormdist/*
    rm -rf /hadoop/storm/supervisor/tmp/*
    rm -rf /hadoop/storm/workers/*
    rm -rf /hadoop/storm/workers-users/*
    rm -rf /hadoop/storm/nimbus/stormdist/*
fi
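Since stale topology data can also live in ZooKeeper, a follow-up cleanup there is usually needed as well. A minimal sketch, assuming an HDP-style install where the ZooKeeper client lives under /usr/hdp/current/zookeeper-client, ZooKeeper listens on localhost:2181, and Storm keeps its znodes under the default /storm root (stop the Storm services first, and adjust paths for your cluster):

```shell
# Recursively delete Storm's znode tree from ZooKeeper.
# The zkCli.sh path and server address below are assumptions;
# older ZooKeeper clients use "rmr", ZooKeeper 3.5+ uses "deleteall".
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 rmr /storm
```

After both cleanups, restart ZooKeeper and the Storm services (Nimbus, Supervisors, UI); the stale topologies should be gone.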