When you use Spark from a Zeppelin notebook and, at the end, issue a context stop,
sc.stop()
it also stops the context used by other running Zeppelin notebooks, causing them to fail because there is no active Spark context. The notebooks appear to share a single SparkContext.
How can this be avoided?