
livy-on-spark3 interpreter died



Trying to set up Zeppelin + livy-on-spark3.

Livy-on-spark2 is up and running.

spark3.sql runs fine, but any spark3.pyspark code returns the error "Interpreter died:".

What could be wrong?


New Contributor

I have the exact same issue. Any hints?

Community Manager

@DarekLinek @Scout Welcome to the Cloudera Community!

To help you get the best possible solution, I have tagged our Spark experts @Bharati and @Gopinath, who may be able to assist you further.

Please keep us updated on your post, and we hope you find a satisfactory solution to your query.


Diana Torres,
Community Moderator

Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.

New Contributor

I have also encountered this weird "Interpreter died" situation when using Livy pyspark on Spark 3. I lost two days debugging this error, and I found out that we need to:

1. In the "Spark 3" service on the Cloudera Manager portal, set the PySpark Python and Python driver executable configuration in the section "Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-defaults.conf":
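The original post's screenshot of the snippet is not shown here. A sketch of what that safety valve typically contains, using the standard Spark properties for the executor-side and driver-side Python executables; the path /usr/bin/python3.7 is an assumption, adjust it to where Python is installed on your nodes:

```
spark.pyspark.python=/usr/bin/python3.7
spark.pyspark.driver.python=/usr/bin/python3.7
```

The same Python version must exist at that path on every node that runs executors, or the executors will fail to start.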


2. Restart the "Livy for Spark 3" service in Cloudera Manager.

3. Restart the Zeppelin Livy Interpreter.

=> After the restart, Zeppelin's Livy interpreter on Spark 3 can execute %pyspark paragraphs.

=> Note that the Python version used with Livy on Spark 3 in Zeppelin must be 3.7 or lower; otherwise, executing a pyspark script in Zeppelin will generate the error "required field "type_ignores" missing from Module".
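That error message comes from a change in Python 3.8: ast.Module gained a required type_ignores field, so code written against the 3.7 AST API (as the Livy interactive session apparently was) fails to compile statements on newer interpreters. The symptom can be reproduced in plain Python, no Livy or Zeppelin required:

```python
import ast
import sys

# Build an ast.Module the Python 3.7 way, without the type_ignores
# field that became mandatory in Python 3.8. Compiling it on 3.8+
# reproduces the error seen in Zeppelin.
stmt = ast.parse("x = 40 + 2").body[0]
module = ast.Module(body=[stmt])  # note: no type_ignores=[]

try:
    compile(module, "<zeppelin>", "exec")
    print("compiled fine (Python <= 3.7 behavior)")
except TypeError as err:
    # On Python 3.8+ this reports:
    # required field "type_ignores" missing from Module
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {err}")
```

This is why pinning the Livy/executor Python to 3.7 or lower (step 1 above) makes the interpreter stop dying.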