What is the best practice for multiple Python environments with the Spark2 interpreter in Zeppelin?

New Contributor

Hello,

I was wondering how one would best implement the case where multiple users each have their own Python environment, with their respective packages installed, in a Spark2 interpreter. I have not implemented or tested the two approaches below yet, but I wanted to check whether I am on a completely wrong path and there is a very easy and straightforward solution.

 

Two ways I could think of would be the following:

1.) For each notebook, set the zeppelin.python property

If the interpreter is instantiated per user, you could run the following at the start of each notebook:

 

%spark2.conf
zeppelin.python </your/users/python_env>

 

This would add some complexity for the users, because they would have to keep track of their environment paths, but it would otherwise be very easy to implement (if it really works without hiccups).
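To make this concrete, what I have in mind is something like the two paragraphs below (untested; the path is a placeholder, and I am assuming zeppelin.pyspark.python is the matching property for the %spark2.pyspark sub-interpreter):

%spark2.conf
# placeholder path to the user's own environment
zeppelin.python /home/alice/envs/py36/bin/python
# assumption: this is the property %spark2.pyspark reads for its python binary
zeppelin.pyspark.python /home/alice/envs/py36/bin/python

A later paragraph could then confirm which binary the session actually picked up:

%spark2.pyspark
import sys
print(sys.executable)  # should print the per-user environment path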

 

2.) Create a separate interpreter for every user

This approach would be to create an interpreter for every user (%spark2 -> %spark2userName).

This would be relatively easy to explain to users, and they would not need to remember any paths. They could also support themselves, since the universal instruction "just add your username after %spark2" would suffice.

I have only dipped my toes into creating new interpreters, so I have no idea how easy or complicated it would be to clone one.

Can you just copy one interpreter's fields and create a copy under a new name? I could not find much information on this, so any pointers would help.
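The closest thing to cloning I could find is Zeppelin's interpreter REST API, so perhaps a small script could copy the spark2 setting under a new name. A rough, untested sketch (the URL, the shape of the properties field, and the exact field names are assumptions based on the generic interpreter REST API, not verified on this Zeppelin version):

import requests

ZEPPELIN = "http://zeppelin-host:9995"  # placeholder URL

# Fetch all interpreter settings and find the existing spark2 one.
settings = requests.get(f"{ZEPPELIN}/api/interpreter/setting").json()["body"]
spark2 = next(s for s in settings if s["name"] == "spark2")

# Copy its fields under a new name and point pyspark at the user's environment.
# Note: on some Zeppelin versions "properties" entries are {name, value, type}
# objects rather than plain strings, so this may need adjusting.
props = dict(spark2["properties"])
props["zeppelin.pyspark.python"] = "/home/alice/envs/py36/bin/python"  # placeholder

clone = {
    "name": "spark2alice",  # users would then write %spark2alice
    "group": spark2["group"],
    "properties": props,
    "dependencies": spark2["dependencies"],
    "option": spark2["option"],
}
requests.post(f"{ZEPPELIN}/api/interpreter/setting", json=clone).raise_for_status()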

 

Of course, my approaches might be stupid, so if you have an actual solution I would be glad to read about it.

1 ACCEPTED SOLUTION

Master Collaborator

@LegallyBind 

For each Python environment, you need to create a separate interpreter.


5 REPLIES

Master Collaborator

Community Manager

@LegallyBind, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.



Regards,

Vidya Sargur,
Community Manager



New Contributor

Thank you for the reply, @RangaReddy.

The article describes how to create multiple interpreters for multiple versions of Python.

 

So is this the suggested best practice when one needs multiple Python environments for Spark interpreters?

Master Collaborator

@LegallyBind 

For each Python environment, you need to create a separate interpreter.
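For example, each interpreter setting would point zeppelin.pyspark.python at its own environment, along these lines (interpreter names and paths below are placeholders):

In a spark2py36 interpreter setting:
zeppelin.pyspark.python /opt/envs/py36/bin/python

In a spark2py27 interpreter setting:
zeppelin.pyspark.python /opt/envs/py27/bin/python

Users would then select an environment by writing %spark2py36 or %spark2py27 in their notebooks.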

Community Manager

@LegallyBind, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.



Regards,

Vidya Sargur,
Community Manager

