What is the best practice for multiple python environments with Spark2 interpreter in Zeppelin
Created on 09-22-2021 06:41 AM - last edited on 09-22-2021 10:53 PM by VidyaSargur
Hello,
I was wondering how one would best implement the case where multiple users each have their own Python environment, with their respective packages installed, for a Spark2 interpreter. I have not implemented or tested the two approaches below yet, but I was wondering whether I am on a completely wrong path and there is a very easy and straightforward solution.
Two ways I could think of are the following:
1.) For each notebook, set the variable zeppelin.python
If the interpreter is instantiated per user, each user could run the following at the start of their notebook:
%spark2.conf
zeppelin.python </your/users/python_env>
This would add some complexity for the users, because they would have to keep track of their environment paths, but it would otherwise be very easy to implement (if it really works without hiccups).
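For example (untested, and the conda path is made up; I am also not certain zeppelin.python is even the right property for the spark2 group, it might be zeppelin.pyspark.python), the first paragraph of a notebook might be:

%spark2.conf
zeppelin.python /home/alice/conda/envs/alice-env/bin/python

and a quick check in a later paragraph would confirm which binary the driver actually picked up:

%spark2.pyspark
import sys
print(sys.executable)  # should print the path set above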
2.) Create a separate interpreter for every user
This approach would be to create one interpreter per user (%spark2 -> %spark2userName).
This would be relatively easy to explain to users, and they would not need to remember any paths. They could support themselves, since a universal instruction like "just add your username behind %spark2" would suffice.
I have only dipped my toes into creating new interpreters, so I have no clue how easy or complicated it would be to clone one.
Can you just copy one interpreter's fields and create a copy under a new name? I could not find much information on this, so any pointers would help.
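From what I can see in the Zeppelin docs, interpreter settings can be listed and created over the REST API (GET and POST on /api/interpreter/setting), so cloning might look something like this untested sketch (the host, user name, env path, and my choice of zeppelin.pyspark.python as the property to override are all assumptions on my part; the property layout also seems to differ between Zeppelin versions):

import requests

ZEPPELIN = "http://zeppelin-host:8080"                # made-up host
USER = "alice"                                        # made-up user
PY = "/home/alice/conda/envs/alice-env/bin/python"    # made-up env path

# Fetch all interpreter settings and use the existing spark2 one as a template.
settings = requests.get(f"{ZEPPELIN}/api/interpreter/setting").json()["body"]
template = next(s for s in settings if s["name"] == "spark2")

# Copy its fields under a new name.
clone = {
    "name": f"spark2{USER}",
    "group": template["group"],
    "properties": template["properties"],
    "dependencies": template.get("dependencies", []),
    "option": template["option"],
}

# Point the clone at the user's Python; newer Zeppelin versions wrap each
# property value in an object, older ones store it as a plain string.
prop = clone["properties"].get("zeppelin.pyspark.python")
if isinstance(prop, dict):
    prop["value"] = PY
else:
    clone["properties"]["zeppelin.pyspark.python"] = PY

# Create the new interpreter setting.
requests.post(f"{ZEPPELIN}/api/interpreter/setting", json=clone).raise_for_status()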
Of course, my approaches might be stupid, so if you have an actual solution, I am glad to read about it.
Created 10-06-2021 09:34 PM
Created 10-10-2021 11:11 PM
@LegallyBind, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
Regards,
Vidya Sargur, Community Manager
Created 10-11-2021 05:13 AM
Thank you for the reply, @RangaReddy.
The article describes how to create multiple interpreters for multiple versions of Python.
So is this the suggested best practice when one needs multiple Python environments for Spark interpreters?
Created 10-15-2021 11:55 PM
For each Python environment, you need to create a separate interpreter.
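For example (the interpreter name and paths here are only illustrative, not from the article): for a user alice, you could create a new setting named spark2alice on the Interpreter page, with group spark, and point it at her environment by setting something like:

PYSPARK_PYTHON           /home/alice/conda/envs/alice-env/bin/python
zeppelin.pyspark.python  /home/alice/conda/envs/alice-env/bin/python

She would then write %spark2alice.pyspark at the top of her paragraphs instead of %spark2.pyspark.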
Created 10-21-2021 10:36 AM
@LegallyBind, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
Regards,
Vidya Sargur, Community Manager
