With Apache Spark 2.1, SparkR supports virtual environments. When you deploy Spark on YARN with virtualenv support enabled, each SparkR job gets its own library directory, bound to the executor's YARN container. All necessary third-party libraries are installed into that directory, and when the Spark job finishes the directory is deleted, so there is no interference with other Spark jobs' environments.
SparkR now supports the following two scenarios:
1) On an Internet-enabled cluster, packages can be installed (for example from CRAN) directly on the executors, so that those R packages can be used inside tasks.
2) On an isolated cluster, the user downloads the required packages ahead of time on the driver (or another connected machine) and adds them to the job. Spark distributes the packages to the executors, and they are installed when each executor runs.
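The isolated-cluster flow can be sketched with the public SparkR APIs `spark.addFile` and `spark.getSparkFiles` (both available since Spark 2.1): ship a pre-downloaded source package to the executors and install it inside the worker function. The package name `magrittr`, the archive path, and the per-job library location below are illustrative assumptions, not part of the feature itself.

```r
library(SparkR)
sparkR.session(master = "yarn")

# Assume the driver host already holds a source package that was
# downloaded ahead of time, e.g. from an internal mirror (path is
# illustrative).
spark.addFile("/tmp/pkgs/magrittr_2.0.3.tar.gz")

result <- spark.lapply(1:4, function(x) {
  # Each executor installs the shipped archive into a job-scoped
  # library directory, leaving the executor's native R library
  # untouched.
  lib <- file.path(tempdir(), "sparkr-lib")
  dir.create(lib, showWarnings = FALSE)
  install.packages(spark.getSparkFiles("magrittr_2.0.3.tar.gz"),
                   repos = NULL, type = "source", lib = lib)
  library(magrittr, lib.loc = lib)
  x %>% sqrt()
})
```

Because the archive travels with the job, no executor ever needs outbound network access, which is what makes this pattern suitable for air-gapped deployments.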
In both scenarios, R packages are installed into a separate local directory on each executor, per user, and do not pollute the executor's native R environment. The directory is deleted after the executor exits.
The virtualenv support for SparkR differs from PySpark's. SparkR is an interactive analytics tool, so people install third-party packages frequently during a session rather than declaring them all in advance. A user who opens a spark-shell or a SparkR interpreter in Zeppelin can try different native R packages over the course of that session. And since users may have cached data in the executors and do not want to restart the session at the cost of losing it, this improvement lets them pull in new R packages on the executors without a restart.
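The interactive, Internet-enabled case can be sketched the same way: mid-session, a user installs a CRAN package from inside a worker function, without restarting the session or touching cached data. The package name `stringr`, the CRAN mirror URL, and the per-job library path are illustrative assumptions.

```r
library(SparkR)
sparkR.session(master = "yarn")

# Mid-session, the user decides a new package is needed inside a UDF.
padded <- spark.lapply(1:3, function(x) {
  # Job-scoped library directory on the executor; the system R
  # library is never modified.
  lib <- file.path(tempdir(), "sparkr-lib")
  dir.create(lib, showWarnings = FALSE)
  .libPaths(c(lib, .libPaths()))
  if (!requireNamespace("stringr", quietly = TRUE)) {
    # Install straight from CRAN on the executor; this requires the
    # executors to have Internet access (mirror URL is illustrative).
    install.packages("stringr",
                     repos = "https://cloud.r-project.org", lib = lib)
  }
  library(stringr, lib.loc = lib)
  str_pad(as.character(x), 3, pad = "0")
})
```

The `requireNamespace` guard makes repeated calls cheap: once a long-lived executor has installed the package, later tasks in the same session reuse it instead of reinstalling.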