Thanks for your question.
Standalone R and Python jobs run only on the CDSW edge nodes, where we have more control over dependency management through Docker. However, these jobs can push workloads into the cluster using tools such as PySpark, sparklyr, Impala, and Hive. This gives you full dependency management for R and Python in the edge environment while still scaling specific workloads out to the cluster. There is currently no way to run the R and Python jobs themselves under YARN.
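For example, here is a minimal sparklyr sketch of that pattern (not an official CDSW template; it assumes sparklyr is installed in the session's environment and the Spark/YARN client configuration is available on the edge node, and the app and table names are just placeholders):

```r
# A minimal sketch, assuming sparklyr is installed in the CDSW session's
# environment and the cluster's Spark/YARN client configs are on the edge node.
library(sparklyr)
library(dplyr)

# The R driver process stays on the CDSW edge node; "yarn-client" asks YARN
# to schedule the Spark executors inside the cluster.
sc <- spark_connect(master = "yarn-client", app_name = "cdsw-example")

# Copy a small local data frame up to Spark, then aggregate it.
# The aggregation runs as Spark jobs on the cluster executors,
# not in the local R process on the edge node.
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_demo", overwrite = TRUE)
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()

spark_disconnect(sc)
```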
As for SparkR, we recommend using sparklyr instead (as in the example above), although sparklyr is not something we directly support.
I hope that is helpful.
Tristan