But Apache's documentation for the R interpreter says:
To run R code and visualize plots in Apache Zeppelin, you will need R on your master node (or your dev laptop).
Also, does the SparkR utility distribute the load across the cluster automatically, or do we need to set additional properties for that?
I have the Spark interpreter configured with the property master=yarn-client, and I have set SPARK_HOME. Is this enough?
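For reference, this is roughly what my current setup looks like (the paths below are placeholders, not my actual ones):

```shell
# conf/zeppelin-env.sh (example path -- adjust to your Spark install)
export SPARK_HOME=/usr/lib/spark

# Spark interpreter property set in the Zeppelin interpreter settings UI:
#   master = yarn-client
```

With only these two settings, I am unsure whether SparkR jobs actually run on the YARN cluster or just locally on the node where Zeppelin is installed.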