I did not upgrade the existing Spark on the sandbox, but instead installed a second copy in a separate location while playing with Zeppelin, and it worked fine. Below is the script I used to set it up (see the readme for the Ambari service for Zeppelin for more info):
sudo useradd zeppelin
sudo su zeppelin
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.0-bin-hadoop2.6.tgz -O spark-1.5.0.tgz
tar -xzvf spark-1.5.0.tgz
export HDP_VER=`hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'`
echo "spark.driver.extraJavaOptions -Dhdp.version=$HDP_VER" >> spark-1.5.0-bin-hadoop2.6/conf/spark-defaults.conf
echo "spark.yarn.am.extraJavaOptions -Dhdp.version=$HDP_VER" >> spark-1.5.0-bin-hadoop2.6/conf/spark-defaults.conf
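To see what the `sed` step above is doing, here is a small sketch that runs the same substitution on a sample `hdp-select` output line (the sample version string is made up for illustration; on a real node the value comes from `hdp-select status hadoop-client`):

```shell
# Sample of what `hdp-select status hadoop-client` prints on an HDP node
sample="hadoop-client - 2.3.2.0-2950"

# The sed backreference \1 captures everything after "hadoop-client - ",
# leaving just the HDP version string
HDP_VER=$(echo "$sample" | sed 's/hadoop-client - \(.*\)/\1/')
echo "$HDP_VER"
```

This prints `2.3.2.0-2950`, which is then passed to the JVM as `-Dhdp.version` so Spark's YARN containers resolve the HDP-versioned paths correctly.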
@Saptak Sen Why try Spark 1.5.1 when you can try 1.5.2?
Spark behaves largely like a client application, so you can easily run multiple versions of Spark on one cluster. Simply download Spark from the Apache website - https://spark.apache.org/downloads.html - and run it directly (you can configure Zeppelin to use it, or just run it via the CLI).
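If you want Zeppelin to pick up the separately-installed Spark rather than the cluster default, a minimal sketch of the relevant `conf/zeppelin-env.sh` entries (the paths below are illustrative and assume the install location from the script above):

```shell
# conf/zeppelin-env.sh in the Zeppelin install directory (path is an assumption)

# Point Zeppelin at the separately-installed Spark instead of the stock one
export SPARK_HOME=/home/zeppelin/spark-1.5.0-bin-hadoop2.6

# Run notebooks against YARN; assumes the usual HDP Hadoop config location
export MASTER=yarn-client
export HADOOP_CONF_DIR=/etc/hadoop/conf
```

Restart Zeppelin after editing so the Spark interpreter picks up the new `SPARK_HOME`.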