Welcome, Eswar!
Since you are using Spark 1.6, all you need is a Hive gateway to explore Hive tables from Spark SQL (no need to manually copy hive-site.xml around).
You can add/ensure that the Hive gateway role is present on the node from which you run spark-shell (in your case there is just one node, so it should be your QuickStart VM) via CM > Hive > Instances > Gateway Role.
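After adding the gateway role and deploying the client configuration from CM, you can sanity-check that the Hive client config actually landed on the node. The path below is the usual CDH client-config location, so treat it as an assumption for your VM:

```shell
# Verify the Hive client configuration was deployed to this node.
# /etc/hive/conf is the typical CDH client-config symlink; adjust if
# your cluster uses a different alternatives target.
ls -l /etc/hive/conf/hive-site.xml
```

If the file is present, spark-shell on that node will pick up the Hive metastore settings automatically.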

As for your request for sample code, you can start by creating a sequence (or an array) in the shell:
scala> val data = Seq(("Falcon", 10), ("IronMan", 40), ("BlackWidow", 10))
Next, parallelize the collection and convert the resulting RDD to a DataFrame:
scala> val df = sc.parallelize(data).toDF("Name", "Count")
After this, set the table's path under the Hive warehouse:
scala> val options = Map("path" -> "/user/hive/warehouse/avengers")
Then save it as a Hive table:
scala> df.write.options(options).saveAsTable("default.avengers")
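One caveat if you rerun the steps above: by default `saveAsTable` errors out when the table already exists. A minimal sketch, assuming Spark 1.6's `DataFrameWriter` API, is to set the save mode explicitly:

```scala
// Rerunning saveAsTable on an existing table throws an exception by
// default (SaveMode.ErrorIfExists); Overwrite replaces the table instead.
import org.apache.spark.sql.SaveMode

df.write
  .mode(SaveMode.Overwrite)
  .options(options)
  .saveAsTable("default.avengers")
```

Use `SaveMode.Append` instead if you want to add rows to the existing table rather than replace it.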
Finally, query the table using Spark SQL and Beeline:
scala> sqlContext.sql("select * from avengers").collect.foreach(println)
[Falcon, 10]
[IronMan, 40]
[BlackWidow, 10]
$ beeline …
> show tables;
> select * from avengers;
Falcon 10
IronMan 40
BlackWidow 10
Hope this helps. Let us know if you have already got past this, or if you are still stuck.
Good Luck!