Persisted RDD data is stored either in memory or on disk, depending on the storage level you specify. When it is stored in memory, each partition of the RDD is kept on the executor where it was computed. If all the partitions end up on a single executor, the entire RDD is cached there, and unless an operation causes a shuffle, all subsequent operations run on that same executor. A shuffle can be triggered explicitly (with repartition, for instance) or implicitly (operations like groupBy typically cause one).
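A minimal sketch of the above (assuming an existing SparkContext `sc`; the input path is hypothetical):

```scala
import org.apache.spark.storage.StorageLevel

// Persist in memory; partitions will live on the executors that compute them.
val rdd = sc.textFile("hdfs:///some/path").persist(StorageLevel.MEMORY_ONLY)

rdd.count() // an action materializes the cache

// Narrow transformation: no shuffle, runs where the cached partitions live.
val lengths = rdd.map(_.length)

// Explicit shuffle: repartition redistributes data across executors.
val redistributed = rdd.repartition(10)

// Implicit shuffle: groupBy moves records with the same key together.
val grouped = rdd.groupBy(line => line.take(1))
```

Whether the narrow transformations actually all ran on one executor is something you can then verify in the UI, as described below.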
In any case, from the Spark UI (port 4040 on the node where the driver is running; if you are using YARN, you can reach it from the ResourceManager UI through the "ApplicationMaster" link of your Spark application) you can check where your data is stored (in the Executors tab) and whether subsequent operations all run on the same executor (in the Stages page of the relevant job).