By default, Spark looks for files in HDFS; if you want to load a file from the local filesystem instead, you need to prepend "file://" to the path. So your code becomes:
Dataset<Row> jsonTest = spark.read().json("file:///tmp/testJSON.json");
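To see why the prefix matters: a bare path like /tmp/testJSON.json is resolved against Spark's default filesystem (usually HDFS), while the file:// scheme pins it to the local disk, exactly like a java.net.URI. A small plain-Java sketch (no Spark needed, just to illustrate the URI form):

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileUriDemo {
    public static void main(String[] args) {
        // The file:// scheme makes the filesystem explicit; without it,
        // Spark would resolve the path against its default filesystem.
        URI uri = URI.create("file:///tmp/testJSON.json");
        System.out.println(uri.getScheme()); // prints "file"

        // The scheme-less local path Spark ultimately opens:
        Path p = Paths.get(uri);
        System.out.println(p); // prints "/tmp/testJSON.json"
    }
}
```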
However, this is a problem when you submit in cluster mode, because the job executes on the worker nodes: every worker would need the file at that exact local path, and the read fails otherwise. To work around this, pass the file to spark-submit with the --files option, which ships it to the working directory of each executor (and of the driver, in cluster mode), so you can refer to it by its bare file name.
For example, if you submitted the following way:
> spark-submit --master <your_master> --files /tmp/testJSON.json --deploy-mode cluster --class <main_class> <application_jar>
then you can simply read the file the following way:
Dataset<Row> jsonTest = spark.read().json("testJSON.json");