I need to write Spark job output files to an NFS mount point from the spark2 shell. Can you please let me know if there is any way to do it by specifying an absolute path in the spark2 shell?
Can you share your code example? There should be an option to specify mode='overwrite' when saving a DataFrame. Note that the old DataFrame.save() method was removed in Spark 2; use the DataFrameWriter API instead, and prefix the path with the file:// scheme so Spark writes to the local filesystem (your NFS mount) rather than the cluster's default filesystem (usually HDFS):

myDataFrame.write.mode('overwrite').parquet('file:///output/folder/path')

Keep in mind that when running on a cluster, the executors do the writing, so the NFS mount must be available at the same path on every executor node.