A Hive table with the ACID property enabled will work well for your use case. With the help of the MERGE statement you will be able to perform these inserts/updates easily. In typical data warehousing work, SCD Type 2 tables are implemented easily with the MERGE function in Hive.
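For reference, the Type 2 logic that Hive's MERGE expresses declaratively (expire the current version when a matched row's attribute changed, insert a new version when no current match exists) can be sketched in plain Python; the `id`/`attr`/date column names here are hypothetical, for illustration only:

```python
from datetime import date

def scd2_merge(dimension, staging, today=date(2024, 1, 1)):
    """Apply SCD Type 2 logic to an in-memory dimension table.

    dimension: list of dicts with keys id, attr, start_date, end_date, is_current
    staging:   list of dicts with keys id, attr
    (Hypothetical schema; in Hive this whole function is one MERGE statement
    against an ACID table.)
    """
    staged = {row["id"]: row for row in staging}
    for row in dimension:
        if (row["is_current"] and row["id"] in staged
                and row["attr"] != staged[row["id"]]["attr"]):
            # matched row whose attribute changed: expire the old version
            row["end_date"] = today
            row["is_current"] = False
    current_ids = {r["id"] for r in dimension if r["is_current"]}
    for sid, srow in staged.items():
        if sid not in current_ids:
            # brand-new key, or a key we just expired: insert the new version
            dimension.append({"id": sid, "attr": srow["attr"],
                              "start_date": today, "end_date": None,
                              "is_current": True})
    return dimension
```

Unchanged rows are left alone, so re-running the merge with the same staging data is idempotent.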
As for Spark: it lets you do complex computation with much ease for inserts/updates, but when it comes to updating existing records it is better to go with Hive than Spark. Hope it helps!!
From the problem statement, you just have small files on the local file system and do not want any HDFS layer involved. Spark can do this, but it may be overkill for this scenario.
You can start with basic Java JDBC code to get this task accomplished. Alternatively, Apache NiFi, available within the HDF stack, can easily solve this problem for you.
You can simply design a dataflow that monitors your SFTP location for new files, pulls them into NiFi as soon as they land, optionally parses them or applies some transformations, and then performs a MySQL operation.
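That whole flow (watch, pull, parse, write to the database) is small enough to sketch directly in a script as well. A minimal polling version in Python, with sqlite3 standing in for MySQL and a hypothetical two-column CSV schema:

```python
import csv
import sqlite3
from pathlib import Path

def load_new_files(landing_dir, conn, seen):
    """Poll a landing directory, parse any unseen CSV files, insert their rows.

    sqlite3 stands in for MySQL here so the sketch is self-contained; in
    practice you would use mysql-connector-python (or JDBC from Java) with
    the same flow. The (name, value) schema is hypothetical.
    """
    conn.execute("CREATE TABLE IF NOT EXISTS records (name TEXT, value INTEGER)")
    for path in sorted(Path(landing_dir).glob("*.csv")):
        if path.name in seen:
            continue  # already processed this file on an earlier poll
        with path.open(newline="") as fh:
            rows = [(r["name"], int(r["value"])) for r in csv.DictReader(fh)]
        conn.executemany("INSERT INTO records (name, value) VALUES (?, ?)", rows)
        conn.commit()
        seen.add(path.name)
    return seen
```

Tracking processed filenames in `seen` keeps repeated polls from inserting duplicates, which is roughly what NiFi's file-pickup processors give you out of the box.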
I would not make a performance comparison of Spark and NiFi for this use case, since we are not even talking about running on a cluster. Given that the data volume is low and the processing is finite, I would prefer NiFi, since the entire life cycle gets reduced. Plus, it is UI driven.
Architecture-wise it is not a good option: Spark is a distributed processing framework (running on multiple nodes), and Spark writing to MySQL on a single node will create a performance bottleneck, since MySQL is not distributed. If you want to make full use of Spark's distributed processing capability, it is advisable to use either a distributed file system or a distributed database.