Underneath, Sqoop runs a MapReduce job to move the data from the relational database into Hadoop. If you are moving a large amount of data with Sqoop, I don't think you can avoid MapReduce. Can I ask why you want to avoid MapReduce? If you have a different use case, other tools exist that avoid it.
Curious why the mapper aspect is a concern for you. Sqoop builds the mappers for you and runs them in parallel, with you deciding how many you want (the default is 4, but there is no reason you could not use 1). It evaluates how best to split the job. It also saves off the generated mapper code in case you want to reuse or tweak it for future work.
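For reference, here is a minimal sketch of controlling mapper parallelism from the command line; the JDBC URL, username, table, and target directory are hypothetical placeholders:

```shell
# Import with a single mapper: with --num-mappers 1 there is nothing
# to split, so Sqoop runs one map task and no --split-by column is needed.
# (connection string, user, table, and paths are placeholders)
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username sqoop_user -P \
  --table small_table \
  --num-mappers 1 \
  --target-dir /user/hive/warehouse/small_table
```

Even with one mapper this is still a MapReduce job; it just runs without parallel splits.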
This is out of curiosity: I want to test whether Sqoop can work without mappers. I am trying to compare it with the fetch operation in Hive, where MapReduce can be bypassed.
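For comparison, the Hive behavior referred to here is the fetch-task optimization: a simple query with no joins or aggregations is served directly from storage without launching a MapReduce (or Tez) job. A sketch, with a placeholder table name:

```shell
# hive.fetch.task.conversion=more lets simple SELECTs (filters, LIMIT,
# column projection) run as a client-side fetch task -- no job is launched.
hive -e "SET hive.fetch.task.conversion=more;
         SELECT * FROM small_table LIMIT 10;"
```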
What happens when eval statements are used in Sqoop? Is MapReduce bypassed, or simply not triggered by Sqoop? Is there any way to do the same for an import of a small table, say fewer than 10 records?
Good question. I have only used Sqoop with the mapper aspect. Sqoop 1 is a client-based tool, so the eval tool is more than likely just using JDBC from the client JVM. There are other ways to import data and apply structure, if that is what you are looking for. The Hive view can give you some ideas around quickly importing data and applying structure, if you are thinking about the self-service aspects. The Tez engine with LLAP has really improved query access times.
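To make the eval point concrete, here is a sketch of what it looks like in practice: the statement is executed over JDBC from the client JVM and the rows are printed to stdout, with no MapReduce job launched. Connection details are hypothetical placeholders:

```shell
# sqoop eval runs the query through JDBC on the client and prints
# the result set; no mappers are created. Useful for previewing a
# table before a real import, but not for moving data at scale.
sqoop eval \
  --connect jdbc:mysql://db.example.com/sales \
  --username sqoop_user -P \
  --query "SELECT * FROM small_table LIMIT 10"
```

Note that eval only displays results; it does not write anything to HDFS, so it is not a substitute for an import even on tiny tables.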