The odd thing in my scenario is that even though my statement ends with the following message:
17/08/14 15:05:02 INFO mapreduce.Job: Counters: 8
	Job Counters
		Failed map tasks=1
		Launched map tasks=1
		Rack-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=5548
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=2774
		Total vcore-milliseconds taken by all map tasks=2774
		Total megabyte-milliseconds taken by all map tasks=4260864
17/08/14 15:05:02 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/08/14 15:05:02 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 19.3098 seconds (0 bytes/sec)
17/08/14 15:05:02 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/08/14 15:05:02 INFO mapreduce.ExportJobBase: Exported 0 records.
17/08/14 15:05:02 ERROR mapreduce.ExportJobBase: Export job failed!
17/08/14 15:05:02 ERROR tool.ExportTool: Error during export: Export job failed!
the table in my SQL database nevertheless contains all the records from my two input .txt files. So in practice the export seems to have worked despite the error, but I'm not sure, so if someone can explain this behavior in a technical sense I would appreciate it.
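From what I understand, a Sqoop export is not atomic: it runs as several parallel map tasks, and each task commits its own inserts in separate transactions, so rows committed by tasks (or task attempts) that ran before the failure stay in the table even though the job as a whole is reported as failed. Sqoop's `--staging-table` option is meant to avoid exactly this kind of partially-visible result: rows land in a staging table first and are moved to the target table only if the whole job succeeds. A sketch of what I would try (connection string, database, user, and table names here are placeholders, not my real ones):

```shell
# Sketch, not my exact command: export through a staging table so the
# target table is only populated if every map task succeeds.
# "mydb", "myuser", "mytable", "mytable_staging" and the export dir
# are hypothetical names for illustration.
sqoop export \
  --connect jdbc:mysql://localhost:3306/mydb \
  --username myuser -P \
  --table mytable \
  --staging-table mytable_staging \
  --clear-staging-table \
  --export-dir /user/myuser/input_files
```

The staging table must have the same structure as the target table, and `--clear-staging-table` empties any leftovers from a previous failed run before the export starts.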