Hi guys, I would like to know whether Sqoop can load many files at the same time, the way we load all the files in a folder into a Pig relation by indicating something like this in the LOAD:
test = LOAD '/user/me/datasets/*' USING PigStorage(',');
I have tried to apply the same logic in a Sqoop statement to load or update, in one shot, many files with the same structure into a MySQL table, with this code:
sqoop export \
  --connect jdbc:mysql://nn01.itversity.com/retail_export \
  --username retail_dba \
  --password itversity \
  --table roles \
  --update-key id_emp \
  --update-mode allowinsert \
  --export-dir /user/ingenieroandresangel/datasets/export/* \
  -m 1
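A variant I'm considering (assuming --export-dir can take a plain directory rather than a glob, which I haven't verified) is to point it at the folder itself so that every file inside it gets exported:

sqoop export \
  --connect jdbc:mysql://nn01.itversity.com/retail_export \
  --username retail_dba \
  --password itversity \
  --table roles \
  --update-key id_emp \
  --update-mode allowinsert \
  --export-dir /user/ingenieroandresangel/datasets/export \
  -m 1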
The weird thing in my scenario is that, even though at the end of my statement the system shows this message:
17/08/14 15:05:02 INFO mapreduce.Job: Counters: 8
        Job Counters
                Failed map tasks=1
                Launched map tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5548
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=2774
                Total vcore-milliseconds taken by all map tasks=2774
                Total megabyte-milliseconds taken by all map tasks=4260864
17/08/14 15:05:02 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/08/14 15:05:02 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 19.3098 seconds (0 bytes/sec)
17/08/14 15:05:02 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/08/14 15:05:02 INFO mapreduce.ExportJobBase: Exported 0 records.
17/08/14 15:05:02 ERROR mapreduce.ExportJobBase: Export job failed!
17/08/14 15:05:02 ERROR tool.ExportTool: Error during export: Export job failed!
the table in MySQL has loaded all the records from my two proposed txt files. So, in theory, it worked even with this error, but I'm not sure, so if someone can explain this behavior in a technical sense, I would appreciate it.