<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Sqoop import does not work anymore in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195159#M71351</link>
    <description>&lt;P&gt;Hello, I think the problem comes from a threshold of 1000 defined in FileOutputCommitterContainer.java.&lt;/P&gt;&lt;P&gt;Indeed:&lt;/P&gt;&lt;P&gt;On one side, I get this error:&lt;/P&gt;&lt;PRE&gt;…17/10/21 02:12:45 INFO mapreduce.Job: Job job_1505194606915_0236 failed with state FAILED due to: Job commit failed: org.apache.hive.hcatalog.common.HCatException : 2012 : Moving of data failed during commit : Could not find a unique destination path for move: file = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.04665097541205321/part-m-00000 , src = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.04665097541205321, dest = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob  at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.getFinalPath(FileOutputCommitterContainer.java:662)  at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.moveTaskOutputs(FileOutputCommitterContainer.java:515)…&lt;/PRE&gt;&lt;P&gt;and in FileOutputCommitterContainer.java I can see&lt;/P&gt;&lt;PRE&gt;Could not find a unique destination path for move&lt;/PRE&gt;&lt;P&gt;when counter = maxAppendAttempts = APPEND_COUNTER_WARN_THRESHOLD = 1000.&lt;/P&gt;&lt;P&gt;On the other side, I have:&lt;/P&gt;&lt;PRE&gt;hdfs dfs -ls /data/hive/crim.db/atlas_stats_clob/part-m* | wc -l

999&lt;/PRE&gt;&lt;P&gt;Is there a way to increase this threshold?&lt;/P&gt;</description>
    <pubDate>Mon, 20 Nov 2017 18:09:06 GMT</pubDate>
    <dc:creator>f_rey</dc:creator>
    <dc:date>2017-11-20T18:09:06Z</dc:date>
    <item>
      <title>Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195157#M71349</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;We have a Hadoop cluster with 3 nodes on which a Sqoop import job had been working very well until a few days ago.&lt;BR /&gt;The external table now contains 999 files (is that a maximum number?).&lt;/P&gt;&lt;P&gt;This is the import command:&lt;/P&gt;&lt;PRE&gt;sqoop import -D oraoop.locations=hadop202.mickey.int -D mapred.map.max.attempts=1 -D oraoop.import.consistent.read=false -D oraoop.timestamp.string=false --connect jdbc:oracle:thin:@//CRIDB101:1521/appli --username sqoop -password '******' --table=ATLAS_STATS_20171114 --columns=APPLICATION,USERNAME,OFFICE,STAT_TYPE,STAT_KEY,STAT_INFO,TIME_STAMP,REQUESTER,DETAIL_INFO_1,DETAIL_INFO_2,DETAIL_INFO_3,DETAIL_INFO_4,OWNER,STATS_ID,DB_NAME,PARAMS --where "sqoop = 'Z'" --hcatalog-database=crim --hcatalog-table=atlas_stats_clob --num-mappers=2 --split-by=TIME_STAMP&lt;/PRE&gt;&lt;P&gt;and this is the error we get:&lt;/P&gt;&lt;PRE&gt;17/11/14 16:17:31 INFO mapreduce.Job: Job job_1510660800260_0022 failed with state FAILED due to: Job commit failed: org.apache.hive.hcatalog.common.HCatException : 2012 : Moving of data failed during commit : Could not find a unique destination path for move: file = hdfs://hadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.46847233143209766/part-m-00000 , src = hdfs://hadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.46847233143209766, dest = hdfs://hadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob
&lt;/PRE&gt;&lt;P&gt;Thanks for your help.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Nov 2017 23:31:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195157#M71349</guid>
      <dc:creator>f_rey</dc:creator>
      <dc:date>2017-11-14T23:31:20Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195158#M71350</link>
      <description>&lt;P&gt;I did an export/import into another table:&lt;/P&gt;&lt;PRE&gt;export table atlas_stats_clob to '/data/hive/export/';
import table atlas_imported from '/data/hive/export/data/';&lt;/PRE&gt;&lt;P&gt;and tried again with the same Sqoop import options on the new table:&lt;/P&gt;&lt;PRE&gt;sqoop import -D oraoop.locations=hadop202.mickey.int -D mapred.map.max.attempts=1 -D oraoop.import.consistent.read=false -D oraoop.timestamp.string=false --connect jdbc:oracle:thin:@//CRIDB101:1521/appli --username sqoop -password '******' --table=ATLAS_STATS_20171114 --columns=APPLICATION,USERNAME,OFFICE,STAT_TYPE,STAT_KEY,STAT_INFO,TIME_STAMP,REQUESTER,DETAIL_INFO_1,DETAIL_INFO_2,DETAIL_INFO_3,DETAIL_INFO_4,OWNER,STATS_ID,DB_NAME,PARAMS --where "sqoop = 'Z'" --hcatalog-database=crim --hcatalog-table=atlas_imported --num-mappers=2 --split-by=TIME_STAMP&lt;/PRE&gt;&lt;P&gt;but I get the same issue. Is there a limit on the number of files for a table?&lt;/P&gt;</description>
      <pubDate>Thu, 16 Nov 2017 21:31:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195158#M71350</guid>
      <dc:creator>f_rey</dc:creator>
      <dc:date>2017-11-16T21:31:53Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195159#M71351</link>
      <description>&lt;P&gt;Hello, I think the problem comes from a threshold of 1000 defined in FileOutputCommitterContainer.java.&lt;/P&gt;&lt;P&gt;Indeed:&lt;/P&gt;&lt;P&gt;On one side, I get this error:&lt;/P&gt;&lt;PRE&gt;…17/10/21 02:12:45 INFO mapreduce.Job: Job job_1505194606915_0236 failed with state FAILED due to: Job commit failed: org.apache.hive.hcatalog.common.HCatException : 2012 : Moving of data failed during commit : Could not find a unique destination path for move: file = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.04665097541205321/part-m-00000 , src = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob/_SCRATCH0.04665097541205321, dest = hdfs://vpbshadop202.mickey.int:8020/data/hive/crim.db/atlas_stats_clob  at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.getFinalPath(FileOutputCommitterContainer.java:662)  at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.moveTaskOutputs(FileOutputCommitterContainer.java:515)…&lt;/PRE&gt;&lt;P&gt;and in FileOutputCommitterContainer.java I can see&lt;/P&gt;&lt;PRE&gt;Could not find a unique destination path for move&lt;/PRE&gt;&lt;P&gt;when counter = maxAppendAttempts = APPEND_COUNTER_WARN_THRESHOLD = 1000.&lt;/P&gt;&lt;P&gt;On the other side, I have:&lt;/P&gt;&lt;PRE&gt;hdfs dfs -ls /data/hive/crim.db/atlas_stats_clob/part-m* | wc -l

999&lt;/PRE&gt;&lt;P&gt;Is there a way to increase this threshold?&lt;/P&gt;</description>
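      <!--
        A simplified illustration (not the actual Hive source) of the behaviour described
        above: getFinalPath() keeps appending a numeric suffix to the destination file
        name until it finds one that does not exist yet, and gives up after
        maxAppendAttempts attempts (quoted above as equal to
        APPEND_COUNTER_WARN_THRESHOLD = 1000), throwing the HCatException 2012 seen in
        the logs. Only the names quoted in the post come from the real class; the method
        signature, the "_copy_" suffix and the rest are illustrative assumptions.

        import java.io.IOException;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hive.hcatalog.common.ErrorType;
        import org.apache.hive.hcatalog.common.HCatException;

        class FinalPathSketch {
          private static final int APPEND_COUNTER_WARN_THRESHOLD = 1000;
          private final int maxAppendAttempts = APPEND_COUNTER_WARN_THRESHOLD;

          Path getFinalPath(FileSystem fs, Path file, Path src, Path dest) throws IOException {
            Path candidate = new Path(dest, file.getName());   // e.g. .../part-m-00000
            for (int counter = 1; counter <= maxAppendAttempts; counter++) {
              if (!fs.exists(candidate)) {
                return candidate;                              // unused destination found
              }
              // destination already taken: try the next suffixed name,
              // e.g. part-m-00000_copy_1, part-m-00000_copy_2, ...
              candidate = new Path(dest, file.getName() + "_copy_" + counter);
            }
            // with about a thousand clashing files already in the table directory,
            // the counter is exhausted and the job commit fails
            throw new HCatException(ErrorType.ERROR_MOVE_FAILED,
                "Could not find a unique destination path for move: file = " + file
                + " , src = " + src + ", dest = " + dest);
          }
        }
      -->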
      <pubDate>Mon, 20 Nov 2017 18:09:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195159#M71351</guid>
      <dc:creator>f_rey</dc:creator>
      <dc:date>2017-11-20T18:09:06Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195160#M71352</link>
      <description>&lt;P&gt;The external table having 999 files was the problem.&lt;/P&gt;</description>
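      <!--
        As a side note, the count that the earlier reply obtains with
        "hdfs dfs -ls /data/hive/crim.db/atlas_stats_clob/part-m* | wc -l" can also be
        obtained through the HDFS Java API. The sketch below only illustrates that check;
        the class name is made up and the hard-coded path, taken from the error log, is
        used purely as an example.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class CountPartFiles {
          public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // every file matched here forces the job committer to try one more
            // suffixed destination name before it gives up
            FileStatus[] parts = fs.globStatus(
                new Path("/data/hive/crim.db/atlas_stats_clob/part-m*"));
            int existing = (parts == null) ? 0 : parts.length;
            System.out.println("Existing part-m files: " + existing);
          }
        }
      -->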
      <pubDate>Mon, 27 Nov 2017 15:49:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195160#M71352</guid>
      <dc:creator>f_rey</dc:creator>
      <dc:date>2017-11-27T15:49:05Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195161#M71353</link>
      <description>&lt;P&gt;Could you try that command with the parameter "--create-hcatalog-table"?&lt;/P&gt;</description>
      <pubDate>Mon, 27 Nov 2017 16:17:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195161#M71353</guid>
      <dc:creator>MindGlass</dc:creator>
      <dc:date>2017-11-27T16:17:33Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop import does not work anymore</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195162#M71354</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We recently ran into the same problem after a few years of successful imports.&lt;BR /&gt;Did you find a way to overcome it?&lt;/P&gt;</description>
      <pubDate>Mon, 25 Feb 2019 21:53:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-does-not-work-anymore/m-p/195162#M71354</guid>
      <dc:creator>skerjanec</dc:creator>
      <dc:date>2019-02-25T21:53:01Z</dc:date>
    </item>
  </channel>
</rss>