<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: issue in importing bulk data from oracle in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180500#M80394</link>
    <description>&lt;P&gt;Thanks for your quick reply.&lt;/P&gt;&lt;P&gt;Is there another way to accomplish the import that uses less memory, even if it is slower? My memory resources are limited: about 55 GB is assigned to YARN.&lt;/P&gt;&lt;P&gt;Another question: what is the proper memory size for mappers? I searched a lot and concluded that I need to reduce each mapper's memory and increase the number of mappers, as you said, e.g. 100 mappers. Take a look at my &lt;A href="http://data-flair.training/forums/topic/how-to-optimize-mapreduce-jobs-in-hadoop"&gt;reference&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;Does this sound OK to you?&lt;/P&gt;&lt;P&gt;P.S. My mappers have 3 GB and my reducers have 2 GB.&lt;/P&gt;</description>
    <pubDate>Sun, 08 Jul 2018 15:53:51 GMT</pubDate>
    <dc:creator>alizadeh_uut1</dc:creator>
    <dc:date>2018-07-08T15:53:51Z</dc:date>
    <item>
      <title>issue in importing bulk data from oracle</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180498#M80392</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I run this command to import some data from Oracle. It works, and the result has 1.3 million records.&lt;/P&gt;&lt;PRE&gt;sqoop-import --connect jdbc:oracle:thin:@//serverIP:Port/xxxx --query "SELECT col1,col2,col3 FROM table WHERE condition AND \$CONDITIONS" --target-dir /user/root/myresult --split-by col1 -m 10 --username xxx --password xxx&lt;/PRE&gt;&lt;P&gt;But when I remove the condition in order to import the whole table, which has 12 million records, it fails.&lt;/P&gt;&lt;P&gt;The first map tasks are always logged as succeeded and the last one just hangs. But when I check the MapReduce logs for the succeeded maps, I see that they actually failed with this message:&lt;/P&gt;&lt;PRE&gt;Container killed by the ApplicationMaster. Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143.&lt;/PRE&gt;&lt;P&gt;I searched and found&lt;/P&gt;&lt;P&gt;&lt;A href="https://stackoverflow.com/questions/42306865/sqoop-job-get-stuck-when-import-data-from-oracle-to-hive" target="_blank"&gt;https://stackoverflow.com/questions/42306865/sqoop-job-get-stuck-when-import-data-from-oracle-to-hive&lt;/A&gt;&lt;/P&gt;&lt;P&gt;which describes the same issue I have, but that post hasn't been answered yet. It would be helpful if you could take a look.&lt;/P&gt;</description>
      <pubDate>Sat, 07 Jul 2018 20:17:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180498#M80392</guid>
      <dc:creator>alizadeh_uut1</dc:creator>
      <dc:date>2018-07-07T20:17:27Z</dc:date>
    </item>
    <item>
      <title>Re: issue in importing bulk data from oracle</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180499#M80393</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/13184/alizadehuut1.html" nodeid="13184"&gt;@Sara Alizadeh&lt;/A&gt;, The first command was executed with 10 mappers (-m 10) for 1.3 millions and your mappers might be going out of memory if you are using the same number of mappers for 12 million records. &lt;/P&gt;&lt;P&gt;Do increase the number of mappers (i'd say 100) and re-run the job. &lt;/P&gt;</description>
      <pubDate>Sun, 08 Jul 2018 01:56:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180499#M80393</guid>
      <dc:creator>sandyy006</dc:creator>
      <dc:date>2018-07-08T01:56:31Z</dc:date>
    </item>
    <item>
      <title>Re: issue in importing bulk data from oracle</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180500#M80394</link>
      <description>&lt;P&gt;Thanks for your quick reply.&lt;/P&gt;&lt;P&gt;Is there another way to accomplish the import that uses less memory, even if it is slower? My memory resources are limited: about 55 GB is assigned to YARN.&lt;/P&gt;&lt;P&gt;Another question: what is the proper memory size for mappers? I searched a lot and concluded that I need to reduce each mapper's memory and increase the number of mappers, as you said, e.g. 100 mappers. Take a look at my &lt;A href="http://data-flair.training/forums/topic/how-to-optimize-mapreduce-jobs-in-hadoop"&gt;reference&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;Does this sound OK to you?&lt;/P&gt;&lt;P&gt;P.S. My mappers have 3 GB and my reducers have 2 GB.&lt;/P&gt;</description>
      <pubDate>Sun, 08 Jul 2018 15:53:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180500#M80394</guid>
      <dc:creator>alizadeh_uut1</dc:creator>
      <dc:date>2018-07-08T15:53:51Z</dc:date>
    </item>
    <item>
      <title>Re: issue in importing bulk data from oracle</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180501#M80395</link>
      <description>&lt;P&gt;I solved this issue.&lt;/P&gt;&lt;P&gt;Just posting this for those who may run into the same problem.&lt;/P&gt;&lt;P&gt;I strengthened the network link between my database and my big data servers. The link was slow, so the Sqoop transfer rate was very low.&lt;/P&gt;</description>
      <pubDate>Sat, 21 Jul 2018 15:06:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/issue-in-importing-bulk-data-from-oracle/m-p/180501#M80395</guid>
      <dc:creator>alizadeh_uut1</dc:creator>
      <dc:date>2018-07-21T15:06:18Z</dc:date>
    </item>
  </channel>
</rss>