<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>sqoop import from Oracle to Hadoop not getting completed in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-import-from-oracle-to-Hadoop-not-getting-completed/m-p/67943#M79254</link>
    <description>Archived Cloudera community question: a Sqoop import from Oracle into Hadoop (CDH 5.14.2) hangs after the MapReduce job is submitted. The full question and the reply appear in the items below.</description>
    <pubDate>Fri, 16 Sep 2022 13:18:39 GMT</pubDate>
    <dc:creator>daniesh</dc:creator>
    <dc:date>2022-09-16T13:18:39Z</dc:date>
    <item>
      <title>sqoop import from Oracle to Hadoop not getting completed</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-import-from-oracle-to-Hadoop-not-getting-completed/m-p/67943#M79254</link>
      <description>&lt;P&gt;Hi all,&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am new to Big Data, and this is my first attempt at loading data from Oracle into Hadoop.&lt;/P&gt;&lt;P&gt;The import either takes a very long time or never completes.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hadoop version:&lt;/P&gt;&lt;P&gt;[oracle@ebsoim 11.1.0]$ hdfs version&lt;BR /&gt;Hadoop 2.6.0-cdh5.14.2&lt;BR /&gt;Subversion&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://github.com/cloudera/hadoop" target="_blank" rel="nofollow noopener noreferrer"&gt;http://github.com/cloudera/hadoop&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;-r 5724a4ad7a27f7af31aa725694d3df09a68bb213&lt;BR /&gt;Compiled by jenkins on 2018-03-27T20:40Z&lt;BR /&gt;Compiled with protoc 2.5.0&lt;BR /&gt;From source with checksum 302899e86485742c090f626a828b28&lt;BR /&gt;This command was run using /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hadoop-common-2.6.0-cdh5.14.2.jar&lt;BR /&gt;[oracle@ebsoim 11.1.0]$&lt;/P&gt;&lt;P&gt;It has been running for the last 3 hours, even though the select query returns only one row.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Below is the command used to import the data from Oracle into Hadoop, with its output:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[hdfs@ebsoim ~]$ sqoop import --connect jdbc:oracle:thin:@192.168.56.101:1526:PROD --query "select person_id from HR.PER_ALL_PEOPLE_F where \$CONDITIONS" --username apps -P --target-dir '/tmp/oracle' -m 1&lt;BR /&gt;Warning: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/../lib/sqoop/../accumulo does not exist! 
Accumulo imports will fail.&lt;BR /&gt;Please set $ACCUMULO_HOME to the root of your Accumulo installation.&lt;BR /&gt;18/06/05 22:50:39 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.14.2&lt;BR /&gt;Enter password:&lt;BR /&gt;18/06/05 22:50:41 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.&lt;BR /&gt;18/06/05 22:50:41 INFO manager.SqlManager: Using default fetchSize of 1000&lt;BR /&gt;18/06/05 22:50:41 INFO tool.CodeGenTool: Beginning code generation&lt;BR /&gt;18/06/05 22:50:41 INFO manager.OracleManager: Time zone has been set to GMT&lt;BR /&gt;18/06/05 22:50:41 INFO manager.SqlManager: Executing SQL statement: select person_id from HR.PER_ALL_PEOPLE_F where (1 = 0)&lt;BR /&gt;18/06/05 22:50:41 INFO manager.SqlManager: Executing SQL statement: select person_id from HR.PER_ALL_PEOPLE_F where (1 = 0)&lt;BR /&gt;18/06/05 22:50:41 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce&lt;BR /&gt;Note: /tmp/sqoop-hdfs/compile/43977b74d0f6d3f2adbad6c90968547f/QueryResult.java uses or overrides a deprecated API.&lt;BR /&gt;Note: Recompile with -Xlint:deprecation for details.&lt;BR /&gt;18/06/05 22:50:43 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/43977b74d0f6d3f2adbad6c90968547f/QueryResult.jar&lt;BR /&gt;18/06/05 22:50:43 INFO mapreduce.ImportJobBase: Beginning query import.&lt;BR /&gt;18/06/05 22:50:43 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar&lt;BR /&gt;18/06/05 22:50:43 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps&lt;BR /&gt;18/06/05 22:50:43 INFO client.RMProxy: Connecting to ResourceManager at ebsoim.hdfc.com/192.168.56.101:8032&lt;BR /&gt;18/06/05 22:50:47 INFO db.DBInputFormat: Using read commited transaction isolation&lt;BR /&gt;18/06/05 22:50:47 INFO mapreduce.JobSubmitter: number of splits:1&lt;BR /&gt;18/06/05 22:50:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528221835245_0002&lt;BR /&gt;18/06/05 22:50:49 INFO impl.YarnClientImpl: Submitted application application_1528221835245_0002&lt;BR /&gt;18/06/05 22:50:49 INFO mapreduce.Job: The url to track the job:&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="http://ebsoim.hdfc.com:8088/proxy/application_1528221835245_0002/" target="_blank" rel="nofollow noopener noreferrer"&gt;http://ebsoim.hdfc.com:8088/proxy/application_1528221835245_0002/&lt;/A&gt;&lt;BR /&gt;18/06/05 22:50:49 INFO mapreduce.Job: Running job: job_1528221835245_0002&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Every time, the job gets stuck at this point.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hadoop job.PNG" style="width: 600px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/4188iEF5DFF15D806681B/image-size/large?v=v2&amp;amp;px=999" role="button" title="hadoop job.PNG" alt="hadoop job.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help me with this; I have been trying for the last 6-7 days with no luck.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 13:18:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-import-from-oracle-to-Hadoop-not-getting-completed/m-p/67943#M79254</guid>
      <dc:creator>daniesh</dc:creator>
      <dc:date>2022-09-16T13:18:39Z</dc:date>
    </item>
    <item>
      <title>Re: sqoop import from Oracle to Hadoop not getting completed</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-import-from-oracle-to-Hadoop-not-getting-completed/m-p/69633#M79255</link>
      <description>Based on the screenshot, you have no active nodes: the only node in the cluster is in an unhealthy state.&lt;BR /&gt;&lt;BR /&gt;The job hung because it was waiting for resources that never became available.&lt;BR /&gt;&lt;BR /&gt;You need to fix the unhealthy node and make sure it is active before running the job again.</description>
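      The advice above can be verified from the command line before resubmitting the Sqoop job. A minimal sketch, assuming a YARN cluster where the yarn CLI is on the PATH; the sample line and node address below are illustrative, taken from the log in the question:

```shell
#!/bin/sh
# Check NodeManager health before resubmitting the Sqoop job.
# "yarn node -list -all" prints one line per NodeManager with its state
# (RUNNING, UNHEALTHY, LOST, ...); only run it when the CLI is installed.
if command -v yarn >/dev/null; then
    yarn node -list -all
fi

# The same check, scripted against a captured sample line
# (node address and state here are illustrative):
sample='ebsoim.hdfc.com:8041 UNHEALTHY ebsoim.hdfc.com:8042 0'
bad=$(printf '%s\n' "$sample" | awk '$2 != "RUNNING" {print $1}')
if [ -n "$bad" ]; then
    echo "unhealthy node(s): $bad"
fi
```

      A node usually reports UNHEALTHY when the NodeManager's disk-usage check on its local or log directories fails; the NodeManager web UI (default port 8042) shows the exact health report.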
      <pubDate>Fri, 06 Jul 2018 11:03:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/sqoop-import-from-oracle-to-Hadoop-not-getting-completed/m-p/69633#M79255</guid>
      <dc:creator>EricL</dc:creator>
      <dc:date>2018-07-06T11:03:11Z</dc:date>
    </item>
  </channel>
</rss>

