<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133223#M23205</link>
    <description>Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException</description>
    <pubDate>Thu, 17 Mar 2016 22:58:29 GMT</pubDate>
    <dc:creator>adadi</dc:creator>
    <dc:date>2016-03-17T22:58:29Z</dc:date>
    <item>
      <title>Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133223#M23205</link>
      <description>&lt;P&gt;I am able to successfully import a few tables from Netezza to HDFS.&lt;/P&gt;&lt;P&gt;The failing tables have a primary key constraint on Netezza, and I see that Sqoop's split-by is using the primary key column. I tried changing the split-by to a different column and increased the split count as well.&lt;/P&gt;&lt;P&gt;However, I am still getting the following error message for a few tables:&lt;/P&gt;&lt;P&gt;16/03/15 14:00:23 INFO mapreduce.Job: Task Id : attempt_1456951008977_0160_m_000000_0, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException
  at java.lang.System.arraycopy(Native Method)
  at org.netezza.sql.NzConnection.receiveDbosTuple(NzConnection.java:739)
  at org.netezza.internal.QueryExecutor.getNextResult(QueryExecutor.java:177)
  at org.netezza.internal.QueryExecutor.execute(QueryExecutor.java:73)
  at org.netezza.sql.NzConnection.execute(NzConnection.java:2688)
  at org.netezza.sql.NzStatement._execute(NzStatement.java:849)
  at org.netezza.sql.NzPreparedStatament.executeQuery(NzPreparedStatament.java:169)
  at org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
  at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
  at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
  at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
  at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
  at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)&lt;/P&gt;&lt;P&gt;16/03/15 14:00:43 INFO mapreduce.Job: Task Id : attempt_1456951008977_0160_m_000000_1, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException
  at org.netezza.sql.NzConnection.receiveDbosTuple(NzConnection.java:739)
  at org.netezza.internal.QueryExecutor.update(QueryExecutor.java:340)
  at org.netezza.sql.NzConnection.updateResultSet(NzConnection.java:2704)
  at org.netezza.sql.NzResultSet.next(NzResultSet.java:1924)
  at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:237)
  at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
  at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
  at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
  at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)&lt;/P&gt;</description>
      <pubDate>Thu, 17 Mar 2016 22:58:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133223#M23205</guid>
      <dc:creator>adadi</dc:creator>
      <dc:date>2016-03-17T22:58:29Z</dc:date>
    </item>
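    A minimal sketch of the kind of import the question describes, assuming hypothetical connection details, table, and split column (the post does not include the exact command that was run):

      sqoop import \
        --connect jdbc:netezza://nz-host:5480/SALESDB \
        --username etl_user -P \
        --table ORDERS \
        --split-by CUSTOMER_ID \
        --num-mappers 8 \
        --target-dir /user/etl_user/orders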
    <item>
      <title>Re: Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133224#M23206</link>
      <description>&lt;P&gt;Try increasing the JVM heap size, i.e., the -Xmx and -Xms JVM options.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Mar 2016 20:38:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133224#M23206</guid>
      <dc:creator>mjohansson</dc:creator>
      <dc:date>2016-03-29T20:38:13Z</dc:date>
    </item>
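    If the heap-size suggestion above is applied to a Sqoop import, it is the map-task heap that matters, since DBRecordReader runs inside the mappers. A sketch using generic Hadoop options (MRv2 property names; the values and the job itself are illustrative, and the -D options must come before the Sqoop-specific arguments):

      sqoop import \
        -Dmapreduce.map.memory.mb=3072 \
        -Dmapreduce.map.java.opts=-Xmx2560m \
        --connect jdbc:netezza://nz-host:5480/SALESDB \
        --username etl_user -P \
        --table ORDERS \
        --split-by CUSTOMER_ID \
        --num-mappers 8 \
        --target-dir /user/etl_user/orders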
    <item>
      <title>Re: Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133225#M23207</link>
      <description>&lt;P&gt;Aruna and I fixed this by upgrading the Netezza JDBC driver... the last thing we checked, of course.&lt;/P&gt;&lt;P&gt;Lesson learned: make sure third-party vendor JARs are up to date (and bug-free).&lt;/P&gt;</description>
      <pubDate>Tue, 29 Mar 2016 21:32:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop-import-from-Netezza-to-HDFS-failing-with-java-lang/m-p/133225#M23207</guid>
      <dc:creator>joe_rochette</dc:creator>
      <dc:date>2016-03-29T21:32:27Z</dc:date>
    </item>
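    The fix reported above was a newer Netezza JDBC driver. On a CDH-style installation the third-party JDBC JAR typically sits in /var/lib/sqoop (or $SQOOP_HOME/lib); a sketch of swapping the driver and confirming the class is in the JAR that Sqoop loads (paths and file names are illustrative):

      # Back up the old driver and drop in the newer JAR obtained from IBM
      cp /var/lib/sqoop/nzjdbc.jar /var/lib/sqoop/nzjdbc.jar.bak
      cp /tmp/nzjdbc-new.jar /var/lib/sqoop/nzjdbc.jar

      # Confirm the Netezza driver class is present in the JAR on the classpath
      unzip -l /var/lib/sqoop/nzjdbc.jar | grep org/netezza/Driver.class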
  </channel>
</rss>

