
Sqoop import from Netezza to HDFS failing with java.lang.ArrayIndexOutOfBoundsException

Contributor

I am able to successfully import a few tables from Netezza to HDFS.

The failing tables have a primary key constraint on Netezza, and I see that Sqoop's split-by is using the primary key column. I tried changing the split-by to a different column and increased the mapper count as well.
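
For reference, the command is along these lines (the host, database, table, and column names below are placeholders, not the real values):

sqoop import \
    --connect jdbc:netezza://nz-host:5480/MYDB \
    --username myuser \
    --password-file /user/myuser/.nz.password \
    --table MY_TABLE \
    --split-by ANOTHER_COLUMN \
    --num-mappers 8 \
    --target-dir /data/netezza/MY_TABLE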

However, I am getting the following error message for a few tables.

16/03/15 14:00:23 INFO mapreduce.Job: Task Id : attempt_1456951008977_0160_m_000000_0, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException
    at java.lang.System.arraycopy(Native Method)
    at org.netezza.sql.NzConnection.receiveDbosTuple(NzConnection.java:739)
    at org.netezza.internal.QueryExecutor.getNextResult(QueryExecutor.java:177)
    at org.netezza.internal.QueryExecutor.execute(QueryExecutor.java:73)
    at org.netezza.sql.NzConnection.execute(NzConnection.java:2688)
    at org.netezza.sql.NzStatement._execute(NzStatement.java:849)
    at org.netezza.sql.NzPreparedStatament.executeQuery(NzPreparedStatament.java:169)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/03/15 14:00:43 INFO mapreduce.Job: Task Id : attempt_1456951008977_0160_m_000000_1, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException
    at org.netezza.sql.NzConnection.receiveDbosTuple(NzConnection.java:739)
    at org.netezza.internal.QueryExecutor.update(QueryExecutor.java:340)
    at org.netezza.sql.NzConnection.updateResultSet(NzConnection.java:2704)
    at org.netezza.sql.NzResultSet.next(NzResultSet.java:1924)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:237)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

1 ACCEPTED SOLUTION


Aruna and I fixed this by upgrading the Netezza JDBC driver... the last thing we checked, of course.

Lesson learned: make sure third-party vendor JARs are up to date (and bug-free).
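
In case it helps anyone else: after downloading the newer driver, we simply replaced the old JAR on the Sqoop classpath and re-ran the same import. The JAR is typically named nzjdbc.jar, and the path below is just an example; it depends on your install:

cp nzjdbc.jar $SQOOP_HOME/lib/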


2 REPLIES

Super Collaborator

Try increasing the JVM heap size, i.e., the -Xmx and -Xms JVM options.
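
For example, you can raise the mapper heap per job by passing generic Hadoop options right after the tool name (the sizes below are only illustrative, and the heap must stay below the container size):

sqoop import \
    -Dmapreduce.map.java.opts="-Xms1024m -Xmx2048m" \
    -Dmapreduce.map.memory.mb=2560 \
    --connect jdbc:netezza://nz-host:5480/MYDB \
    ...

The remaining import arguments stay the same.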
