Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.


Sqoop GC overhead limit exceeded after CDH5.2 update


Hi,

 

We updated Sqoop from CDH 5.0.1 to CDH 5.2, and now it fails every time with a "GC overhead limit exceeded" error.

The old version was able to import over 14 GB of data through a single mapper; the import now fails whenever a mapper receives too many rows. A heap dump showed the heap (-Xmx 1700M) completely filled by over 3.5 million rows of data.

The connector is MySQL JDBC (Connector/J) version 5.1.33, and the job imports the data as a text file into a Hive table.
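For context, Sqoop exposes a `--fetch-size` option that caps how many rows a mapper asks the JDBC driver to buffer per round trip, which is the usual first setting to try for this symptom. A minimal sketch of such an invocation, assuming hypothetical connection details, table name, and target directory:

```shell
# Hypothetical sketch: limit rows buffered per fetch so a mapper
# does not hold millions of rows in the JVM heap at once.
# Host, database, table, and paths below are placeholders.
sqoop import \
  --connect jdbc:mysql://db-host/mydb \
  --username user -P \
  --table big_table \
  --hive-import \
  --as-textfile \
  --fetch-size 1000 \
  --target-dir /user/hive/warehouse/big_table
```

Note that MySQL Connector/J materializes the entire result set client-side by default, so whether a given fetch size actually streams depends on the driver's streaming rules as well as on Sqoop's setting.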

 

Can I avoid this with a setting, or is this a bug that should be filed in JIRA?

 

Thank you,

Jürgen
