CDH 5.2 sqoop import job runs out of memory


Expert Contributor

I have a simple Sqoop 1 import job from MySQL to HDFS. After migrating it to CDH 5.2, I've run into problems.

Sqoop 1 runs out of memory during the import: it ignores the mapreduce.map.memory.mb=2048 setting, consumes more than the maximum allowed memory, and is killed by the NodeManager. There are only 4M rows, so I don't understand why this job is so memory intensive. It should just fetch a batch of records from MySQL and flush it to the mapper output.
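For illustration, the job is the plain single-table kind, roughly the following sketch (the connection string, table, and paths are placeholders, not my real values):

    sqoop import \
        -Dmapreduce.map.memory.mb=2048 \
        --connect jdbc:mysql://dbhost/mydb \
        --username myuser \
        --password-file /user/me/.mysql.password \
        --table my_table \
        --target-dir /data/my_table \
        --num-mappers 4

Note that the -D generic option has to come before the Sqoop-specific arguments for Hadoop to pick it up.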

 

3 Replies

Re: CDH 5.2 sqoop import job runs out of memory

Expert Contributor
Could you provide your Sqoop command?

Re: CDH 5.2 sqoop import job runs out of memory

Expert Contributor
One more thought: https://issues.apache.org/jira/browse/SQOOP-1617. It looks like that bug is what popped up here; the fix should be committed shortly.
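If that regression is what's biting you, the usual workaround until the fix ships is to force the MySQL JDBC driver back into row-streaming mode by passing Integer.MIN_VALUE as the fetch size. A minimal sketch with placeholder connection details; I haven't verified this exact invocation on CDH 5.2:

    # -2147483648 (Integer.MIN_VALUE) tells the MySQL driver to stream rows
    # one at a time instead of buffering the whole result set on the heap
    sqoop import \
        --fetch-size=-2147483648 \
        --connect jdbc:mysql://dbhost/mydb \
        --table my_table \
        --target-dir /data/my_table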

Re: CDH 5.2 sqoop import job runs out of memory

Expert Contributor

Yes, I've found it in a nearby thread.