Support Questions

Find answers, ask questions, and share your expertise

Sqoop free-form query import to HBase issue: when using multiple mappers it imports the data multiple times

Super Collaborator

Sqoop free-form query import to HBase has an issue: when using multiple mappers it imports the data multiple times.

sqoop import --connect "jdbc:sqlserver://;database=;username=;password=" --query 'select top 100000 * from where $CONDITIONS' --split-by ID --hbase-table --column-family info --hbase-create-table -m 4

This import query is importing 400000 records in place of 100000.

1 ACCEPTED SOLUTION

Master Mentor
@Krishna Srinivas

Please see https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_free_form_query_imports

If you want to import the results of a query in parallel, then each map task will need to execute a copy of the query, with results partitioned by bounding conditions inferred by Sqoop. Your query must include the token $CONDITIONS which each Sqoop process will replace with a unique condition expression. You must also select a splitting column with --split-by.
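To make the mechanism concrete, here is a small illustrative sketch (not Sqoop's actual code; the function name and the ID bounds are hypothetical) of how a free-form query's $CONDITIONS token is replaced with a different bounding predicate on the --split-by column for each map task:

```python
# Hypothetical sketch: how Sqoop-style parallel import expands one
# free-form query into one bounded query per mapper. Each mapper gets
# a disjoint range of the --split-by column.

def split_queries(query, split_col, lo, hi, num_mappers):
    """Replace $CONDITIONS with a range predicate for each mapper."""
    step = (hi - lo + num_mappers - 1) // num_mappers  # ceiling division
    queries = []
    for i in range(num_mappers):
        start = lo + i * step
        end = min(start + step, hi)
        cond = f"{split_col} >= {start} AND {split_col} < {end}"
        queries.append(query.replace("$CONDITIONS", cond))
    return queries

# Assumed example: IDs range from 0 to 1,000,000, 4 mappers.
qs = split_queries(
    "SELECT TOP 100000 * FROM orders WHERE $CONDITIONS",
    "ID", 0, 1000000, 4)
for q in qs:
    print(q)
```

Note that the TOP clause survives in every copy, which is why each of the four mappers can return up to 100000 rows of its own split.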

Also, there is a possibility of seeing odd results when the query is run in parallel. I don't have the JIRA details.


5 REPLIES

Master Mentor
@Krishna Srinivas

I don't see a table name after FROM; was that intentional?

With single quotes, $CONDITIONS does not need to be escaped:

--query 'select top 100 * from dbo.[Orders] where $CONDITIONS' \
   --split-by callID -m 10

Here's a good resource https://danieladeniji.wordpress.com/category/technical/hadoop/sqoop/

Super Collaborator

@Artem Ervits the missing table name was my typo; the name is provided in the actual query. The query does not throw any error while importing, but it imports more than 100 records. Strictly speaking, the number of records imported into HBase is multiplied by the number of mappers allocated to the task. For example: if TOP 100 is selected and 4 mappers are running, 400 records are imported into the HBase table.

Master Mentor
@Krishna Srinivas

Please see https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_free_form_query_imports

If you want to import the results of a query in parallel, then each map task will need to execute a copy of the query, with results partitioned by bounding conditions inferred by Sqoop. Your query must include the token $CONDITIONS which each Sqoop process will replace with a unique condition expression. You must also select a splitting column with --split-by.

Also, there is a possibility of seeing odd results when the query is run in parallel. I don't have the JIRA details.

Super Collaborator

@Neeraj Sabharwal So I take it as a best practice to import using a single mapper when using free-form queries?

Master Guru

Hi @Krishna Srinivas, using multiple mappers is good practice for free-form queries too; however, you have to keep in mind what your free-form query is doing. Each mapper runs a copy of the query with additional WHERE conditions that split the table based on the "--split-by" column. So, in your case each mapper returns 100k records for its own split, for a total of 400k. If you want 100k in total, you should use "TOP 25000 ...". For the majority of free-form queries, like "WHERE a>100 AND b>300", you don't have to worry about the number of records.
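The multiplication effect described above can be sketched with a toy simulation (illustrative only; the table, split bounds, and helper function are assumptions, not Sqoop internals). Each mapper filters its split first and then applies TOP, so the row cap applies per mapper, not per table:

```python
# Toy simulation of why "TOP n" in a parallel free-form import
# multiplies the row count by the number of mappers: the TOP limit
# is applied independently inside each mapper's split.

def run_mapper(rows, top_n, lo, hi):
    """One mapper: filter its split range, then apply TOP."""
    split = [r for r in rows if lo <= r < hi]
    return split[:top_n]

rows = list(range(1000))      # pretend table with IDs 0..999
mappers = 4
top_n = 100
step = len(rows) // mappers   # each mapper gets a 250-row split

imported = []
for i in range(mappers):
    imported += run_mapper(rows, top_n, i * step, (i + 1) * step)
print(len(imported))          # 400, not 100: TOP is applied per mapper

# Dividing the TOP value by the mapper count caps the total instead:
fixed = []
for i in range(mappers):
    fixed += run_mapper(rows, top_n // mappers, i * step, (i + 1) * step)
print(len(fixed))             # 100
```

This matches the advice above: with 4 mappers and a desired total of 100k rows, use TOP 25000 so that the per-mapper limits sum to the intended count.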