Member since: 12-10-2015
Posts: 43
Kudos Received: 39
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4652 | 02-04-2016 01:37 AM |
| | 15774 | 02-03-2016 02:03 AM |
| | 7172 | 01-26-2016 08:00 AM |
05-10-2016
01:13 AM
Unfortunately, I can't help you there. It's not something I've tried. You should create a new question for that to get better visibility and hopefully attract answers.
11-10-2017
06:53 AM
@Luis Antonio Torres Listing databases (Sqoop eval) only requires the NameNode to have access to SQL Server, whereas for import/export (MapReduce tasks) the DataNodes also need access to SQL Server. I know your issue is resolved and this is a late reply, but someone else might be looking for this in the future.
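As a rough illustration (the hostname, credentials, and table name below are placeholders, not taken from the original thread), the two operations exercise different parts of the cluster:

```bash
# sqoop eval issues the query directly over JDBC, so only the node
# submitting it needs network access to the SQL Server host.
sqoop eval \
  --connect "jdbc:sqlserver://sqlserver.example.com:1433;database=mydb" \
  --username sqoop_user -P \
  --query "SELECT name FROM sys.databases"

# sqoop import runs MapReduce tasks across the cluster, so every node
# executing a map task must also be able to reach SQL Server.
sqoop import \
  --connect "jdbc:sqlserver://sqlserver.example.com:1433;database=mydb" \
  --username sqoop_user -P \
  --table my_table \
  --target-dir /user/sqoop_user/my_table
```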
03-15-2016
07:45 PM
1 Kudo
The Sqoop error 'No columns to generate' can also occur if your Sqoop job is unable to determine which JDBC driver to use. Try adding --driver com.microsoft.sqlserver.jdbc.SQLServerDriver along with the rest of your sqoop import parameters. Hope this helps.
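For reference, a minimal sketch of such an import (the connection string, credentials, and table name are placeholders):

```bash
# Naming the JDBC driver class explicitly lets Sqoop read the table's
# column metadata instead of failing with "No columns to generate".
sqoop import \
  --connect "jdbc:sqlserver://sqlserver.example.com:1433;database=mydb" \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --username sqoop_user -P \
  --table my_table \
  --target-dir /user/sqoop_user/my_table
```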
08-13-2016
11:31 PM
Hey @Luis Antonio Torres, is the workaround changing the port from 8050 to 8032? Please point it out. Thanks!
12-11-2015
06:07 AM
1 Kudo
I see! They probably could have phrased the documentation better, IMHO.

> you will not want to hardcode master in the program, but rather launch the application with spark-submit and receive it there

The above quote from the documentation was never actually clear to me; I thought I had to "receive" the master URL by reading it in the code from some configuration or parameter and then setting master myself. Thanks for clearing that up!
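For anyone finding this later, a minimal sketch of what that means in practice (the class name, jar, and master value are placeholders): the application builds its SparkConf without calling setMaster, and the master URL is supplied only at launch time.

```bash
# The code creates its SparkConf/SparkContext without setMaster();
# spark-submit injects the master URL when the job is launched.
spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --deploy-mode cluster \
  my-app.jar
```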