Member since: 12-10-2015
Posts: 43
Kudos Received: 39
Solutions: 3

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4652 | 02-04-2016 01:37 AM |
| | 15772 | 02-03-2016 02:03 AM |
| | 7172 | 01-26-2016 08:00 AM |
02-03-2016
01:56 AM
2 Kudos
@Artem Ervits I never got Sqoop's hive-import to work in an Oozie workflow, so I came up with a workaround instead. I'll provide my workaround as an answer. Thanks.
02-03-2016
12:05 AM
@Artem Ervits yes, this has been resolved, but I hadn't accepted my own answer because we never found out what exactly was wrong. The issue simply stopped occurring after we reinstalled HDP on a server with no virtualization; I detail this in my answer. In any case, I've now accepted my own answer as a reference, for the benefit of others. Thanks!
01-28-2016
02:39 AM
2 Kudos
@David Tam no, it was in "Running" state before getting killed. The yarn.resourcemanager.address setting in our YARN configs is set to port 8050, so I'm not really sure why there was an attempt to connect to 8032. I tried yarn-client mode, but I still get the same error.
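As a quick sanity check, something like the following can confirm which port the ResourceManager is configured on (a sketch assuming the usual HDP config location; 8032 is the stock Hadoop default that a client falls back to when it doesn't pick up the cluster's yarn-site.xml):

    # Show the ResourceManager address configured on the cluster
    # (Ambari/HDP typically sets 8050; plain Hadoop defaults to 8032)
    grep -A 1 "yarn.resourcemanager.address" /etc/hadoop/conf/yarn-site.xml

    # Verify that the ResourceManager is actually listening on that port
    netstat -tlnp | grep -w 8050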
01-26-2016
08:00 AM
I never heard back from our network team regarding the firewall logs, but our NameNode's OS got corrupted and had to be reformatted, and HDP reinstalled. For some reason we're no longer encountering this error. One difference between the original cluster and the new installation: we previously had 4 nodes (1 NameNode and 3 DataNodes) virtualized on a single server, whereas now we're running a single-node cluster (HDP 2.3.4) with no virtualization on the server.
01-20-2016
12:54 AM
@Scott Shaw I removed mapred.task.timeout like you suggested, but I'm still getting the same results. As for the JDBC driver, it's the same one we downloaded and placed in the Sqoop lib. I'm able to connect to the SQL Server with no problems via SQL Server Management Studio on my workstation; the workstation and the SQL Server are both in the same corporate network, whereas the Hadoop cluster is on a separate network. I actually had to explicitly ask our network guys to allow traffic from the cluster to the SQL Server, so I'm starting to suspect there's something in the network causing this issue.
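For reference, a quick way to double-check that the driver jar really is where Sqoop expects it (the path below is the typical HDP client layout and the jar name is the stock Microsoft driver; both are assumptions, not quoted from this thread):

    # List any SQL Server JDBC driver jars visible to the Sqoop client
    ls /usr/hdp/current/sqoop-client/lib/ | grep -i sqljdbc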
01-19-2016
01:17 AM
@Neeraj Sabharwal thanks for your response. I checked out the link you shared, but it doesn't seem to be the same problem. I'm able to connect to the SQL Server; it's just that I always see that error, particularly in between map tasks. In any case, I'm coordinating with our network guys as well. I can't be sure, but I believe the SQL Server resides in a different corporate network from the Hadoop cluster. I'll update my post once I have further details.
01-19-2016
01:10 AM
@Scott Shaw I pinged the SQL Server and saw no dropped packets. I can also telnet to port 1433, and nmap likewise reports the port as open. However, going the other way around, I'm unable to ping the Hadoop cluster from the SQL Server. I'm not sure whether that's significant.
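For reference, the checks above amount to something like the following, run from one of the cluster nodes (x.x.x.x stands for the SQL Server address, as in the rest of the thread):

    # Basic reachability from the Hadoop cluster to the SQL Server
    ping -c 4 x.x.x.x

    # Can we open a TCP connection to the SQL Server port?
    telnet x.x.x.x 1433

    # Confirm the port state independently with nmap
    nmap -p 1433 x.x.x.x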
01-18-2016
06:46 AM
1 Kudo
On HDP 2.3.2 with Sqoop 1.4.6, I'm trying to import tables from SQL Server 2008. I'm able to connect to the SQL Server successfully, since I can list databases and tables etc. However, every single import runs into the following error:

    Error: java.lang.RuntimeException: java.lang.RuntimeException:
    com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection
    to the host x.x.x.x, port 1433 has failed. Error: "connect timed
    out. Verify the connection properties. Make sure that an instance of
    SQL Server is running on the host and accepting TCP/IP connections at
    the port. Make sure that TCP connections to the port are not blocked
    by a firewall.".

Again, I am actually able to import from SQL Server successfully, but only after a couple of retries. However, regardless of whether the import succeeds or fails, I always get the error above, and I'm wondering what could be causing it. It's rather cumbersome to have to keep repeating the imports whenever they fail.

I've already turned off the connection time-out on the SQL Server, and although the connection between the Hadoop cluster and the SQL Server passes through our corporate firewall, our admins tell me the timeout on the firewall is 3600 seconds. The imports fail long before getting anywhere near that mark.

Here's an example of one of the sqoop commands I use:

    sqoop import \
      -D mapred.task.timeout=0 \
      --connect "jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical" \
      --table MsgCallRefusal \
      --username hadoop \
      --password-file hdfs:///user/sqoop/.adg.password \
      --hive-import \
      --hive-overwrite \
      --create-hive-table \
      --split-by TimeStamp \
      --hive-table develop.oozie \
      --map-column-hive Call_ID=STRING,Stream_ID=STRING,AgentGlobal_ID=STRING

Update: After getting in touch with our network team, it seems this is most definitely a network issue. For context, the Hadoop cluster is on a different VLAN from the SQL Server, and the traffic between them passes through a number of firewalls. To test, I tried importing from a different SQL Server within the same VLAN as the Hadoop cluster, and I didn't encounter this exception at all.
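A rough sketch of the same-VLAN test mentioned in the update; the host, database, and table names here are placeholders rather than the actual ones used:

    # Same style of import, but pointed at a SQL Server in the same VLAN
    # as the Hadoop cluster; this variant did not hit the TCP/IP error.
    sqoop import \
      -D mapred.task.timeout=0 \
      --connect "jdbc:sqlserver://samevlan-host:1433;database=SomeDB" \
      --table SomeTable \
      --username hadoop \
      --password-file hdfs:///user/sqoop/.adg.password \
      --split-by TimeStamp \
      --target-dir /tmp/samevlan-import-test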
Labels:
- Apache Sqoop
01-05-2016
05:20 AM
2 Kudos
I tried your suggestion but I'm still getting the same result. In any case, I don't think it could have been the connection URL because, as I mentioned above, the command works when used via the command line. It's only when I run it through Oozie that I encounter this error.
01-05-2016
03:38 AM
2 Kudos
I'm trying to test Oozie's Sqoop action in the following environment:

- HDP 2.3.2
- Sqoop 1.4.6
- Oozie 4.2.0

Via the command line, the following sqoop command works:

    sqoop import \
      -D mapred.task.timeout=0 \
      --connect "jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical" \
      --table MsgCallArrival \
      --username hadoop \
      --password-file hdfs:///user/sqoop/.adg.password \
      --hive-import \
      --create-hive-table \
      --hive-table develop.oozie \
      --split-by TimeStamp \
      --map-column-hive Call_ID=STRING,Stream_ID=STRING

But when I try to execute the same command via Oozie, I run into `java.io.IOException: No columns to generate for ClassWriter`.

Below are my `job.properties` and `workflow.xml`:

    nameNode=hdfs://host.vitro.com:8020
    jobTracker=host.vitro.com:8050
    projectRoot=${nameNode}/user/${user.name}/tmp/sqoop-test/
    oozie.use.system.libpath=true
    oozie.wf.application.path=${projectRoot}

    <workflow-app name="sqoop-test-wf" xmlns="uri:oozie:workflow:0.4">
        <start to="sqoop-import"/>
        <action name="sqoop-import" retry-max="10" retry-interval="1">
            <sqoop xmlns="uri:oozie:sqoop-action:0.2">
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <command>import -D mapred.task.timeout=0 --connect jdbc:sqlserver://x.x.x.x:1433;database=CEMHistorical --table MsgCallArrival --username hadoop --password-file hdfs:///user/sqoop/.adg.password --hive-import --create-hive-table --hive-table develop.oozie --split-by TimeStamp --map-column-hive Call_ID=STRING,Stream_ID=STRING</command>
            </sqoop>
            <ok to="end"/>
            <error to="errorcleanup"/>
        </action>
        <kill name="errorcleanup">
            <message>Sqoop Test WF failed. [${wf:errorMessage(wf:lastErrorNode())}]</message>
        </kill>
        <end name="end"/>
    </workflow-app>

I've attached the full log, but here's an excerpt:

    2016-01-05 11:29:21,415 ERROR [main] tool.ImportTool (ImportTool.java:run(613)) - Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
        at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1651)
        at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
        at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:197)
        at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:177)
        at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
        at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:46)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

I've been struggling with this problem for quite some time now; any help would be greatly appreciated!
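Not part of the original post, but a common first thing to rule out with `No columns to generate for ClassWriter` under Oozie is whether the SQL Server JDBC driver jar is visible to the Sqoop action at all. A sketch of the two usual ways to make it available; the jar name, user, sharelib timestamp, and Oozie host below are placeholders:

    # Option 1: put the driver jar in the workflow's lib/ directory,
    # next to workflow.xml under the application path from job.properties
    hdfs dfs -mkdir -p /user/<user>/tmp/sqoop-test/lib
    hdfs dfs -put sqljdbc4.jar /user/<user>/tmp/sqoop-test/lib/

    # Option 2: add it to the Oozie sharelib for Sqoop and refresh the sharelib
    hdfs dfs -put sqljdbc4.jar /user/oozie/share/lib/lib_<timestamp>/sqoop/
    oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate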
Labels: