sqoop import from oracle fails

Explorer

Hello gurus,

 

I'm getting an error while offloading an Oracle table to HDFS. Here is the command:

 

sqoop import -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
--password welcome1 \
--table TB_RECHARGE_DIM_APPLICATION \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
--m 1

 

It throws an error:

Warning: /opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/01/10 14:27:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.10.1
18/01/10 14:27:24 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/01/10 14:27:24 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.5c5
18/01/10 14:27:25 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
18/01/10 14:27:25 INFO manager.SqlManager: Using default fetchSize of 1000
18/01/10 14:27:25 INFO tool.CodeGenTool: Beginning code generation
18/01/10 14:27:29 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:29 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM TB_RECHARGE_DIM_APPLICATION t WHERE 1=0
18/01/10 14:27:29 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/s/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/01/10 14:27:31 INFO orm.CompilationManager: Writing jar file: /tmp/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.jar
18/01/10 14:27:32 INFO tool.ImportTool: Destination directory /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test deleted.
18/01/10 14:27:32 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO mapreduce.ImportJobBase: Beginning import of TB_RECHARGE_DIM_APPLICATION
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/01/10 14:27:34 INFO hdfs.DFSClient: Created token for hnat_sqm: HDFS_DELEGATION_TOKEN owner=hnat_sqm@TELKOMSEL.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849 on ha-hdfs:nameservice1
18/01/10 14:27:34 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for hnat_sqm: HDFS_DELEGATION_TOKEN owner=hnat_sqm@TELKOMSEL.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:10 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 33367ms (threshold=30000ms). File being written: /user/hnat_sqm/.staging/job_1508590044386_4156415/libjars/commons-lang3-3.4.jar, block: BP-673686138-10.54.0.2-1453972538527:blk_3947617000_2874005894, Write pipeline datanodes: [DatanodeInfoWithStorage[10.54.1.110:50010,DS-bfb333fb-f63f-4c85-b60f-3ce0889fe16d,DISK], DatanodeInfoWithStorage[10.54.0.187:50010,DS-5c692f55-614c-4d33-9e83-0758d2d54555,DISK], DatanodeInfoWithStorage[10.54.0.183:50010,DS-8530593e-b498-455e-9aaa-b1a12c8ec3b2,DISK]]
18/01/10 14:28:13 INFO db.DBInputFormat: Using read commited transaction isolation
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: number of splits:1
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1508590044386_4156415
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for hnat_sqm: HDFS_DELEGATION_TOKEN owner=hnat_sqm@TELKOMSEL.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:15 INFO impl.YarnClientImpl: Submitted application application_1508590044386_4156415
18/01/10 14:28:15 INFO mapreduce.Job: The url to track the job: https://host:8090/proxy/application_1508590044386_4156415/
18/01/10 14:28:15 INFO mapreduce.Job: Running job: job_1508590044386_4156415
18/01/10 14:28:28 INFO mapreduce.Job: Job job_1508590044386_4156415 running in uber mode : false
18/01/10 14:28:28 INFO mapreduce.Job:  map 0% reduce 0%
18/01/10 14:29:38 INFO mapreduce.Job: Task Id : attempt_1508590044386_4156415_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
    ... 10 more
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:673)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:715)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:385)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:30)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:564)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
    ... 11 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
    at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:445)
    at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:464)
    at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:594)
    at oracle.net.ns.NSProtocol.connect(NSProtocol.java:229)
    at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1360)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:486)
    ... 19 more
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:162)
    at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
    at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
    ... 24 more

 

I don't know why the network can't establish a connection when the job launches successfully (that means Sqoop was able to connect and confirm the Oracle table exists, right?). The map task never finishes.

 

Any ideas about this? Thank you.

3 REPLIES

Contributor

Hi,

 

When Sqoop connects to a DB to describe an object, it does so from the host where Sqoop is run. But the actual data extract runs on the cluster: can the worker nodes also connect to the DB? Sometimes firewall rules get in the way here. That matches your log: the metadata query (SELECT ... WHERE 1=0) succeeded from the Sqoop host, but the map task on a worker node hit a connect timeout.

 

If you can't fix connectivity from the cluster to the DB, and you're just running a single map job, you could also run the extract in local mode with "-jt local" to run the whole extract on the sqoop host.
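For example, here is a sketch based on the original command (host, instance, and paths taken from the question). Note that generic Hadoop options such as -jt and -D must come before the tool-specific arguments; I've also swapped --password for -P, as the warning in your log suggests:

sqoop import -jt local -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
-P \
--table TB_RECHARGE_DIM_APPLICATION \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
-m 1

With -jt local, the MapReduce job runs in the local runner on the Sqoop host itself, so only that host needs database connectivity.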

 

Regards,

Mark

New Contributor

Expert Contributor

Hi,

 

Looks like there is a connectivity issue from the Node Managers (NMs) to the Oracle database.

 

Can you please run the command below from all of the Node Managers?

 

Note: You need to install the telnet utility prior to running this command.

 

## telnet <oracle full hostname> <oracle port number>

 

Kindly check whether all of the NMs are able to connect. If any of them cannot connect, please check with your network team.
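If you have many Node Managers, a small shell loop can automate this check. Below is a minimal sketch, assuming passwordless SSH to the workers; the NM hostnames are hypothetical placeholders, and 1521 is only the default Oracle listener port, so substitute your actual values. It uses bash's built-in /dev/tcp, which avoids needing telnet installed on every node:

for nm in nm01 nm02 nm03; do    # hypothetical Node Manager hostnames
  # open a TCP connection to the Oracle listener from each NM;
  # a 5-second timeout flags hosts that are blocked by a firewall
  if ssh "$nm" "timeout 5 bash -c 'cat < /dev/null > /dev/tcp/someExadataHostname/1521'"; then
    echo "$nm: connection OK"
  else
    echo "$nm: connection FAILED"
  fi
done

Any host that reports FAILED is the one to raise with the network team.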

 

Regards

Nitish