My Sqoop export is failing for a couple of tables.

Hi,

I run a Hadoop processing job that picks up data from one database and exports it to SQL Server. The export fails for a couple of tables. I checked the YARN logs and I see this error:

 

LogType:stderr

Log Upload Time:Fri Jan 31 03:49:00 -0500 2020

LogLength:0

Log Contents:

End of LogType:stderr



LogType:stdout

Log Upload Time:Fri Jan 31 03:49:00 -0500 2020

LogLength:0

Log Contents:

End of LogType:stdout



LogType:syslog

Log Upload Time:Fri Jan 31 03:49:00 -0500 2020

LogLength:6334

Log Contents:

2020-01-31 00:28:01,652 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2020-01-31 00:28:01,719 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2020-01-31 00:28:01,720 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started

2020-01-31 00:28:01,722 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:

2020-01-31 00:28:01,722 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1580258449267_103956, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@30bce90b)

2020-01-31 00:28:01,800 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.

2020-01-31 00:28:02,157 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /hdfs/app/local.hdprd-c01-r06-08.cisco.com.logs/usercache/phodisvc/appcache/application_1580258449267_103956

2020-01-31 00:28:02,256 INFO [main] com.pepperdata.supervisor.agent.resource.O: Set a new configuration for the first time.

2020-01-31 00:28:02,348 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem.getGlobalStorageStatistics

2020-01-31 00:28:02,349 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem$Statistics.getBytesReadLocalHost

2020-01-31 00:28:02,372 INFO [main] com.pepperdata.supervisor.agent.resource.u: Scheduling statistics report every 2000 millisecs

2020-01-31 00:28:02,522 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Shuffle URL path prefix: /mapOutput

2020-01-31 00:28:02,522 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Initialized shuffle handler, starting uncontrolled.

2020-01-31 00:28:02,554 INFO [main] org.apache.hadoop.mapred.Task: mapOutputFile class: org.apache.hadoop.mapred.MapRFsOutputFile

2020-01-31 00:28:02,554 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id

2020-01-31 00:28:02,581 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]

2020-01-31 00:28:02,701 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/contracts_passim_data_platform_mood_db_p01/000000_0:0+158703

2020-01-31 00:28:02,705 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file

2020-01-31 00:28:02,705 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start

2020-01-31 00:28:02,705 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length

2020-01-31 00:28:03,163 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerExportDBExecThread: Error executing statement: java.sql.BatchUpdateException: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value.

2020-01-31 00:28:03,163 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Trying to recover from DB write failure:

java.sql.BatchUpdateException: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value.

        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)

        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)

2020-01-31 00:28:03,164 WARN [Thread-12] org.apache.sqoop.mapreduce.db.SQLServerConnectionFailureHandler: Cannot handle error with SQL State: S0003

2020-01-31 00:28:03,165 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Failed to write records.

java.io.IOException: Registered handler cannot recover error with SQL State: S0003, error code: 242

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)

Caused by: java.sql.BatchUpdateException: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value.

        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)

        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)

        ... 1 more

2020-01-31 00:28:03,165 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Got exception in update thread: java.io.IOException: Registered handler cannot recover error with SQL State: S0003, error code: 242

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)

Caused by: java.sql.BatchUpdateException: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value.

        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)

        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)

        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)

        ... 1 more



2020-01-31 00:28:03,169 INFO [Thread-13] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false

2020-01-31 00:28:03,173 ERROR [main] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Asynchronous writer thread encountered the following exception: java.io.IOException: Registered handler cannot recover error with SQL State: S0003, error code: 242

End of LogType:syslog
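From what I've read, the error suggests a range problem: SQL Server's legacy datetime type only accepts dates from 1753-01-01 through 9999-12-31, while datetime2 (and the timestamps coming from the source) can go back to 0001-01-01. This is only my assumption, since the offending rows don't appear in the log, but a quick sketch of the check:

```python
from datetime import datetime

# SQL Server's legacy datetime type only covers 1753-01-01 through 9999-12-31,
# while datetime2 accepts dates back to 0001-01-01. A source value before the
# minimum triggers the "out-of-range value" error on insert.
SQLSERVER_DATETIME_MIN = datetime(1753, 1, 1)

def fits_sqlserver_datetime(ts: datetime) -> bool:
    """Return True if ts can be stored in a SQL Server datetime column."""
    return ts >= SQLSERVER_DATETIME_MIN

print(fits_sqlserver_datetime(datetime(2020, 1, 31)))  # True
print(fits_sqlserver_datetime(datetime(1, 1, 1)))      # False
```

So I suspect some rows in those two tables carry a default or sentinel timestamp (e.g. 0001-01-01) that the target datetime column can't hold, but I don't know how to confirm or work around it from the Sqoop side.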

 

Could anyone please help me with this? I'd really appreciate it. TIA.
