Created 10-26-2022 01:44 AM
My Sqoop import action job in Oozie is still running and never completes. What should I check?
Oozie version: 5.2.1, Hadoop version: 3.3.4, Sqoop version: 1.4.7
Please help.
>>> Invoking Sqoop command line now >>>
2022-10-26 08:26:56,510 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2022-10-26 08:26:56,543 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.7
2022-10-26 08:26:56,556 [main] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
2022-10-26 08:26:56,566 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2022-10-26 08:26:56,607 [main] INFO org.apache.sqoop.manager.MySQLManager - Preparing to use a MySQL streaming resultset.
2022-10-26 08:26:56,607 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code generation
2022-10-26 08:26:56,890 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `calendar` AS t LIMIT 1
2022-10-26 08:26:56,912 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `calendar` AS t LIMIT 1
2022-10-26 08:26:56,923 [main] INFO org.apache.sqoop.orm.CompilationManager - $HADOOP_MAPRED_HOME is not set
log4j: Finalizing appender named [EventCounter].
2022-10-26 08:26:58,284 [main] INFO org.apache.sqoop.orm.CompilationManager - Writing jar file: /tmp/sqoop-hadoop/compile/92647049c21a99ec3fe668f737f0bf1a/calendar.jar
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - It looks like you are importing from mysql.
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - This transfer can be faster! Use the --direct
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - option to exercise a MySQL-specific fast path.
2022-10-26 08:26:58,296 [main] INFO org.apache.sqoop.manager.MySQLManager - Setting zero DATETIME behavior to convertToNull (mysql)
2022-10-26 08:26:58,305 [main] INFO org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of calendar
2022-10-26 08:26:58,306 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2022-10-26 08:26:58,311 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2022-10-26 08:26:58,329 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2022-10-26 08:26:58,331 [main] WARN org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
2022-10-26 08:26:58,393 [main] INFO org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider - Connecting to ResourceManager at bigdata/172.3.031.123:8032
2022-10-26 08:26:58,496 [main] INFO org.apache.hadoop.mapreduce.JobResourceUploader - Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/oozie/.staging/job_1666660764861_0196
2022-10-26 08:26:58,659 [main] INFO org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation
2022-10-26 08:26:58,694 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-10-26 08:26:58,800 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1666660764861_0196
2022-10-26 08:26:58,801 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Executing with tokens: [Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 195 cluster_timestamp: 1666660764861 } attemptId: 1 } keyId: 1397232171)]
2022-10-26 08:26:58,980 [main] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1666660764861_0196
2022-10-26 08:26:59,016 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://bigdata:8088/proxy/application_1666660764861_0196/
2022-10-26 08:26:59,017 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1666660764861_0196
Created 01-08-2024 09:42 PM
From the provided snippet, the job with ID application_1666660764861_0196 is still in progress.
To gather more insight into the running Sqoop job, review the progress details of that specific application (application_1666660764861_0196).
Check YARN ResourceManager Web UI:
Open your web browser, navigate to the YARN ResourceManager Web UI, and look for the specific application ID mentioned in the logs (application_1666660764861_0196). The UI shows details about the running job, its progress, and any errors.
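If the web UI is not reachable, the YARN CLI reports the same information. A minimal sketch, using the application ID from the logs above:

```shell
# Query the overall status (state, progress, tracking URL) of the
# application reported in the logs
APP_ID="application_1666660764861_0196"
yarn application -status "$APP_ID"

# List all applications still running, to spot other stuck jobs
yarn application -list -appStates RUNNING
```

Note that a Sqoop action launched by Oozie typically shows up as two YARN applications: the Oozie launcher and the actual Sqoop MapReduce job, so check both.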
Review Hadoop Cluster Logs:
Examine the Hadoop cluster logs for any potential issues. Hadoop logs can provide insights into resource constraints, node failures, or other problems that might be affecting your job.
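One way to pull those logs, assuming YARN log aggregation is enabled on the cluster, is the `yarn logs` command:

```shell
# Fetch the aggregated container logs for the application
APP_ID="application_1666660764861_0196"
yarn logs -applicationId "$APP_ID" | less

# Search the aggregated logs for errors or stack traces
yarn logs -applicationId "$APP_ID" | grep -iE "error|exception" | head -n 20
```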
Check Database Connection:
Ensure that the database you are importing from is accessible and that the connection parameters (such as username, password, JDBC URL) are correct. Sometimes, jobs can hang if there are issues with the database.
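A quick way to verify the connection parameters outside of Oozie is `sqoop eval`, which runs a trivial query against the source database. The JDBC URL, username, and host below are placeholders, not values taken from your job:

```shell
# Run a trivial query to confirm the database is reachable and the
# credentials work. Replace db-host, mydb, and myuser with your own
# values; -P prompts for the password instead of exposing it on the
# command line (matching the warning in the Sqoop log above).
sqoop eval \
  --connect jdbc:mysql://db-host:3306/mydb \
  --username myuser -P \
  --query "SELECT 1"
```

If this hangs or fails the same way, the problem is with database access rather than with Oozie or YARN.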
Verify Network Connectivity:
Ensure that there are no network issues between the cluster nodes and the database. Check for any firewalls or network restrictions that might be impacting connectivity.
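A simple reachability test from a worker node can rule out firewall problems. `db-host` is a placeholder for your database host; 3306 is the default MySQL port:

```shell
# Check that the MySQL port is reachable from this cluster node
nc -zv db-host 3306

# Alternative without nc: attempt a TCP connection with a 5s timeout
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/db-host/3306' \
  && echo "port open" || echo "port unreachable"
```

Run this from the NodeManager hosts, not just the edge node, since the map tasks are the ones opening JDBC connections.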
Resource Utilization:
Check the resource utilization on your Hadoop cluster. Ensure that there are enough resources (CPU, memory) available for the job to run.
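The YARN CLI can summarize node capacity and queue headroom; a job stuck in the ACCEPTED state with no progress is often waiting on a full queue. The queue name `default` below is an assumption, substitute the queue your Oozie job submits to:

```shell
# Show per-node resource usage and capacity
yarn node -list -showDetails

# Show capacity and current usage of the target queue; if the queue
# is at capacity, the Sqoop job's containers cannot be allocated
yarn queue -status default
```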
By checking these areas systematically, you should be able to determine why the Sqoop job is still running and address whatever is preventing it from completing.
Created 01-12-2024 03:04 AM
@yoiun, Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
Regards,
Vidya Sargur,