HADOOP: Application failed 2 times due to AM Container for exited with exitCode: 1

New Contributor

I'm trying to use Sqoop to pull data from PostgreSQL into HDFS, but I keep getting this error:
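
(The Sqoop command itself isn't included in the post. For context, a PostgreSQL-to-HDFS import of this kind is normally launched with a command of roughly the following shape; the connection details and table name here are placeholders, not values taken from the post. A single mapper matches the "number of splits:1" line in the log below.)

sqoop import \
  --connect jdbc:postgresql://dbhost:5432/sourcedb \
  --username dbuser -P \
  --table source_table \
  --target-dir /user/hadoop/source_table \
  --num-mappers 1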

2024-07-02 13:26:20,913 INFO mapreduce.JobSubmitter: number of splits:1
2024-07-02 13:26:21,183 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1719844718570_0005
2024-07-02 13:26:21,183 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-07-02 13:26:21,336 INFO conf.Configuration: resource-types.xml not found
2024-07-02 13:26:21,337 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2024-07-02 13:26:21,382 INFO impl.YarnClientImpl: Submitted application application_1719844718570_0005
2024-07-02 13:26:21,410 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1719844718570_0005/
2024-07-02 13:26:21,411 INFO mapreduce.Job: Running job: job_1719844718570_0005
2024-07-02 13:26:25,455 INFO mapreduce.Job: Job job_1719844718570_0005 running in uber mode : false
2024-07-02 13:26:25,456 INFO mapreduce.Job:  map 0% reduce 0%
2024-07-02 13:26:25,480 INFO mapreduce.Job: Job job_1719844718570_0005 failed with state FAILED due to: Application application_1719844718570_0005 failed 2 times due to AM Container for appattempt_1719844718570_0005_000002 exited with exitCode: 1
Failing this attempt. Diagnostics: [2024-07-02 13:26:24.824] Exception from container-launch.
Container id: container_1719844718570_0005_02_000001
Exit code: 1

[2024-07-02 13:26:24.826] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err:
Last 4096 bytes of stderr:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
[2024-07-02 13:26:24.827] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err:
Last 4096 bytes of stderr:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
Failing the application.
2024-07-02 13:26:25,494 INFO mapreduce.Job: Counters: 0
2024-07-02 13:26:25,502 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2024-07-02 13:26:25,503 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 18.3462 seconds (0 bytes/sec)
2024-07-02 13:26:25,508 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2024-07-02 13:26:25,509 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2024-07-02 13:26:25,509 ERROR tool.ImportTool: Import failed: Import job failed!
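
(The stderr excerpts above are capped at 4096 bytes, and the log4j warning means the ApplicationMaster's own log output was lost, so the underlying exception never reaches the console. Assuming log aggregation is enabled on the cluster, the full container logs for the failed application can be retrieved with the YARN CLI:)

# Retrieve all container logs for the failed application
yarn logs -applicationId application_1719844718570_0005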

 

I then also added the following property to both mapred-site.xml and yarn-site.xml:

<property>
  <name>yarn.application.classpath</name>
  <value>/usr/local/hadoop-3.1.3/etc/hadoop:/usr/local/hadoop-3.1.3/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.3/share/hadoop/common/*:/usr/local/hadoop-3.1.3/share/hadoop/hdfs:/usr/local/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.3/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.3/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.3/share/hadoop/yarn:/usr/local/hadoop-3.1.3/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.3/share/hadoop/yarn/*</value>
</property>
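
(For context: on Hadoop 3.x, an AM that dies with exit code 1 and nothing but a log4j warning in stderr is very often a sign that the MapReduce framework jars are missing from the AM's classpath. Alongside yarn.application.classpath, the commonly suggested companion settings in mapred-site.xml look like the sketch below, which assumes the /usr/local/hadoop-3.1.3 install prefix shown above. The output of the hadoop classpath command on the cluster is also a convenient source for the classpath value.)

<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
</property>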

Community Manager

@bigluman Welcome to the Cloudera Community!

To help you get the best possible solution, I have tagged our Hadoop experts @sandeepV2 and @Asok, who may be able to assist you further.

Please keep us updated on your post, and we hope you find a satisfactory solution to your query.


Regards,

Diana Torres,
Community Moderator


Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.