Member since: 11-17-2021
Posts: 1123
Kudos Received: 254
Solutions: 29
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 902 | 11-05-2025 10:13 AM |
| | 336 | 10-16-2025 02:45 PM |
| | 687 | 10-06-2025 01:01 PM |
| | 602 | 09-24-2025 01:51 PM |
| | 493 | 08-04-2025 04:17 PM |
07-02-2024
10:43 AM
1 Kudo
Translation:
I'm trying to use Sqoop to pull data from PostgreSQL to HDFS, but I keep getting the following error:

```
2024-07-02 13:26:20,913 INFO mapreduce.JobSubmitter: Number of splits: 1
2024-07-02 13:26:21,183 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1719844718570_0005
2024-07-02 13:26:21,183 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-07-02 13:26:21,336 INFO conf.Configuration: resource-types.xml not found
2024-07-02 13:26:21,337 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2024-07-02 13:26:21,382 INFO impl.YarnClientImpl: Submitted application application_1719844718570_0005
2024-07-02 13:26:21,410 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1719844718570_0005/
2024-07-02 13:26:21,411 INFO mapreduce.Job: Running job: job_1719844718570_0005
2024-07-02 13:26:25,455 INFO mapreduce.Job: Job job_1719844718570_0005 running in uber mode : false
2024-07-02 13:26:25,456 INFO mapreduce.Job:  map 0% reduce 0%
2024-07-02 13:26:25,480 INFO mapreduce.Job: Job job_1719844718570_0005 failed with state FAILED due to: Application application_1719844718570_0005 failed 2 times due to AM Container for appattempt_1719844718570_0005_000002 exited with exitCode: 1
Failing this attempt. Diagnostics: [2024-07-02 13:26:24.824] Exception from container-launch.
Container id: container_1719844718570_0005_02_000001
Exit code: 1

[2024-07-02 13:26:24.826] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).

[2024-07-02 13:26:24.827] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).

Failing the application.
2024-07-02 13:26:25,494 INFO mapreduce.Job: Counters: 0
2024-07-02 13:26:25,502 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2024-07-02 13:26:25,503 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 18.3462 seconds (0 bytes/sec)
2024-07-02 13:26:25,508 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2024-07-02 13:26:25,509 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2024-07-02 13:26:25,509 ERROR tool.ImportTool: Import failed: Import job failed!
```
"I then also added the following properties to mapred-site.xml and yarn-site.xml:
xml <properties > <name>yarn.application.classpath</name> <value>/usr/local/hadoop-3.1.3/etc/hadoop:/usr/local/hadoop-3.1.3/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.3/ share/hadoop/common/*:/usr/local/hadoop-3.1.3/share/hadoop/hdfs:/usr/local/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/usr/local /hadoop-3.1.3/share /hadoop/hdfs/*:/usr/local/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.3/share/hadoop /mapreduce/*:/usr/local/hadoop-3.1.3/share/hadoop/yarn:/usr/local/hadoop-3.1.3/share/hadoop/yarn/lib/*:/usr/local/hadoop- 3.1.3/share/hadoop/yarn/*</value> </property>
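On Hadoop 3.x, an ApplicationMaster container that exits with code 1 before producing any task logs is often a container classpath or environment problem, and yarn.application.classpath alone is sometimes not enough for MapReduce jobs. The sketch below shows the standard mapred-site.xml entries that are commonly set alongside it; the /usr/local/hadoop-3.1.3 location is taken from the value above, and whether these settings apply to this particular cluster is an assumption, not a confirmed fix.

```xml
<!-- Minimal sketch of mapred-site.xml entries often required on Hadoop 3.x
     so that MapReduce containers (including the AM) can locate their jars.
     Assumption: the Hadoop install lives at /usr/local/hadoop-3.1.3,
     matching the paths in the post above. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <!-- Environment for the MapReduce ApplicationMaster container -->
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
  </property>
  <property>
    <!-- Environment for map task containers -->
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
  </property>
  <property>
    <!-- Environment for reduce task containers -->
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.1.3</value>
  </property>
  <property>
    <!-- Classpath used by MapReduce containers -->
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
```

If the job still fails the same way, the AM container's stderr and syslog under the NodeManager log directory (yarn.nodemanager.log-dirs) usually name the missing class or misconfigured path directly.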
07-01-2024
10:28 AM
@kaif Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-01-2024
10:27 AM
@NidhiPal09 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-01-2024
10:27 AM
@prfbessa Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-01-2024
10:23 AM
@duyvo Welcome to the Cloudera Community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
07-01-2024
10:22 AM
@uinn Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our Hive experts @cravani @james_jones who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
06-28-2024
10:17 AM
1 Kudo
@NIFI-USER Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our NiFi experts @steven-matison @SAMSAL who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
06-28-2024
10:15 AM
1 Kudo
@Romux Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our NiFi experts @MattWho @mburgess who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
06-28-2024
10:09 AM
1 Kudo
@Azusaings As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
06-26-2024
07:52 AM
@Trilok Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our NiFi experts @MattWho @joseomjr who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.