Hadoop archive job unsuccessful
Labels:
- Apache Hadoop
- Apache Oozie
Created ‎08-04-2016 02:00 PM
I have scheduled an Oozie job to run Hadoop archive jobs. Oozie reports success, but my archives are not created. Surprisingly, there is no log to trace the root cause of the failure. What could be causing it?
Log Type: stderr
Log Upload Time: Thu Aug 04 04:02:58 +0200 2016
Log Length: 2264
16/08/04 04:02:31 INFO impl.TimelineClientImpl: Timeline service address: http://server:8188/ws/v1/timeline/
16/08/04 04:02:31 INFO client.RMProxy: Connecting to ResourceManager at server:8050
16/08/04 04:02:33 INFO impl.TimelineClientImpl: Timeline service address: http://server:8188/ws/v1/timeline/
16/08/04 04:02:33 INFO client.RMProxy: Connecting to ResourceManager at server:8050
16/08/04 04:02:33 INFO impl.TimelineClientImpl: Timeline service address: http://server:8188/ws/v1/timeline/
16/08/04 04:02:33 INFO client.RMProxy: Connecting to ResourceManager at server:8050
16/08/04 04:02:34 INFO mapreduce.JobSubmitter: number of splits:281
16/08/04 04:02:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469778114081_4192
16/08/04 04:02:34 INFO mapreduce.JobSubmitter: Kind: mapreduce.job, Service: job_1469778114081_4189, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@5b5c0057)
16/08/04 04:02:34 INFO mapreduce.JobSubmitter: Kind: RM_DELEGATION_TOKEN, Service: server:8050, Ident: (owner=hdfs, renewer=oozie mr token, realUser=oozie, issueDate=1470276000752, maxDate=1470880800752, sequenceNumber=89118, masterKeyId=139)
16/08/04 04:02:34 INFO impl.YarnClientImpl: Submitted application application_1469778114081_4192
16/08/04 04:02:34 INFO mapreduce.Job: The url to track the job: http://server:8088/proxy/application_1469778114081_4192/
16/08/04 04:02:34 INFO mapreduce.Job: Running job: job_1469778114081_4192
16/08/04 04:02:45 INFO mapreduce.Job: Job job_1469778114081_4192 running in uber mode : false
16/08/04 04:02:45 INFO mapreduce.Job: map 0% reduce 0%
16/08/04 04:02:45 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server
16/08/04 04:02:46 INFO mapreduce.Job: Job job_1469778114081_4192 failed with state FAILED due to: Job failed!
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
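Since the archive itself runs as a plain MapReduce job, the failure can be reproduced outside Oozie with the standard `hadoop archive` command. A minimal sketch (the source and destination paths here are placeholders, not taken from the thread):

```shell
# Build a Hadoop archive named data.har from /user/hdfs/input,
# writing it under /user/hdfs/archives (placeholder paths).
hadoop archive -archiveName data.har -p /user/hdfs/input /user/hdfs/archives

# Inspect the result through the har:// filesystem scheme to
# confirm the archive was actually created.
hdfs dfs -ls har:///user/hdfs/archives/data.har
```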
Created ‎08-04-2016 02:49 PM
Can you access http://server:8088/proxy/application_1469778114081_4192 ? There should be some helpful logs that tell you exactly what happened (from the MapReduce side).
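If the ResourceManager proxy page is no longer available, the aggregated container logs can usually still be retrieved from the history server with the standard YARN CLI (application ID taken from the stderr log above; this assumes log aggregation is enabled on the cluster):

```shell
# Fetch the aggregated container logs for the failed archive job;
# requires yarn.log-aggregation-enable=true in yarn-site.xml.
yarn logs -applicationId application_1469778114081_4192
```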
Created ‎08-05-2016 05:15 AM
Thank you @Ryan Cicak. I checked the syslog, and it gave a hint of issues in the HDFS staging directory. I plan to run it as the yarn user, though I would think any user should be able to run it. I can launch it successfully from the CLI as the hdfs user.
Job init failed org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://server:8020/user/hdfs/.staging/job_1469778114081_4925/job.splitmetainfo
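A FileNotFoundException on job.splitmetainfo under /user/hdfs/.staging often points at a missing or wrongly-owned staging directory for the submitting user. A sketch of the usual checks, run as an HDFS superuser (paths taken from the error above; the chown is only needed if ownership turns out to be wrong):

```shell
# Confirm the submitting user's home and staging directories
# exist and are owned by that user.
hdfs dfs -ls /user/hdfs
hdfs dfs -ls /user/hdfs/.staging

# If ownership is wrong (e.g. after jobs were submitted as a
# different user), restore it: .staging must be owned by the
# user submitting the job.
hdfs dfs -chown -R hdfs:hdfs /user/hdfs/.staging
```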
