Oozie workflow actions fail with "response from timeline server" error
https://community.cloudera.com/t5/Support-Questions/Oozie-workflow-actions-fail-with-quot-response-from-timeline/m-p/237953#M199764
Hello,

When I start an Oozie workflow, it always fails with the same error in the syslog, regardless of the action type (Sqoop, Spark, or SSH):

2019-04-08 14:54:33,393 ERROR [pool-10-thread-1] org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl: Response from the timeline server is not successful, HTTP error code: 500, Server response: {"exception":"WebApplicationException","message":"org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 280 actions: IOException: 280 times, servers with issues: null","javaClassName":"javax.ws.rs.WebApplicationException"}
2019-04-08 14:54:33,394 ERROR [Job ATS Event Dispatcher] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Exception while publishing configs on JOB_SUBMITTED Event for the job : job_1554726387894_0011
org.apache.hadoop.yarn.exceptions.YarnException: Failed while publishing entity
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher.dispatchEntities(TimelineV2ClientImpl.java:548)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putEntities(TimelineV2ClientImpl.java:149)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.publishConfigsOnJobSubmittedEvent(JobHistoryEventHandler.java:1254)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processEventForNewTimelineService(JobHistoryEventHandler.java:1414)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleTimelineEvent(JobHistoryEventHandler.java:742)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.access$1200(JobHistoryEventHandler.java:93)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1795)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1791)
    at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
    at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Response from the timeline server is not successful, HTTP error code: 500, Server response: {"exception":"WebApplicationException","message":"org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 280 actions: IOException: 280 times, servers with issues: null","javaClassName":"javax.ws.rs.WebApplicationException"}
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:322)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:251)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:374)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:367)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.publishWithoutBlockingOnQueue(TimelineV2ClientImpl.java:478)
    at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.run(TimelineV2ClientImpl.java:433)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

What is causing this error?
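From the stack trace, the failure does not seem to be inside the workflow actions themselves: the MapReduce ApplicationMaster's TimelineV2Client cannot publish job events to the YARN Timeline Service v2 collector, and the HBase instance backing ATSv2 answers with RetriesExhaustedWithDetailsException. For reference, a minimal sketch of the yarn-site.xml settings that drive this publish path; the property names are the stock Hadoop 3.1 / HDP 3.1 ones, and the values shown are illustrative defaults, not copied from our cluster:

<!-- Sketch only: ATSv2-related yarn-site.xml settings, assuming stock
     Hadoop 3.1 / HDP 3.1 property names; values are illustrative. -->
<property>
    <!-- Timeline service versions in use; 2.0f enables the v2 path seen in the trace -->
    <name>yarn.timeline-service.versions</name>
    <value>1.5f,2.0f</value>
</property>
<property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
</property>
<property>
    <!-- Lets the RM and AMs publish job/system metrics to the timeline service -->
    <name>yarn.system-metrics-publisher.enabled</name>
    <value>true</value>
</property>
<property>
    <!-- hbase-site.xml of the HBase cluster backing ATSv2 (embedded ats-hbase on HDP 3) -->
    <name>yarn.timeline-service.hbase.configuration.file</name>
    <value>file:///etc/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml</value>
</property>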


example workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="hadoop_main_workflow">

    <!-- start -->
    <start to="spark_job"/>

    <action name="spark_job" retry-max="5" retry-interval="5">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${resourceManager}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn</master>
            <mode>client</mode>
            <name>spark_job</name>
            <jar>spark_job.py</jar>
            <spark-opts>
                --master yarn
                --deploy-mode client
                --driver-memory 11288m
                --executor-memory 24GB
                --num-executors 8
                --conf spark.dynamicAllocation.enabled=true
                --conf spark.executor.cores=2
                --conf spark.shuffle.service.enabled=true
                --conf spark.yarn.driver.memoryOverhead=1024
                --conf spark.yarn.executor.memoryOverhead=1024
                --jars /usr/hdp/3.1.0.0-78/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.1.0.0-78.jar
                --conf spark.security.credentials.hiveserver2.enabled=false
                --py-files /usr/hdp/3.1.0.0-78/hive_warehouse_connector/pyspark_hwc-1.0.0.3.1.0.0-78.zip
            </spark-opts>
            <file>spark_job.py</file>
        </spark>
        <ok to="end"/>
        <error to="kill_job"/>
    </action>

    <kill name="kill_job">
        <message>Job failed</message>
    </kill>
    <end name="end"/>

</workflow-app>

job.properties:

nameNode=hdfs://namenodehost:8020
resourceManager=namenodehost:8050
# YARN queue the launcher jobs are submitted to
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/oozie/workflows/hadoop_main_workflow
oozie.action.sharelib.for.sqoop=sqoop
oozie.action.sharelib.for.spark=spark2

Stack:
HDP 3.1.0
Oozie 4.3.1.3.1.0.0-78
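The workflow is submitted with the standard Oozie CLI; "oozieserver" below is a placeholder for the actual Oozie server host:

oozie job -oozie http://oozieserver:11000/oozie -config job.properties -run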
Posted by gpjalocha on Tue, 09 Apr 2019 21:44:48 GMT

