<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Measuring Spark job performance in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Measuring-Spark-job-performance/m-p/308936#M223728</link>
    <description>&lt;P&gt;Thanks, that's very helpful input for debugging performance issues and tuning jobs. As far as I can see, there aren't any metrics that provide a simple way to track overall job performance over time on a shared cluster. Aggregated resource usage seems the closest thing, but on a shared cluster I think I just need to accept that it can vary widely between two identical job runs depending on the state of the cluster, so while it gives some indication of job performance, it's not a panacea.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was looking for something like what I would usually track for a web app (e.g. response times of a web server under a given load), which helps me spot when performance regressions happen. I guess that's not such a straightforward thing to do with the kinds of workloads Spark handles!&lt;/P&gt;</description>
    <pubDate>Wed, 06 Jan 2021 11:26:22 GMT</pubDate>
    <dc:creator>TimmehG</dc:creator>
    <dc:date>2021-01-06T11:26:22Z</dc:date>
  </channel>
</rss>