<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Question: Method for recovering from a full HDD due to Ambari Metrics Collector? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100891#M13642</link>
    <description>&lt;P&gt;I recently set up HDP (HBase) on a single VM with ~15GB of space. The installation went fine, but after ~2 months the system ran out of HDD space. I'd like a method for clearing out or truncating the metrics. While researching this, I drilled down to the directory where the bulk of the space is being used:&lt;/P&gt;&lt;PRE&gt;$ du -sh /var/lib/ambari-metrics-collector/hbase/data/default/*  | sort -rh | head -5
7.1G   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE
403M   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE
209M   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD
76M    /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY
45M    /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY
&lt;/PRE&gt;&lt;P&gt;I've toyed with several ways of truncating these files using the `truncate -s 0 &amp;lt;file&amp;gt;` command, but this trashes the files so that they're no longer usable by AMS.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Questions&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;Is there a simple way to reset the metrics?&lt;/LI&gt;&lt;LI&gt;Is there a safe way to periodically delete the collected data, from, say, a cron job?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt; This is a small installation and I can't throw more HDD space at the problem. I'd like to keep AMS enabled if possible.&lt;/P&gt;</description>
    <pubDate>Tue, 29 Dec 2015 00:41:17 GMT</pubDate>
    <dc:creator>slm</dc:creator>
    <dc:date>2015-12-29T00:41:17Z</dc:date>
    <item>
      <title>Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100891#M13642</link>
      <description>&lt;P&gt;I recently set up HDP (HBase) on a single VM with ~15GB of space. The installation went fine, but after ~2 months the system ran out of HDD space. I'd like a method for clearing out or truncating the metrics. While researching this, I drilled down to the directory where the bulk of the space is being used:&lt;/P&gt;&lt;PRE&gt;$ du -sh /var/lib/ambari-metrics-collector/hbase/data/default/*  | sort -rh | head -5
7.1G   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE
403M   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE
209M   /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD
76M    /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY
45M    /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY
&lt;/PRE&gt;&lt;P&gt;I've toyed with several ways of truncating these files using the `truncate -s 0 &amp;lt;file&amp;gt;` command, but this trashes the files so that they're no longer usable by AMS.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Questions&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;Is there a simple way to reset the metrics?&lt;/LI&gt;&lt;LI&gt;Is there a safe way to periodically delete the collected data, from, say, a cron job?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt; This is a small installation and I can't throw more HDD space at the problem. I'd like to keep AMS enabled if possible.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2015 00:41:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100891#M13642</guid>
      <dc:creator>slm</dc:creator>
      <dc:date>2015-12-29T00:41:17Z</dc:date>
    </item>
    <item>
      <title>Re: Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100892#M13643</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1461/hortonworks.html" nodeid="1461"&gt;@Sam Mingolelli&lt;/A&gt; Which version of Ambari are you using? Ambari 2.1 does allow you to truncate, but it would be easier to remove AMS and reinstall it. We also recommend a minimum of 10 GB dedicated to AMS. See: &lt;A href="https://cwiki.apache.org/confluence/display/AMBARI/Disk+space+utilization+guidance"&gt;https://cwiki.apache.org/confluence/display/AMBAR...&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You may also want to edit your TTL settings - see &lt;A href="https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues" target="_blank"&gt;https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues&lt;/A&gt; and &lt;A href="https://cwiki.apache.org/confluence/display/AMBARI/Configuration" target="_blank"&gt;https://cwiki.apache.org/confluence/display/AMBARI/Configuration&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2015 02:53:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100892#M13643</guid>
      <dc:creator>SQLShaw</dc:creator>
      <dc:date>2015-12-29T02:53:32Z</dc:date>
    </item>
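The "remove AMS and reinstall" suggestion above amounts to resetting the collector's embedded HBase store. A minimal sketch of that reset, not from the thread itself: it assumes the data lives under the path shown in the question's `du` output, and that the Metrics Collector has already been stopped through the Ambari UI or REST API.

```shell
# Reset sketch for the AMS embedded HBase store (assumption: path taken
# from the du output in the question; stop the Metrics Collector first).
AMS_DATA=/var/lib/ambari-metrics-collector/hbase

if [ -d "$AMS_DATA" ]; then
  # Move the data aside rather than deleting it outright; AMS recreates
  # the directory tree on its next start.
  mv "$AMS_DATA" "${AMS_DATA}.bak.$(date +%Y%m%d)"
fi
# After restarting the collector and confirming metrics flow again,
# reclaim the space with: rm -rf "${AMS_DATA}.bak."*
```

Moving the directory instead of removing it keeps a rollback path until the collector is confirmed healthy.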
    <item>
      <title>Re: Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100893#M13644</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/186/sshaw.html" nodeid="186"&gt;@Scott Shaw&lt;/A&gt; - I'm using Ambari 2.0.1. I never thought of that; so I can remove AMS and then reinstall it to have it recreate its data whenever I hit the out-of-HDD-space issue? I'll look through the links to see how to dial down the TTLs for AMS. Thanks for the info!&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2015 03:36:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100893#M13644</guid>
      <dc:creator>slm</dc:creator>
      <dc:date>2015-12-29T03:36:47Z</dc:date>
    </item>
    <item>
      <title>Re: Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100894#M13645</link>
      <description>&lt;P&gt;BTW - this is a single-node Hortonworks installation, so it seems odd that it would require so much space. I'm going with the default options when I do a server install, too.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2015 03:39:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100894#M13645</guid>
      <dc:creator>slm</dc:creator>
      <dc:date>2015-12-29T03:39:26Z</dc:date>
    </item>
    <item>
      <title>Re: Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100895#M13646</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1461/hortonworks.html" nodeid="1461"&gt;@Sam Mingolelli&lt;/A&gt; Agreed, but it's only because those are the minimum TTL settings. Change the TTL values and you should be able to get by with less space. It took two months to fill up for you; I've seen it fill up in days on multi-node clusters.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Dec 2015 05:14:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100895#M13646</guid>
      <dc:creator>SQLShaw</dc:creator>
      <dc:date>2015-12-29T05:14:48Z</dc:date>
    </item>
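The TTL values discussed in this reply live in the AMS `ams-site` configuration (Ambari UI: Ambari Metrics &gt; Configs). A hedged sketch of the relevant properties follows; the property names and which table each TTL governs vary by Ambari version, and the values below are illustrative, not recommendations, so verify against your own `ams-site` before editing.

```xml
<!-- ams-site fragment (illustrative values, in seconds). Verify property
     names against your Ambari version before applying. -->
<property>
  <name>timeline.metrics.host.aggregator.ttl</name>
  <value>86400</value>
  <!-- precision-level host metrics (METRIC_RECORD): keep ~1 day -->
</property>
<property>
  <name>timeline.metrics.cluster.aggregator.minute.ttl</name>
  <value>259200</value>
  <!-- cluster-level aggregates (the large METRIC_AGGREGATE table above): keep ~3 days -->
</property>
```

Note that lowering a TTL only bounds future retention: already-written rows expire as HBase compactions run, so disk usage shrinks gradually after a collector restart rather than immediately.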
    <item>
      <title>Re: Method for recovering from a full HDD due to Ambari Metrics Collector?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100896#M13647</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/1461/hortonworks.html" nodeid="1461"&gt;@Sam Mingolelli&lt;/A&gt; has this been addressed? Please accept the best answer or provide your own solution.&lt;/P&gt;</description>
      <pubDate>Sat, 06 Feb 2016 04:15:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Method-for-recovering-from-a-full-HDD-due-to-Ambari-Metrics/m-p/100896#M13647</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2016-02-06T04:15:35Z</dc:date>
    </item>
  </channel>
</rss>