<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDFS NameNode Capacity Alert on Fresh Install in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217022#M178931</link>
    <description>&lt;P&gt;During the initial setup of HDFS using Ambari, configuring the data directory as /home/hadoop/hdfs/data, or any path under /home for that matter, is not permitted. After further digging and research, I found this is an intentional security measure to prevent HDFS from writing to the /home directory. I literally had to take the long way home.&lt;/P&gt;</description>
    <pubDate>Wed, 03 May 2017 01:43:34 GMT</pubDate>
    <dc:creator>joshua_petree</dc:creator>
    <dc:date>2017-05-03T01:43:34Z</dc:date>
    <item>
      <title>HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217013#M178922</link>
      <description>&lt;P&gt;I just installed Ambari 2.5.0.3 with HDP 2.6 and I am getting a critical alert stating my NameNode capacity is already at 100%. After running a report, I see it has only 28 KB allocated! How do I fix this?&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2017 00:25:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217013#M178922</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-02T00:25:05Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217014#M178923</link>
      <description>&lt;P&gt;Do you mean DataNode capacity? Which report? Is this a virtual or bare-metal environment? What are the hardware specs?&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2017 02:40:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217014#M178923</guid>
      <dc:creator>slachterman</dc:creator>
      <dc:date>2017-05-02T02:40:01Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217015#M178924</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/11295/slachterman.html" nodeid="11295"&gt;@slachterman&lt;/A&gt; Thank you for responding. The CRIT alert states that my NameNode capacity is maxed out, and the dfsadmin report on the NameNode says the same: all 28 KB of the DFS are used. When I run the same report on the DataNodes, I get the same result: they only have 28 KB allocated to the DFS, and they are full as well.&lt;/P&gt;&lt;P&gt;As for the setup, I am using 1x physical machine with a 1 TB HDD to run the Ambari server/Edge Server/Secondary Master, 1x VM with a 1 TB HDD to run as the Primary Master, and 3x VMs (each with a dedicated 2 TB HDD) to run as the DataNodes. The purpose of this cluster is to act as a DEV/Admin sandbox until I am given the funds to build a proper cluster.&lt;/P&gt;&lt;P&gt;On a side note, is there much difference between setting up a cluster on physical machines vs. VMs? I wouldn't think so, as both require separate networks and similar design and deployment. Then again, nothing about this setup has gone right so far...&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2017 21:28:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217015#M178924</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-02T21:28:37Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217016#M178925</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/15477/joshuapetree.html" nodeid="15477" target="_blank"&gt;@Joshua Petree&lt;/A&gt;, that is odd. It seems like the 2 TB disks on your data nodes are not properly associated with your HDFS storage.&lt;/P&gt;&lt;P&gt;What do you see in dfs.datanode.data.dir? Are you sure the listed mount points are backed by the 2 TB disks you intended? &lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="14992-screen-shot-2017-05-02-at-105133-am.png" style="width: 846px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/16197i8AD4139F01F0F74D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="14992-screen-shot-2017-05-02-at-105133-am.png" alt="14992-screen-shot-2017-05-02-at-105133-am.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 18 Aug 2019 03:04:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217016#M178925</guid>
      <dc:creator>slachterman</dc:creator>
      <dc:date>2019-08-18T03:04:27Z</dc:date>
    </item>
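The diagnostic suggested in the reply above, checking whether dfs.datanode.data.dir is actually backed by the intended 2 TB disk, can be done from the shell or from a short script. A minimal Python sketch using only the standard library (the /hadoop/hdfs/data path is the one discussed in this thread; on a machine without it, the check falls back to the root filesystem):

```python
import os
import shutil

def describe_mount(path):
    """Climb from `path` to the mount point of the filesystem
    backing it, and report that filesystem's capacity."""
    probe = os.path.abspath(path)
    while not os.path.ismount(probe):
        parent = os.path.dirname(probe)
        if parent == probe:  # reached the filesystem root
            break
        probe = parent
    usage = shutil.disk_usage(probe)
    return probe, usage.total, usage.free

# Example: check which filesystem would back the DataNode directory.
mount, total, free = describe_mount("/hadoop/hdfs/data")
print(f"{mount}: {total / 1e9:.1f} GB total, {free / 1e9:.1f} GB free")
```

If the reported mount is the small OS partition rather than the dedicated data disk, HDFS will fill up immediately, which matches the symptom described in this thread.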
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217017#M178926</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/11295/slachterman.html" nodeid="11295"&gt;@slachterman&lt;/A&gt;, each VM has the OS (CentOS 7) running on its respective HDD, partitioned accordingly. The DataNode directory points to /hadoop/hdfs/data. However, given the OS partition sizes, this folder is limited to 50 GB; the rest, just over 1.7 TB on each DataNode, is dedicated to the /home partition. I also noticed during the Ambari install that I was not allowed to point my NameNode or DataNode directories to /home. Is this the problem?&lt;/P&gt;</description>
      <pubDate>Tue, 02 May 2017 23:45:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217017#M178926</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-02T23:45:10Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217018#M178927</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/15477/joshuapetree.html" nodeid="15477"&gt;@Joshua Petree&lt;/A&gt; yes, you want dfs.datanode.data.dir to be backed by a mount point that has the majority of the data available for HDFS blocks. Please upvote or accept the above answer if this is helpful in resolving your issue. &lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2017 01:18:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217018#M178927</guid>
      <dc:creator>slachterman</dc:creator>
      <dc:date>2017-05-03T01:18:22Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217019#M178928</link>
      <description>&lt;P&gt;I figured it out. Ambari does not allow the DataNodes to use directories under /home (where the bulk of the free space sits on a default Linux install) to store data. However, I created the directory /home/hadoop/hdfs/data and placed a symlink to it in the default /hadoop/hdfs path, so while the DataNode directory still technically reads /hadoop/hdfs/data, it resolves to /home/hadoop/hdfs/data.&lt;/P&gt;&lt;P&gt;I had to do this on every DataNode. I hope they fix this in the near future. Thanks for the help!&lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2017 01:33:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217019#M178928</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-03T01:33:06Z</dc:date>
    </item>
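The symlink workaround described in the post above can be reproduced in a few lines. A minimal Python sketch, using a temporary directory to stand in for the real /hadoop and /home paths on the DataNodes:

```python
import os
import tempfile

# Stand-ins for the real paths: /home/hadoop/hdfs/data (large partition)
# and /hadoop/hdfs (the default path Ambari is configured with).
root = tempfile.mkdtemp()
real_data = os.path.join(root, "home", "hadoop", "hdfs", "data")
default_dir = os.path.join(root, "hadoop", "hdfs")
os.makedirs(real_data)
os.makedirs(default_dir)

# Place a symlink named "data" inside the default HDFS path so that
# /hadoop/hdfs/data transparently resolves to the /home-backed directory.
link = os.path.join(default_dir, "data")
os.symlink(real_data, link)

# Any write through the configured path lands on the large partition.
with open(os.path.join(link, "blk_test"), "w") as f:
    f.write("block data")

print(os.path.realpath(link))  # resolves to the /home-backed directory
```

On a real DataNode the equivalent would be a symlink at /hadoop/hdfs/data pointing to /home/hadoop/hdfs/data, created on each node before starting the DataNode service.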
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217020#M178929</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/15477/joshuapetree.html" nodeid="15477"&gt;@Joshua Petree&lt;/A&gt; can you explain further what you mean by "Ambari is not allowing the DataNodes to access the /home directories?"&lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2017 01:34:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217020#M178929</guid>
      <dc:creator>slachterman</dc:creator>
      <dc:date>2017-05-03T01:34:41Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217021#M178930</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/11295/slachterman.html" nodeid="11295"&gt;@slachterman&lt;/A&gt;, while this didn't directly help me solve the problem, it did help me think of a way to reach the solution. Thank you for the help!&lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2017 01:38:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217021#M178930</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-03T01:38:28Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS NameNode Capacity Alert on Fresh Install</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217022#M178931</link>
      <description>&lt;P&gt;During the initial setup of HDFS using Ambari, configuring the data directory as /home/hadoop/hdfs/data, or any path under /home for that matter, is not permitted. After further digging and research, I found this is an intentional security measure to prevent HDFS from writing to the /home directory. I literally had to take the long way home.&lt;/P&gt;</description>
      <pubDate>Wed, 03 May 2017 01:43:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-NameNode-Capacity-Alert-on-Fresh-Install/m-p/217022#M178931</guid>
      <dc:creator>joshua_petree</dc:creator>
      <dc:date>2017-05-03T01:43:34Z</dc:date>
    </item>
  </channel>
</rss>

