"/usr" mount issue during cluster creation using Ambari
Labels: Apache Ambari
Created on ‎08-17-2017 09:27 AM - edited ‎09-16-2022 05:06 AM
Hi Team,
I have 4 nodes in my cluster. During cluster setup, one node ran into an issue while the remaining 3 were fine.
After fixing the issue on the problem node, I clicked "Retry", which restarted configuration on all 4 nodes.
With that, new files were copied again to the other nodes as well, which ended in a disk space issue on the "/usr" mount while installing the Metrics Collector. Before I fixed the error on the problem node, this step had succeeded.
That node had 1.5 GB free on the mount before installation; now it has 300 MB.
What I want to know is why the setup copies the files again instead of resuming from the previous point of installation.
I have a constraint on adding more space to that mount point.
Please advise how I can get back to the earlier disk usage (1.5 GB free) and proceed.
How can I proceed from here?
Best regards
~Kishore
Created ‎08-17-2017 09:41 AM
Are you also installing the HDP components on the host where the /usr mount point is filling up?
The HDP components also consume a lot of space. According to the HDP installation doc, "A complete installation of HDP 2.6.0 consumes about 6.5 GB of disk space," and the HDP components and libraries are installed under "/usr/hdp", "/usr/lib", and "/var/lib".
To find out what is consuming the disk space on "/usr" (Ambari/HDP or something else), check the output of the following command. It lists the 10 directories consuming the most space:
# du -a /usr | sort -n -r | head -n 10
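Alongside `du`, it can help to see how much free space the mount itself has left; a minimal check (using the /usr mount from this thread, adjust the path as needed):

```shell
# Show total, used, and available space on the /usr mount
# in human-readable units (G/M suffixes)
df -h /usr
```

Comparing the "Avail" column before and after a retry shows exactly how much the reinstall consumed.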
Created ‎08-17-2017 09:55 AM
I have seen the disk space requirements, and I am fine with those on the other mount points; I have more than 100 GB across two of them.
But the issue is with "/usr": it is the default size we get and cannot be extended.
Here is the output of the command.
Before the install there was 1.5 GB free on that mount point, and the first attempt did not fail on that node.
Created ‎08-17-2017 05:03 PM
Team, please reply to my post. This is blocking us from proceeding with the installation.
Created ‎08-17-2017 05:04 PM
If I need to clean the previous installation from the error node, how do I do that?
Please help me with the steps.
Created ‎08-17-2017 05:06 PM
Regarding your query: "If I need to clean the previous installation from the error node, how do I do that?"
> If there is a previous installation of HDP on the host you mentioned, then you can clean it up completely by following this article: https://community.hortonworks.com/articles/97489/completely-uninstall-hdp-and-ambari.html
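The linked article has the authoritative steps. As a rough, hypothetical sketch of what such a cleanup looks like on a RHEL/CentOS node (the script only echoes the commands unless RUN=1 is set, since these steps are destructive, and the exact package and directory names depend on your stack version):

```shell
#!/bin/sh
# Hypothetical HDP/Ambari cleanup sketch -- echoes each command instead of
# executing it unless RUN=1, because these steps delete data irreversibly.
run() {
  if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run ambari-agent stop               # stop the agent before removing packages
run yum erase -y ambari-agent       # remove the Ambari agent package
run rm -rf /usr/hdp                 # remove installed HDP components
run rm -rf /var/lib/ambari-agent    # remove agent state and caches
```

The full article covers additional packages, users, and directories; this only shows the shape of the procedure.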
Created ‎08-17-2017 05:29 PM
Are you using volume groups? If yes, first you need to see whether you have any free physical extents (PEs) in your volume group (in this case vg00). Check the output of:
# vgdisplay vg00
Then, assuming you have some physical extents free, you can extend the /usr volume:
# lvextend -L +<x-amount>G /dev/vg00/usr
After that you need to resize your filesystem to reflect the new size of this "partition":
# resize2fs /dev/vg00/usr
resize2fs handles ext2, ext3, and ext4 filesystems alike (check /etc/fstab if you are unsure which one /usr uses).
If you have no PEs available, you could consider either shrinking another volume in the same group (such as home in this case) or adding another PV. Regardless, note that having a partition that is 100 GB in size does not mean you have access to 100 GB: since it is a physical volume used for LVM, it only means you have 100 GB worth of physical extents.
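Putting the steps above together, the whole sequence might look like the sketch below (assuming volume group vg00, an ext-family /usr, and a 3G extension; the script only echoes the commands unless RUN=1 is set, since volume resizing should be done deliberately as root):

```shell
#!/bin/sh
# Sketch of the extend-and-resize sequence for /usr on volume group vg00.
# Echoes each command instead of executing it unless RUN=1 is set.
VG=vg00
LV=/dev/$VG/usr

run() {
  if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run vgdisplay "$VG"        # check "Free PE / Size" for available extents
run lvextend -L +3G "$LV"  # grow the logical volume by 3G
run resize2fs "$LV"        # grow the ext2/3/4 filesystem to match
```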
Created ‎08-18-2017 05:03 AM
Thanks for the info. With that I found some free space was available to extend. As a side note, I used the following commands:
vgdisplay vg00 | grep Free --> shows how much free space is available
lvextend -L +3G /dev/mapper/vg00-usr --> extends the logical volume
resize2fs /dev/mapper/vg00-usr --> resizes the /usr filesystem
Created ‎08-18-2017 09:21 AM
Good to know! Then I deserve the credit ... just accept my response!
Created ‎08-23-2017 07:33 AM
@Geoffrey Shelton Okot Sorry, I missed the last message. Can you please let me know how to accept the response?
