Calculating File Descriptors in HBase
Labels: Apache HBase, Cloudera Manager
Created on 03-06-2019 09:13 AM - edited 09-16-2022 07:12 AM
Hello All,
I am looking for best practices or recommendations for setting the best possible value for the rlimit_fds (Maximum Process File Descriptors) property. Currently it is set to the default (32768), and we are getting File Descriptor Threshold alerts.
We would first like to determine the best possible value for rlimit_fds. Is there a formula, a practice, or a set of checks that can be performed to determine the best value?
Thanks
snm1523
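As a starting point, it can help to compare the process's actual descriptor usage against its configured limit. A minimal sketch for Linux, using `/proc` (the `pgrep -f HRegionServer` lookup in the comment is an assumption for illustration; the pid defaults to the current shell so the commands can be tried directly):

```shell
#!/bin/sh
# Inspect file-descriptor usage for a given pid (defaults to this shell).
# On a real cluster you would pass the RegionServer pid, for example:
#   pid=$(pgrep -f HRegionServer | head -n 1)
pid=${1:-$$}

# Configured soft/hard limits for open files
grep 'Max open files' "/proc/$pid/limits"

# Number of descriptors the process holds right now
ls "/proc/$pid/fd" | wc -l
```

If the current count sits close to the limit during normal operation, that is a sign the limit is genuinely too low rather than a symptom of a descriptor leak.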
Created 03-06-2019 06:33 PM
The number should be proportional to your total region store file count and the number of connecting clients. While the article at https://blog.cloudera.com/blog/2012/03/hbase-hadoop-xceivers/ focuses specifically on DataNode data transceiver threads, the formula at the end can be applied similarly to file descriptors in general.
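As a rough illustration of that proportionality (this is not the article's exact formula, and all three inputs below are assumed numbers, not measurements): if each store file and each client connection pins at least one descriptor, a back-of-the-envelope lower bound to compare against rlimit_fds might look like:

```shell
#!/bin/sh
# Illustrative estimate only: these inputs are assumptions, not measurements.
store_files=12000   # total store files hosted by the RegionServer (assumed)
clients=500         # concurrent client connections (assumed)
overhead=2048       # jars, logs, internal sockets, etc. (assumed buffer)

# Rough lower bound on descriptors the process may need at once
estimate=$((store_files + clients + overhead))
echo "$estimate"
```

If such an estimate approaches the configured limit, raising rlimit_fds (or compacting to reduce store-file counts) would be the two levers to consider.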
Created 03-07-2019 03:09 AM
Could you please suggest a quick command or script to identify avoidable open files, or files held open by a stuck process, using 'lsof', and advise on what actions to take next?
I tried running a generic 'lsof | grep java', but it unsurprisingly returned a huge list of files, which made it difficult to pick out the relevant information.
Thanks
snm1523
Created 03-07-2019 06:23 PM
In the lsof output, you should be able to classify the entries as network (sockets), filesystem (files), and so on; the interesting category is whichever holds the largest share. For example, if you see many sockets lingering, check their state (CLOSE_WAIT, etc.). If local filesystem files dominate, investigate whether those files are expected to be open.
If you can pastebin your lsof result somewhere, I can take a look.
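That classification can be sketched by grouping lsof's TYPE column (the fifth field in standard lsof output); the `pgrep -f HRegionServer` pid lookup in the usage comment is an assumption for illustration:

```shell
#!/bin/sh
# Group lsof output by the TYPE column (5th field: REG, IPv4, unix, FIFO, ...)
# and sort by count, so the dominant descriptor category shows up first.
summarize_fds() {
  awk 'NR > 1 { print $5 }' | sort | uniq -c | sort -rn
}

# Usage against a live process (pid lookup assumed for illustration):
#   lsof -p "$(pgrep -f HRegionServer | head -n 1)" | summarize_fds
```

If sockets dominate the summary, a follow-up such as `lsof -p "$pid" -i | grep CLOSE_WAIT` narrows things down to connections the application has not closed.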
Created 06-03-2019 12:27 AM
Thank you for the help on this.
I was able to identify some information that helped here, and I will come back if I need further help.
I will accept your reply as the solution. 🙂
Thanks
snm1523
