Introduction

The performance of a Hadoop cluster can be impacted by the OS partitioning. This document describes best practices for setting up the “/var” folder/partition with an optimum size.

Let's approach this problem by asking some important questions.

  • What is “/var” used for?
  • How can the “/var” folder run out of disk space?
  • What common issues should we expect on a Hadoop cluster if “/var” is out of disk space?
  • How is “/var” currently set up in my cluster?
Question 1 - What is /var used for?

From the OS perspective, “/var” is commonly used for constantly changing, i.e. variable, files, which is where the short name “var” comes from.

Examples of such files include log files, mail, transient files, printer spools, temporary files, cached data, etc.

For example, “/var/tmp” holds temporary files that are preserved between system reboots.

On any node (Hadoop or non-Hadoop), the /var directory holds content for a number of applications. It is also used to temporarily store downloaded update packages.

The PackageKit update software downloads updated packages to /var/cache/yum/ by default, so the /var/ partition should be large enough to hold these package updates.

An example of an application that uses /var is MySQL, which by default uses “/var/lib/mysql” as its data directory location.
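
Before resizing anything, it helps to see what is actually consuming “/var”. As a minimal sketch using standard GNU/Linux tools (the cut-off of 15 entries is only illustrative):

    # Show overall usage of the filesystem holding /var
    df -h /var

    # Break down usage by top-level directory, without crossing filesystems (-x)
    du -xh --max-depth=1 /var | sort -rh | head -15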

Question 2 - How can the /var folder run out of disk space?

Compared with other partitions, /var is much more susceptible to filling up, whether by accident or by attack.

Some of the directories that can be affected by this are “/var/log”, “/var/tmp”, “/var/crash”, etc.

If there is a serious OS issue, logging can increase tremendously. If the disk space allotted to /var is set too low, e.g. 10 GB, this excessive logging can fill up the partition.
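
A quick way to track down the offenders is to search “/var” for unusually large files; the 100 MB threshold below is just an illustrative value:

    # List files under /var larger than 100 MB, without crossing mount points
    find /var -xdev -type f -size +100M -exec ls -lh {} \;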

Question 3 - What common issues should we expect on a Hadoop cluster if “/var” is out of disk space?

/var can easily be filled by a (possibly misbehaving) application, and if it is not kept separate from /, filling /var fills / as well, which can even cause a kernel panic.

The “/var” folder contains some very important file and folder locations that are used by default by many kernel and OS applications.

For example –

  • “/var/run” is used by all running processes to keep their PID files and runtime system information. If “/var” is full due to a low disk space configuration, applications will fail to run.
  • “/var/lock” contains the lock files that running applications hold on the files/devices they have locked. If the disk space runs out, locks cannot be taken and existing/new applications will fail.
  • “/var/lib” holds the dynamic state data and files of applications. If there is no device space left, these applications will fail to work.

“/var” is very important from a Hadoop perspective for keeping all services running. Running out of disk space on “/var” can cause Hadoop and dependent services to fail on that node.
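
Because these failures show up as services refusing to start, it is worth checking both block and inode usage, and looking for log files that were deleted but are still held open by a process (these keep consuming space until the process is restarted). A small sketch:

    # Block and inode usage of the filesystem holding /var
    df -h /var
    df -i /var

    # Open files with link count < 1 (deleted but still held open)
    lsof +L1 | grep '/var'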

Question 4 - How is “/var” currently set up in my cluster?
  • Are the Hadoop logs separated from the “/var” folder location?
  • Are huge logs, or a huge number of OS logs, still located under “/var”? Examples: “/var/log/messages” and “/var/crash”.
  • If kdump is configured to capture crash dumps, the risk increases, since these dumps usually have huge file sizes, sometimes 100 GB or more.
  • The default kdump configuration uses the directory location “/var/crash” (see the kdump.conf excerpt after this list).
  • These days, the size of physical memory can easily be 500 GB or 1 TB, which would spill kdump dumps of a correspondingly huge size (note: kdump dumps can be compressed).
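
As a hedged illustration, on RHEL/CentOS systems the kdump settings live in /etc/kdump.conf; the lines below reflect common defaults, though the exact values vary by OS release:

    # /etc/kdump.conf (excerpt)
    path /var/crash
    # Compress the dump and filter out unneeded pages
    core_collector makedumpfile -l --message-level 1 -d 31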

The size of “/var” therefore plays an important role: if it is too small, /var/crash cannot hold the crash dump logs.

If there is an OS crash (kernel panic, etc.), the crash dump will never be captured completely when “/var” is too small, e.g. 10 GB or 50 GB. Without complete crash dump logs, a complete analysis of the cause of the kernel crash is impossible.

Answer - Recommendations for the optimum setup of “/var”.
  • Increase the size of “/var” to at least 50 GB on all nodes, and keep it uniform across the cluster.
  • Change the dump location of kdump. The existing location is “/var/crash”. Kdump can be configured to put the dumps on another local disk with a size of around 300-500 GB or, as a better measure, to dump over the network to a remote disk (see the sketch after this list).
  • /var should by default be separated from the root partition. Depending on requirements, “/var/log” and “/var/log/audit” can also be created as separate partitions.
  • /var should be mounted on an LVM volume so that its size can be increased with ease if required.
  • All Hadoop service logs should be separated from /var. The Hadoop logs should ideally be placed on a separate disk, used only for logs (from Hadoop and dependent applications like MySQL) and for nothing else. This log location should never be shared with the directory locations of core Hadoop services like HDFS, YARN, or ZooKeeper.
  • One way to achieve this is to create a symlink from “/var/<hadoop_logs>” to a separate LVM disk.
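
To make these recommendations concrete, here is a minimal sketch. The volume group (vg_sys), logical volume (lv_var), device (/dev/sdb1), and the /hadoop_logs and /var/log/hadoop paths are assumptions that must be adapted to the actual environment:

    # Grow an LVM-backed /var by 20 GB and resize the filesystem in one step
    lvextend -r -L +20G /dev/vg_sys/lv_var

    # In /etc/kdump.conf, point kdump at a dedicated local disk instead of /var/crash:
    #   ext4 /dev/sdb1
    #   path /crashdump
    # ...or dump over the network to a remote NFS host:
    #   nfs nfs-server.example.com:/export/crashdumps

    # Relocate Hadoop logs off /var via a symlink to a dedicated disk
    mkdir -p /hadoop_logs
    mv /var/log/hadoop /hadoop_logs/hadoop
    ln -s /hadoop_logs/hadoop /var/log/hadoop

After changing /etc/kdump.conf, restart the kdump service (for example, systemctl restart kdump) so the new target takes effect.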