Support Questions

I have 4 data nodes, and on one data node the YARN container logs volume is full. Can I delete these logs, which are older than 1 year?

Expert Contributor

I have the container logs configured as below:

yarn.nodemanager.log-dirs:

/data1/hadoop/yarn/log,/data2/hadoop/yarn/log,/data3/hadoop/yarn/log,/data4/hadoop/yarn/log,/data5/hadoop/yarn/log,/data6/hadoop/yarn/log,/data7/hadoop/yarn/log,/data8/hadoop/yarn/log,/data9/hadoop/yarn/log,/data10/hadoop/yarn/log,/data11/hadoop/yarn/log,/data12/hadoop/yarn/log

The /data9/hadoop/yarn/log file system on one of the data nodes is full, and all the logs are older than 1 year.

Can I delete these logs?

1 ACCEPTED SOLUTION

Super Collaborator

Hi @rama,

You can delete the old container logs. If yarn.nodemanager.log-dirs is full, no new containers will start on that node.
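For example, something along these lines would clear them (a sketch, assuming the per-application log directories sit directly under the full log dir and nothing else is stored there; preview first, then delete as the user that owns the logs, typically yarn):

# Preview the application log directories untouched for over a year
find /data9/hadoop/yarn/log -mindepth 1 -maxdepth 1 -type d -mtime +365

# Remove them once the preview looks right
find /data9/hadoop/yarn/log -mindepth 1 -maxdepth 1 -type d -mtime +365 -exec rm -rf {} +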

See also

yarn.nodemanager.disk-health-checker.min-healthy-disks

yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage

yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
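For illustration, these could be set in yarn-site.xml along these lines (the values below are examples, not recommendations for your cluster):

yarn.nodemanager.disk-health-checker.min-healthy-disks: 0.25
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage: 90.0
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb: 1000

With values like these, a log/local dir over 90% utilization (or under the minimum free space) is marked bad, and once fewer than 25% of the dirs remain healthy the NodeManager reports the node unhealthy and stops launching containers.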

/Best regards, Mats

Expert Contributor

Thanks @Mats Johansson

I have another basic question: what are these directories for? Each of my data nodes has 12 of them; can I increase that number? And how are the logs distributed across these 4 nodes / 12 directories?

Expert Contributor

@rama These directories are used by YARN for container (job) logs. There are similar directories used for localization, the yarn-local dirs. The logs are not distributed so much as written locally on whichever node the containers are allocated. They should get cleaned up when jobs complete, but orphaned files can be left behind after a ResourceManager or NodeManager restart. The directories are configured in YARN as a comma-separated list of locations, so you can add additional mounts/directories, but the same list applies to all NodeManagers managed by YARN. Hope this helps.
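For example, adding a hypothetical new mount, /data13, means appending it to the existing comma-separated value in yarn-site.xml (the NodeManagers then need a restart to pick up the change):

yarn.nodemanager.log-dirs:

/data1/hadoop/yarn/log,/data2/hadoop/yarn/log,/data3/hadoop/yarn/log,/data4/hadoop/yarn/log,/data5/hadoop/yarn/log,/data6/hadoop/yarn/log,/data7/hadoop/yarn/log,/data8/hadoop/yarn/log,/data9/hadoop/yarn/log,/data10/hadoop/yarn/log,/data11/hadoop/yarn/log,/data12/hadoop/yarn/log,/data13/hadoop/yarn/log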