Created 09-23-2016 08:07 AM
I have container logs configured as below:
yarn.nodemanager.log-dirs:
/data1/hadoop/yarn/log,/data2/hadoop/yarn/log,/data3/hadoop/yarn/log,/data4/hadoop/yarn/log,/data5/hadoop/yarn/log,/data6/hadoop/yarn/log,/data7/hadoop/yarn/log,/data8/hadoop/yarn/log,/data9/hadoop/yarn/log,/data10/hadoop/yarn/log,/data11/hadoop/yarn/log,/data12/hadoop/yarn/log
The /data9/hadoop/yarn/log file system on one of the data nodes is full; all of the logs there are older than a year.
Can I delete these logs?
Created 09-23-2016 08:51 AM
Hi @rama,
You can delete the old container logs (a hedged cleanup sketch follows the property list below). If a yarn.nodemanager.log-dirs directory is full, no new containers will start on that node.
See also
yarn.nodemanager.disk-health-checker.min-healthy-disks
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
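A minimal cleanup sketch, assuming the year-old logs live under /data9/hadoop/yarn/log and that no running applications still reference them (the path and the 365-day threshold are assumptions based on the question, not verified against your cluster):

  # List container log files older than roughly one year (dry run first)
  find /data9/hadoop/yarn/log -type f -mtime +365 -print

  # After reviewing the list, delete the files and remove emptied application directories
  find /data9/hadoop/yarn/log -type f -mtime +365 -delete
  find /data9/hadoop/yarn/log -mindepth 1 -type d -empty -delete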
/Best regards, Mats
Created 09-23-2016 09:20 AM
Thanks @Mats Johansson
I have another basic question: what are these directories for? Each of my nodes has 12 of them. Can I increase that number, and how are the logs distributed across these 4 nodes / 12 directories?
Created 09-23-2016 02:40 PM
@rama These directories are used by YARN for job (container) logs. There are similar directories used for localization, configured by yarn.nodemanager.local-dirs. The logs are not so much distributed as written locally on whichever node the containers are allocated to.

They should get cleaned up when jobs complete, but orphaned files can be left behind after a ResourceManager or NodeManager restart. The directories are configured via YARN as a comma-separated list of locations, so you can add additional mounts/directories, but the setting applies to all NodeManagers managed by YARN (a hedged configuration sketch follows below). Hope this helps.
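A minimal yarn-site.xml sketch of how an extra mount could be appended to the list, alongside the disk-health-checker properties mentioned earlier. The /data13 mount is hypothetical, and the threshold values shown are the usual defaults rather than values taken from this thread:

  <!-- Comma-separated list of container log directories.
       Entries /data3 through /data12 omitted for brevity; /data13 is a hypothetical new mount. -->
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data1/hadoop/yarn/log,/data2/hadoop/yarn/log,/data13/hadoop/yarn/log</value>
  </property>

  <!-- Fraction of log/local dirs that must stay healthy for the node to keep accepting containers (default 0.25) -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
    <value>0.25</value>
  </property>

  <!-- A disk is marked unhealthy once its utilization exceeds this percentage (default 90.0) -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>90.0</value>
  </property>

  <!-- Minimum free space per disk in MB before it is marked unhealthy (default 0; 1000 here is an assumption) -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
    <value>1000</value>
  </property>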