01-05-2017 09:12 AM
We have CDH 5.7.2 installed alongside Cloudera Manager 5.8.1 at our company. We have enabled YARN log aggregation and set the YARN log aggregation retain seconds to 1 day. For some reason, the YARN job logs in the default HDFS directory /tmp/logs/ are not being deleted. Can anyone explain why?
BTW, we have both Hive and Spark jobs running on our cluster.
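For context, the two Cloudera Manager settings described above map to standard Hadoop properties in yarn-site.xml. A small sketch of the expected values; the property names are stock Hadoop, and the 1-day figure is taken from the question:

```shell
# The two knobs in play, as they would appear in yarn-site.xml.
retain_seconds=$((24 * 60 * 60))   # 1 day expressed in seconds, the unit the property expects
echo "yarn.log-aggregation-enable=true"
echo "yarn.log-aggregation.retain-seconds=${retain_seconds}"
```

If the retain-seconds value on the ResourceManager host differs from 86400, the cluster is not actually running with the 1-day window the UI suggests.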
01-05-2017 05:22 PM
As we know, "Yarn Aggregate Log Retention" controls only YARN logs, but /tmp/logs is not limited to YARN.
Can you check the YARN log dates using the steps below?
CM -> Yarn -> Web UI -> ResourceManager Web UI (it will open the :8088 page) -> click the Finished link (left side) -> scroll down and click the 'Last' button -> check the log dates. You should see only one day of history data, since you configured the retention to 1 day.
Note: Make sure CM-> Yarn -> Configuration -> Enable Log Aggregation = Enabled
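One more thing worth checking (hedged, based on stock Hadoop behavior rather than anything in this thread): the service that enforces the retention, AggregatedLogDeletionService, runs inside the MapReduce JobHistory Server, so if that role is stopped the files linger regardless of the setting. A minimal manual-cleanup sketch, assuming the default /tmp/logs layout; the paths in the comments are placeholders, not values from this cluster:

```shell
# Compute a one-day cutoff; on the cluster you would then remove app dirs
# older than it, for example:
#   hdfs dfs -ls /tmp/logs/<user>/logs
#   hdfs dfs -rm -r -skipTrash /tmp/logs/<user>/logs/<old-application-dir>
cutoff=$(( $(date +%s) - 24 * 60 * 60 ))
echo "stale if modified before epoch ${cutoff}"
```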
01-05-2017 08:34 PM
To answer your questions:
The /tmp/logs directory and all of its subdirectories are mode 770 and group-owned by hdfs. Should the group be hadoop instead? I see that the yarn user is not part of the hdfs group but is in the hadoop group.
The logs date back to Dec 18 and grow by a little under 1 TB per day. We delete them manually to keep the directory from getting too big.
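Since the directories are 770, the deciding question is whether the yarn user (which the deletion service runs as) can traverse them via its group. A quick membership-check sketch; the group list below is an assumption for illustration, on a real node it would come from `id -Gn yarn`:

```shell
yarn_groups="yarn hadoop"   # assumed output of: id -Gn yarn
dir_group="hdfs"            # group owner of /tmp/logs, per the post
case " $yarn_groups " in
  *" $dir_group "*) in_group=yes ;;
  *)                in_group=no  ;;
esac
echo "yarn in ${dir_group} group: ${in_group}"
# 'no' means the cleaner cannot enter 770 dirs group-owned by hdfs;
# either chgrp the tree to a group yarn is in, or add yarn to the hdfs group.
```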
10-22-2018 01:06 AM
Hi, my cluster is CDH 5.7.2 with CM 5.7.0, and I am hitting the same trouble.
We set dfs.permissions.superusergroup=supergroup, and we run the MapReduce application as the 'hdfs' user. The HDFS directory looks like this:
drwxrwx--- - hdfs supergroup 0 2018-06-05 15:01 /tmp/logs/hdfs
and the Linux user-to-group mapping is:
What should I do to resolve this problem? Thank you very much.
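A sketch of one possible remedy, with names taken from the post above. These commands are an assumption, not a confirmed fix; verify group memberships on your own cluster before running anything. The idea is to give the aggregated-log tree a group the yarn user belongs to, or to add yarn to the superuser group, so the cleaner can recurse into the 770 directories:

```shell
# Option A: regroup the log tree (run on a cluster node):
#   sudo -u hdfs hdfs dfs -chgrp -R hadoop /tmp/logs
# Option B: add yarn to the superuser group on every host, then restart
# the JobHistory Server:
#   usermod -a -G supergroup yarn
target_group="hadoop"   # assumed: a group that already contains the yarn user
echo "proposed: hdfs dfs -chgrp -R ${target_group} /tmp/logs"
```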
10-22-2018 06:23 PM