We have CDH 5.7.2 installed alongside Cloudera Manager 5.8.1 at our company. We have enabled YARN log aggregation and set the log-aggregation retain seconds to 1 day. For some reason, the YARN job logs in the default HDFS directory /tmp/logs/ are not being deleted. Can anyone explain why?
BTW, we have both Hive and Spark jobs running on our cluster.
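To rule out a config-propagation problem, it can help to confirm what the clients actually see. A minimal sketch, assuming the default path where Cloudera Manager deploys the client configuration on gateway hosts (adjust if yours differs):

```shell
# Hedged sketch: inspect the deployed client config for the
# log-aggregation properties (path is the CM default; an assumption).
conf=/etc/hadoop/conf/yarn-site.xml
if [ -f "$conf" ]; then
  grep -A1 'yarn.log-aggregation' "$conf"
fi

# "1 day" expressed as yarn.log-aggregation.retain-seconds:
retain_days=1
retain_seconds=$((retain_days * 24 * 60 * 60))
echo "retain-seconds for ${retain_days} day(s): ${retain_seconds}"
```

If the deployed value differs from what you set in CM, redeploy the client configuration before digging further.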
To answer your questions:
The /tmp/logs directory and all of its subdirectories are mode 770, and the group is hdfs. Should the group be hadoop instead? I see that the yarn user is not part of the hdfs group but is in the hadoop group.
The logs date back to Dec 18 and grow by less than 1 TB per day. We delete them manually to keep them from getting too big.
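That permission question is worth checking concretely: with mode 770 and group hdfs, a deletion service running as a user outside the hdfs group has no access at all. A small sketch for checking this (the helper function is illustrative, not part of any Hadoop tool; the commented-out commands assume the standard HDFS CLI on a cluster host):

```shell
# Illustrative helper: does an ls-style mode string such as
# "drwxrwx---" grant rwx to the group (characters 5-7)?
group_has_rwx() {
  case "$1" in
    ????rwx*) return 0 ;;
    *)        return 1 ;;
  esac
}

# On the cluster you would check the real values, e.g.:
#   hdfs dfs -ls -d /tmp/logs   # note the mode string and the group
#   id yarn                     # is 'yarn' a member of that group?

group_has_rwx "drwxrwx---" && echo "group can read/write/traverse"
group_has_rwx "drwxr-x---" || echo "group cannot write"
```

If the group on /tmp/logs is one that the deleting user does not belong to, the retention service will fail silently to clean up.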
Hi, my cluster is CDH 5.7.2 with CM 5.7.0, and I have hit the same trouble.
We set dfs.permissions.superusergroup=supergroup, and we run the MapReduce application as the 'hdfs' user. The HDFS directory looks like this:
drwxrwx--- - hdfs supergroup 0 2018-06-05 15:01 /tmp/logs/hdfs
and the Linux mapping of user to group is:
What should I do to resolve this problem? Thank you very much.
Thank you so much!
I changed the group of '/tmp/logs' to hadoop and restarted the JobHistoryServer role, and now everything is OK.
So happy!
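For anyone landing here later, the fix described above can be sketched as follows. This is a hedged sketch, not an official procedure: it assumes a 'hadoop' group exists on every host and contains the 'yarn' and 'mapred' users (check with `id yarn`), and it must be run as the HDFS superuser:

```shell
# Change the group on the aggregation root so the YARN/MapReduce
# daemons (members of 'hadoop') can traverse and delete under it.
hdfs dfs -chgrp -R hadoop /tmp/logs

# Leave the existing 770 mode in place so access stays group-only.
# Then restart the JobHistoryServer role in Cloudera Manager so its
# aggregated-log deletion service re-reads the directory.
```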
Thanks for mentioning the information about the hadoop group and permissions. It would seem that after applying these settings, everything is working.
As we know, "Yarn Aggregate Log Retention" controls only YARN, but /tmp/logs is not limited to YARN.
So can you check the YARN log dates using the steps below?
1. CM -> Yarn -> Web UI -> ResourceManager Web UI (this opens the :8088 link).
2. Click the 'Finished' link on the left.
3. Scroll down and click the 'Last' button.
4. Check the log dates: you should see only one day of history, since you configured retention to 1 day.
Note: Make sure CM -> Yarn -> Configuration -> Enable Log Aggregation = Enabled
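The UI steps above can also be done from the command line. A hedged sketch using the standard Hadoop 2.x CLIs shipped with CDH 5 (run on a cluster host with a valid Kerberos ticket, if applicable):

```shell
# List finished applications known to the ResourceManager; compare
# their dates against the one-day retention window.
yarn application -list -appStates FINISHED | head -20

# List the per-user aggregated-log directories on HDFS, sorted by
# date, to see how old the oldest surviving logs are.
hdfs dfs -ls /tmp/logs | sort -k6 | head
```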
I did as you asked and see that the oldest finished application is from Dec 18, and I still see its logs in HDFS under /tmp/logs.
Log Aggregation is enabled.