
Cleaning up preview-record-buffer tmp files from /data/hadoop/yarn/local


Contributor

A disk is filling up on one of my DataNodes:

[ayguha@dh03 hadoop]$ sudo du -h --max-depth=1
674G    ./hdfs
243G    ./yarn
916G    .
[xx@dh03 local]$ sudo du -h --max-depth=1
1.4G    ./filecache
3.2G    ./usercache
68K     ./nmPrivate
242G    .

There are over 1,000 tmp files accumulating in /data/hadoop/yarn/local:

[ayguha@dh03 local]$ ls -l *.tmp | wc -l
1055
Sample file names:
./optimized-preview-record-buffer-2808068b-4d54-492e-a31a-385065d25a408826610818023522318.tmp
./preview-record-buffer-24a7477f-01f0-427e-a032-54866df48b197825057363055390034.tmp
./preview-record-buffer-b22020bb-6ec2-4f73-9d65-65dbba50136e527236496621902098.tmp
[ayguha@dh03 local]$ find ./*preview-record-buffer* -type f -mtime +90 | wc -l
973

Nearly 1,000 of these files are older than 3 months. Is it safe to delete them?

ENV: Hadoop 2.7.1.2.4.0.0-169 (HDP 2.4)

1 ACCEPTED SOLUTION


Re: Cleaning up preview-record-buffer tmp files from /data/hadoop/yarn/local

Hi @Suhel,

As mentioned in https://community.hortonworks.com/questions/35751/manage-yarn-local-log-dirs-space.html,

files in /yarn/local can be safely deleted as long as no applications are running on the node.
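In practice it is worth doing a dry run before deleting anything. A minimal sketch, assuming a GNU userland on the node — the helper name `list_stale_tmp` and the 90-day cutoff are illustrative, and /data/hadoop/yarn/local is the directory from the question (adjust to your `yarn.nodemanager.local-dirs`):

```shell
# Hypothetical helper: list *preview-record-buffer* tmp files older than
# 90 days directly under a given YARN local dir (dry run, nothing deleted).
list_stale_tmp() {
    find "$1" -maxdepth 1 -name '*preview-record-buffer*.tmp' -type f -mtime +90 -print
}

# Demo against a scratch directory (on a real node this would be
# /data/hadoop/yarn/local, and you would first confirm nothing is running
# with: yarn application -list -appStates RUNNING).
demo=$(mktemp -d)
touch -d '100 days ago' "$demo/preview-record-buffer-old.tmp"   # stale file
touch "$demo/preview-record-buffer-new.tmp"                     # recent file
list_stale_tmp "$demo"   # prints only the stale file

# Only after reviewing the dry-run output, the same find with -delete
# would remove the stale files:
#   find /data/hadoop/yarn/local -maxdepth 1 -name '*preview-record-buffer*.tmp' \
#       -type f -mtime +90 -delete
```

Keeping the `-delete` form commented out until the listed files have been reviewed avoids removing anything a still-running container might be using.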


Re: Cleaning up preview-record-buffer tmp files from /data/hadoop/yarn/local

Contributor

Hi @ssathish,

I did look at the link you posted and decided to delete the files.

CAUTION: For some reason, a few hours later there were inconsistencies in the cluster. The data node (D5) where the cleanup was done had corruption in the way containers were processed: some jobs whose containers were launched on D5 ran to completion successfully, while other jobs failed with a Vertex failed error. We could not find any errors in the ResourceManager, DataNode, or NodeManager logs.

We had to remove D5 from the cluster and reinstall the NodeManager to set things right.
