
HDFS storage capacity usage


After executing spark-submit several times, I started getting cluster capacity usage alerts. Please see the attached screenshot.

[Attachment: 72643-screen-shot-2018-05-06-at-221305.png]

I assume the logs are the cause. How can I clean up the logs and free the disk space?

[Attachment: 72644-screen-shot-2018-05-06-at-221745.png]

1 Reply

@Liana Napalkova You should open a shell console on one of the hosts, for example eureambarislave1, and check the disk usage by running:

# run the commands below as the root user or with sudo
df -h            # free space on every mounted filesystem
du -d 1 -h /     # space used by each top-level directory

This will show which mount point is running out of space. Depending on your disk partitions and mount points, the space issue could come from the HDFS data directories, /tmp, or the log folders, as you said.
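Once you know which mount point is full, you can drill down and reclaim space. As a rough sketch, the snippet below assumes the typical HDP default locations (/var/log for service logs and /tmp for scratch data); adjust the paths to your own layout. The hdfs commands check whether the growth is inside HDFS itself rather than on the local disks.

# find the largest directories under the usual local suspects
du -sh /var/log/* 2>/dev/null | sort -h | tail -n 10
du -sh /tmp/*     2>/dev/null | sort -h | tail -n 10

# check usage inside HDFS (run as the hdfs user)
sudo -u hdfs hdfs dfs -du -h /        # space used per top-level HDFS directory
sudo -u hdfs hdfs dfsadmin -report    # per-DataNode capacity and remaining space

If the growth turns out to be inside HDFS, old Spark event logs or aggregated YARN application logs are common culprits; their locations vary by configuration, so verify the paths in your cluster before deleting anything.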

Note: If you would like to comment on this post, make sure you tag my name so I receive an email notification. Also, if you found this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer.