After executing spark-submit several times, I started getting cluster capacity usage alerts. Please see the attached screenshot.
I assume the cause is log files. How can I clean up the logs and free disk space?
@Liana Napalkova You should open a shell session on one of the hosts (for example eureambarislave1) and check the disk usage by running:
# run commands as the root user, or prefix with sudo
df -h
du -d 1 -h /
This will show which mount point is running out of space. Depending on your disk partitions and mount points, the space may be consumed by the HDFS data directories, /tmp, or the logs folder, as you suspected.
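If the logs turn out to be the culprit, the cleanup steps could be sketched roughly as below. Note this is a hedged sketch: the default path /var/log and the "*.log.*" rotated-log naming pattern are assumptions, so adjust them to your cluster's actual service log directories (and always review the file list before deleting anything).

```shell
# Sketch: free space taken by rotated service logs.
# ASSUMPTIONS: logs live under $LOG_DIR (default /var/log) and rotated
# files match "*.log.*" -- verify both against your own layout first.
# Run as root, or prefix the commands with sudo.
LOG_DIR=${LOG_DIR:-/var/log}

# 1. Show the largest subdirectories under the log directory.
du -d 1 -h "$LOG_DIR" 2>/dev/null | sort -rh | head -10

# 2. List rotated/compressed logs older than 7 days (review before deleting).
find "$LOG_DIR" -name "*.log.*" -mtime +7 -print 2>/dev/null

# 3. Only after reviewing the list above, uncomment to delete them:
# find "$LOG_DIR" -name "*.log.*" -mtime +7 -delete
```

Keeping the delete step commented out makes the sketch safe to run as-is; it only reports candidates until you explicitly enable removal.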
Note: If you'd like to comment on this post, please tag my name so I receive an email notification. Also, if this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.