We have a CDH 5.15.2 cluster on AWS that was launched from a Cloudera Director template, with multiple EBS volumes attached to each worker node.
However, all of our data is persisted in S3; we do not store any data on the EBS volumes.
Since we do not use the EBS volumes, we want to detach a few of them (say, 3 of the 5) from each worker node to avoid unnecessary cost.
I could not find anything in the Cloudera documentation about how to safely detach EBS volumes.
I would appreciate your help in guiding me here.
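For context, here is the rough procedure I have in mind. The volume ID below is a placeholder, and I am assuming the mounts also need to be removed from the DataNode/YARN directory lists in Cloudera Manager first, but I have not been able to confirm any of this:

# On each worker node, for every mount being removed (e.g. /data3):
sudo umount /data3
# ...then delete the matching /etc/fstab entry so it does not remount on reboot

# From a host with the AWS CLI configured (placeholder volume ID):
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0

Is this safe on a running cluster, or are there Cloudera-specific steps I am missing?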
Additional details:
From the AWS console, the EBS volumes are shown as:
Root device: /dev/sda1
Block devices: /dev/sda1, /dev/sdaf, /dev/sdag, /dev/sdah, /dev/sdai
If I log in to an individual worker node and check the mounted volumes (df -hT), it shows:
Filesystem      Type      Size  Used  Avail  Use%  Mounted on
/dev/nvme0n1p2  xfs       512G   24G   489G    5%  /
devtmpfs        devtmpfs   31G     0    31G    0%  /dev
tmpfs           tmpfs      31G     0    31G    0%  /dev/shm
tmpfs           tmpfs      31G   17M    31G    1%  /run
tmpfs           tmpfs      31G     0    31G    0%  /sys/fs/cgroup
/dev/nvme4n1    ext4      504G  1.5G   503G    1%  /data3
/dev/nvme3n1    ext4      504G  1.4G   503G    1%  /data2
/dev/nvme2n1    ext4      504G  1.2G   503G    1%  /data1
/dev/nvme1n1    ext4      504G  1.3G   503G    1%  /data0
cm_processes    tmpfs      31G   24K    31G    1%  /run/cloudera-scm-agent/process
tmpfs           tmpfs     6.2G     0   6.2G    0%  /run/user/1000
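Before detaching anything, my plan is to confirm that no process still has files open on these mounts (passing a mount point makes lsof report on the entire filesystem):

sudo lsof /data0 /data1 /data2 /data3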
Some more details:
Equivalent device names on the CDH Linux host and in the AWS console:
On the CDH Linux host --- In the AWS console
/dev/nvme0n1p2        --- /dev/sda1
/dev/nvme4n1          --- /dev/sdai
/dev/nvme3n1          --- /dev/sdag
/dev/nvme2n1          --- /dev/sdah
/dev/nvme1n1          --- /dev/sdaf
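For reference, I derived this mapping on the node itself: on Nitro instance types, EBS volumes appear as NVMe devices, and the nvme-cli package can show which console device name each NVMe device corresponds to (the EBS volume ID is in the sn field, and the console name, e.g. /dev/sdaf, appears in the vendor-specific data at the end of the output):

sudo nvme id-ctrl -v /dev/nvme1n1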