06-19-2020 07:48 AM
Hi. This Hadoop cluster is about 1.4 PB in size, and on this node the mount points look like this:

[root@ithbda108 ~]# df -h
Filesystem                                  Size  Used  Avail Use% Mounted on
/dev/md2                                    459G   50G  385G  12%  /
tmpfs                                       126G   36K  126G   1%  /dev/shm
/dev/md0                                    453M   77M  349M  19%  /boot
/dev/sda4                                   6.6T  6.3T  323G  96%  /u01
/dev/sdb4                                   6.6T  6.3T  321G  96%  /u02
/dev/sdc1                                   7.1T  6.8T  314G  96%  /u03
/dev/sdd1                                   7.1T  6.8T  314G  96%  /u04
/dev/sde1                                   7.1T  6.8T  318G  96%  /u05
/dev/sdf1                                   7.1T  6.8T  323G  96%  /u06
/dev/sdg1                                   7.1T  6.8T  325G  96%  /u07
/dev/sdh1                                   7.1T  6.8T  323G  96%  /u08
/dev/sdi1                                   7.1T  6.8T  324G  96%  /u09
/dev/sdj1                                   7.1T  6.8T  324G  96%  /u10
/dev/sdk1                                   7.1T  6.8T  324G  96%  /u11
/dev/sdl1                                   7.1T  6.8T  322G  96%  /u12
cm_processes                                126G  200M  126G   1%  /var/run/cloudera-scm-agent/process
ithbda103.sopbda.telcel.com:/opt/exportdir  459G  338G   98G  78%  /opt/shareddir

I suspect this is a disk space problem: there is no space left on the device at the moment log4j tries to write its logs. Any idea what we can do to free up space on these mount points? Is there a Cloudera procedure to optimize this so the process can stay up?
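For context, here is a minimal sketch of the checks one could run to confirm where the space is going, assuming a standard CDH-style layout where the /u01-/u12 mounts hold the DataNode data directories and service logs live under /var/log (those paths are assumptions, not taken from the output above):

# Largest consumers on one of the nearly full data mounts (path is an assumption)
du -sh /u01/* 2>/dev/null | sort -rh | head -20

# HDFS-level view: configured capacity, DFS used and non-DFS used per DataNode
hdfs dfsadmin -report

# Which HDFS directories hold the most data (run as the hdfs user)
hdfs dfs -du -h / | sort -rh | head -20

# Reclaim space held by deleted files still sitting in the HDFS trash
hdfs dfs -expunge

# Once some space is freed, spread blocks more evenly across DataNodes
hdfs balancer -threshold 10

# Old rolled log files are a common culprit for "No space left on device" from log4j
find /var/log -name "*.log.*" -mtime +15 -ls

Note that the balancer only helps if other DataNodes have room; if every node in the cluster is around 96% used, the longer-term fix is deleting or archiving HDFS data, or adding capacity.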