We are running a Hadoop HA cluster on AWS EC2 instances with 17 DataNodes (all instances are m4.4xlarge, including the NameNodes). All the DataNodes are configured with 16 TB EBS (st1) volumes for HDFS.
Now we are running out of HDFS storage and looking to extend it. Since 16 TB is the maximum size for an st1 EBS volume, we cannot grow the existing volumes.
We are planning to attach an additional 16 TB volume to a few of the DataNodes and add the new mount path to "DataNode directories" (dfs.datanode.data.dir) in Ambari.
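For reference, this is roughly the change we intend to make. The mount point `/grid/1` is hypothetical; the existing data directory path will differ in our cluster. We are also considering setting the volume-choosing policy to AvailableSpaceVolumeChoosingPolicy, since the default round-robin placement does not account for the uneven free space between the old (nearly full) and new (empty) volumes:

```xml
<!-- hdfs-site.xml (managed via Ambari "DataNode directories") -->
<!-- Existing data dir plus the new 16 TB volume, comma-separated -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data</value>
</property>

<!-- Prefer volumes with more free space instead of round-robin,
     so new blocks land mostly on the empty volume -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```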
Will this approach cause any performance issues in the cluster? Is there anything else we should consider with this approach?