HDFS space was full, so I wanted to add capacity to it.
Since the cluster runs on AWS, I added a 500GB EBS volume per instance and mounted it properly.
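For reference, preparing a new volume on each host looks roughly like this. The device name (/dev/xvdf) and mount point (/hadoopfs/fs2) are assumptions - check your own instances before running anything:

```
# Hypothetical device name and mount point - adjust for your instances.
# Format the new, empty EBS volume (destroys any existing data on it!)
sudo mkfs -t ext4 /dev/xvdf

# Create the mount point and mount the volume
sudo mkdir -p /hadoopfs/fs2
sudo mount /dev/xvdf /hadoopfs/fs2

# Persist the mount across reboots
echo '/dev/xvdf /hadoopfs/fs2 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```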
Now, my mistake was to just change the HDFS configuration and add the new path to 'dfs.datanode.data.dir', forgetting that the Ambari components actually run inside Docker containers - so the data filled up the container's filesystem instead of the HDFS mounted volume.
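For anyone following along, the change in question is to the comma-separated 'dfs.datanode.data.dir' property (via Ambari under HDFS > Configs, which writes hdfs-site.xml). The paths below are illustrative, not from my actual cluster:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- comma-separated list of DataNode storage directories;
       the new directory must exist and be writable inside the
       container the DataNode actually runs in -->
  <value>/hadoopfs/fs1/hdfs/datanode,/hadoopfs/fs2/hdfs/datanode</value>
</property>
```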
Found the solution myself - while the host sees the new device, the Docker container will only see it after a restart.
I restarted the containers one by one, and then mounted the device from within Docker.
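The per-instance procedure I used boils down to something like the following. The container name and paths are hypothetical - find the real name with 'docker ps':

```
# Hypothetical container name - list yours with: docker ps
docker restart ambari-agent

# After the restart the container can see the host's new device;
# mount it inside the container as well
docker exec ambari-agent mount /dev/xvdf /hadoopfs/fs2
```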
Is there a better way to do this?
@Shushu Inbar As you've probably found while searching, there are a number of ways to do this, and their likelihood of working varies depending on the exact configuration. Without knowing a lot more, I'd say restarting the container is the safest and most likely method to work!
My setup is very straightforward - Cloudbreak as-is, with the given blueprint, on AWS.
Now that I know how to do it, I will continue doing it the same way if needed, but if there is a simpler way via Cloudbreak/Ambari, that would be great. In any case, I think better Ambari documentation is needed to clarify the process.
@Shushu Inbar OK, so essentially any mount point under /hadoopfs/ will be auto-mounted into the instances. The easiest way to do this currently is to mount the new drives on the hosts and then either restart the cluster via Cloudbreak or, as you have done, restart the instances one by one. Then add the new directories via Ambari.
You will see some improvements in this area with the next release of Cloudbreak, so stay tuned for that!
Hope that helps.