Each block of data has 3 replicas spread across the nodes by default (the exact number depends on your configuration). In your particular case, when you brought the cluster back up, the NameNode would be expecting a certain number of blocks to be on the node that was shut down. Regardless of whether the ambari-agent was running or not, when the DataNode started up it sent a block report to the NameNode. If the NameNode "sees" in that block report that blocks which existed before you shut down the cluster are missing, it simply re-replicates those blocks from a healthy DataNode to other nodes. So in this example every block needs to have 3 replicas across the cluster; once the NameNode has received the block reports from all DataNodes and sees that certain blocks do not comply with that rule, it replicates those blocks to other healthy nodes automatically.
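If you want to verify this yourself after the restart, a quick way is to look at the fsck summary from the NameNode. Below is a minimal sketch in Python; it assumes the `hdfs` CLI is on the PATH of the machine you run it on and that the user has permission to run fsck (typically the hdfs user):

```python
#!/usr/bin/env python3
"""Quick check of HDFS block health after restarting DataNodes.

Sketch only: assumes the `hdfs` CLI is available and the caller may run fsck.
"""
import subprocess

# Ask the NameNode for a filesystem check; the summary at the end reports
# under-replicated, corrupt and missing block/replica counts.
fsck = subprocess.run(["hdfs", "fsck", "/"], capture_output=True, text=True)

for line in fsck.stdout.splitlines():
    # Keep only the summary lines relevant to re-replication.
    if any(key in line for key in ("Under-replicated", "Missing", "Corrupt",
                                   "Default replication factor")):
        print(line.strip())
```

Right after the node comes back you may briefly see under-replicated blocks reported; they should drop back to zero once the NameNode finishes re-replicating.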
As for starting the nodes back up, you are safe to do it, as HDFS is designed to "deal" with this kind of situation.
One thing to check is whether any of the services that were running on the node had local directories set up on the failed mountpoint. Make sure that is not the case; once you have confirmed it, you are OK to start the services and the ambari-agent again. A small check for the DataNode directories is sketched below.
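As a rough sketch of that check for the DataNode specifically (assuming the `hdfs` CLI is on PATH and the standard dfs.datanode.data.dir property is used; adapt the same idea for other services' local dirs):

```python
#!/usr/bin/env python3
"""Verify that every configured DataNode data directory exists before
restarting services. Sketch only: assumes the `hdfs` CLI and the standard
dfs.datanode.data.dir property."""
import os
import subprocess

# Read the configured data directories from the local client configuration.
out = subprocess.run(
    ["hdfs", "getconf", "-confKey", "dfs.datanode.data.dir"],
    capture_output=True, text=True,
).stdout.strip()

for entry in out.split(","):
    # Strip optional [DISK]/[SSD] storage-type prefixes and a file:// scheme.
    path = entry.split("]")[-1].replace("file://", "").strip()
    status = "OK" if os.path.isdir(path) else "MISSING (failed mount?)"
    print("{0}: {1}".format(path, status))
```

If one of the directories points at the failed mountpoint, fix or remove that location first; otherwise the DataNode may write onto the root filesystem or fail to start.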