There may be some marginal gain in network backplane throughput, but it's rarely necessary once balanced against cost, availability, and flexibility. The A8–A11 instances are intended for traditional HPC workloads that require non-commodity (RDMA/InfiniBand) networking. They are relatively rare compared to the commodity-backed instances in Azure, so they can be hard to provision in large volume in some regions. The other key consideration is that they are not portable to other instance classes, so some of the elasticity benefits are lost.
In short, you could in theory benefit from the RDMA networking for very shuffle-heavy ML (perhaps deep learning or some of the newer neural-network and graph algorithms in Spark), but the cost rarely justifies it, and you're usually better off with D-series instances for YARN and HDFS.
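If shuffle pressure is the concern, it's usually cheaper to tune Spark's shuffle behaviour on commodity networking than to pay for RDMA-capable instances. A minimal sketch of the relevant `spark-defaults.conf` settings (the values shown are illustrative starting points, not recommendations for any specific cluster):

```properties
# Compress shuffle output to reduce bytes on the wire (default: true)
spark.shuffle.compress          true
# Larger in-flight fetch buffer per reduce task (default: 48m)
spark.reducer.maxSizeInFlight   96m
# Bigger map-side shuffle file buffer cuts disk I/O (default: 32k)
spark.shuffle.file.buffer       1m
# More retries and longer backoff tolerate transient network
# hiccups on commodity NICs (defaults: 3 retries, 5s wait)
spark.shuffle.io.maxRetries     6
spark.shuffle.io.retryWait      10s
```

The same properties can be passed per job with `--conf` on `spark-submit` if you'd rather not change cluster-wide defaults.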