There are a couple of things to consider in this scenario.
1. Time to recover when a machine fails: with these new disks you have 78 TB of data on each datanode. Depending on the network speed and how many datanodes you have in the cluster, re-replicating that much data can take a long time. There is a rough estimate below.
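For a back-of-envelope sense of the recovery window, something like this; the node count and per-node bandwidth are made-up numbers, plug in your own:

```sh
# Rough re-replication estimate; all numbers here are hypothetical.
LOST_TB=78          # data that was on the failed node
NODES=50            # remaining datanodes sharing the re-replication work
MBPS_PER_NODE=100   # usable re-replication bandwidth per node, in MB/s
echo "$(( LOST_TB * 1024 * 1024 / (NODES * MBPS_PER_NODE) / 3600 )) hours"
```

With those assumed numbers it comes out to roughly 4-5 hours; fewer nodes or a slower network stretches it quickly.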
2. Booting up the cluster: if all your datanodes have such huge capacity, the block reports get quite large. That slows down startup, especially the very first bootup of a datanode, when every disk has to be scanned to build the report. Disk scans can be expensive -- but I am hoping that these 16 TB disks are all Samsung SSDs and quite fast. There is some quick math below on why the reports grow.
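To see why the reports grow, a quick estimate of block replicas per node at full capacity, assuming the default 128 MB block size (adjust to your dfs.blocksize):

```sh
# 78 TB per node / 128 MB per block; block size is an assumption here.
echo "$(( 78 * 1024 * 1024 / 128 )) block replicas per node"
```

That is on the order of 640,000 entries in every full block report the datanode sends to the namenode.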
3. Data imbalance: if you are adding these disks because you are running out of space, then you have the issue that the older disks have far less free space than the new ones. If you are running a round-robin volume-choosing policy (which you most probably are, since it is the default), the datanode will keep trying to write to those older, fuller disks and can hit out-of-space errors. The Balancer may not solve this issue, since it aims for good data distribution across the cluster, not between the disks on a node. If you run into this problem, there are 2 ways to fix it:
1. Decommission the node and let it rejoin. Rebuilding the datanode that way creates an even distribution of data across all its disks (commands sketched after this list).
2. Run the disk balancer, a tool that is still a work in progress, tracked in HDFS-1312 (intended workflow sketched below).
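For option 1, the usual decommission/recommission dance looks like this; the exclude-file path and hostname are hypothetical, use whatever your dfs.hosts.exclude property in hdfs-site.xml points to:

```sh
# Add the node to the excludes file, then tell the namenode to re-read it:
echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
# Wait until the node shows as Decommissioned:
hdfs dfsadmin -report
# Then remove it from the excludes file and refresh again so it rejoins;
# blocks re-replicated onto it will land evenly across all its disks.
hdfs dfsadmin -refreshNodes
```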
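For option 2, once HDFS-1312 lands in a build you run, the workflow is roughly the following; it has to be switched on via dfs.disk.balancer.enabled=true in hdfs-site.xml, and the hostname is again hypothetical:

```sh
hdfs diskbalancer -plan datanode1.example.com   # prints the path of the generated plan file
hdfs diskbalancer -execute <path-to-plan.json>  # apply the plan on that datanode
hdfs diskbalancer -query datanode1.example.com  # check move progress
```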
So generally what @Sunile Manjee said is correct: while this should not cause any performance issues, it does have the potential to cause operational issues.