Hi @Ali, there's really no single right way to use HDFS tiered storage. It's a flexible framework that can be implemented to suit your particular usage requirements. There was a recent graphic added to some of the Data Lake 3.0 discussions that is quite relevant here and at least gives some view into one set of potential options.

The replication factor should stay at 3 for production HDFS; tiered storage changes where those replicas are placed, not how many there are.
I personally prefer and recommend tying tiers to specific storage node types, e.g. a Hot node with 12 x SSDs and a Warm node with 12 x 2TB HDDs. That way, if I need to add more capacity to a tier I just add a node of that type, and if a node goes down I know exactly what type I need to replace. It keeps things simpler.
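If it helps to see what that looks like in practice, here is a rough sketch of how the storage types get declared per DataNode in hdfs-site.xml (in Ambari you'd typically do this with config groups per node type). The mount points below are made up for illustration; the idea is that a Hot node only advertises SSD directories and a Warm node only advertises DISK directories.

```xml
<!-- Hot node: every data dir tagged as SSD (paths are illustrative) -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[SSD]/grid/0/dn,[SSD]/grid/1/dn,[SSD]/grid/2/dn</value>
</property>

<!-- Warm node: every data dir tagged as DISK, i.e. the 2TB spinning drives -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/grid/0/dn,[DISK]/grid/1/dn,[DISK]/grid/2/dn</value>
</property>
```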
It's all managed via Ambari, and with Erasure Coding coming in 3.x we will see yet another potential layer appear in this design, as shown in the graphic above.
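For reference, once you're on 3.x that erasure-coded layer is driven by per-directory EC policies rather than replication. Roughly along these lines (the path and the choice of RS-6-3-1024k are just examples):

```sh
# See which EC policies the cluster knows about
hdfs ec -listPolicies

# Enable and apply a policy on a cold/archive directory (illustrative path)
hdfs ec -enablePolicy -policy RS-6-3-1024k
hdfs ec -setPolicy -path /data/archive -policy RS-6-3-1024k

# Check what a directory is using
hdfs ec -getPolicy -path /data/archive
```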
Whenever possible I prefer to keep things simple. You can of course mix placements, e.g. one replica on Hot and two on Warm, but in my experience that gets overcomplicated very quickly.
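If you do want to go down that mixed-placement road, it's handled by the built-in storage policies (HOT, WARM, COLD, ONE_SSD, ALL_SSD, and so on). A rough sketch of the commands, with a made-up path:

```sh
# List the storage policies the cluster supports
hdfs storagepolicies -listPolicies

# ONE_SSD keeps one replica on SSD and the rest on DISK (illustrative path)
hdfs storagepolicies -setStoragePolicy -path /data/landing -policy ONE_SSD

# Verify, then run the mover so existing blocks migrate to match the policy
hdfs storagepolicies -getStoragePolicy -path /data/landing
hdfs mover -p /data/landing
```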
I hope that helps.