No, you don't need the same number of hosts as in the original cluster. However, you do need to map the hosts and/or their host groups to the relevant configurations. Say, for instance, you have ZooKeeper on 1 node in your dev setup, but in your next environment you have a 3-node ZooKeeper ensemble; the quorum property will then change for anything that uses ZooKeeper (e.g. HBase, Hive, etc.). The specific value needs to point to the specific host_group name, and the blueprint processor will take care of the rest. I have found this to be limiting, although it may work in more general scenarios where you simply scale out, i.e. have the same profile of machines in every environment. There were also a few bugs, e.g. the Hive JDBC URL is not processed by the blueprint processor.
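As a rough sketch of what I mean, the blueprint processor substitutes `%HOSTGROUP::<name>%` tokens with the hosts mapped to that group, so a quorum-style property can be written against the group rather than concrete hostnames (the group name `zk_group` and blueprint name below are just illustrative):

```json
{
  "configurations": [
    {
      "hbase-site": {
        "properties": {
          "hbase.zookeeper.quorum": "%HOSTGROUP::zk_group%"
        }
      }
    }
  ],
  "host_groups": [
    {
      "name": "zk_group",
      "components": [
        { "name": "ZOOKEEPER_SERVER" },
        { "name": "HBASE_MASTER" }
      ],
      "cardinality": "3"
    }
  ],
  "Blueprints": {
    "blueprint_name": "multi-node",
    "stack_name": "HDP",
    "stack_version": "2.6"
  }
}
```

Whether you map 1 host or 3 hosts to `zk_group` in the cluster creation template, the token expands to the right host list. Properties the processor doesn't handle (like the Hive JDBC URL I mentioned) still have to be set explicitly per environment.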
Indeed, the approach I took to Ansible-ize the installation (and it should be similar with any other automation tool) is to supply the values into the configuration from, say, your Ansible inventory, i.e. automate the values that need to be set for the hosts, then define the host_groups using those hosts and assign the components to those host_groups. This maintains flexibility, and in an environment that mirrors your production environment you can then just scale out.
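For example (a sketch only; the inventory group `zookeeper` and host group `zk_group` are names of my own), the cluster creation template can be rendered from the inventory with a Jinja2 template, so the host list grows with each environment without touching the blueprint:

```jinja
{# templates/cluster_template.json.j2 — hosts come from the Ansible inventory group #}
{
  "blueprint": "multi-node",
  "host_groups": [
    {
      "name": "zk_group",
      "hosts": [
        {% for host in groups['zookeeper'] %}
        { "fqdn": "{{ host }}" }{% if not loop.last %},{% endif %}
        {% endfor %}
      ]
    }
  ]
}
```

With 1 host in the `zookeeper` inventory group in dev and 3 in production, the same template produces the right mapping in both environments.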
Note that from Ambari 2.6 onwards you need to register the stack version using a VDF (version definition file) rather than specifying the stack version in the blueprint.
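If I remember correctly, the registration is a POST against the Ambari REST API before you submit the blueprint; something along these lines (host, credentials, and the VDF URL are placeholders):

```shell
# Register a version definition file (VDF) with the Ambari server.
# All hostnames, credentials, and the repo URL below are placeholders.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X POST http://ambari-host:8080/api/v1/version_definitions \
  -d '{"VersionDefinition": {"version_url": "http://repo-host/HDP/centos7/2.6.5.0/HDP-2.6.5.0-292.xml"}}'
```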