At present you can consume messages from a Kafka cluster that were encoded with the Confluent Schema Registry serializers, but we would read them in raw byte form, and the data would not be very useful unless you are able to handle that encoding elsewhere in the flow. With the HDF 3.0 release we've now provided support for schema registries in general, including a simple built-in schema registry in Apache NiFi and the ability to leverage the Hortonworks Schema Registry. Rather than pushing such logic into Kafka-specific serializers and deserializers, we have a more powerful and broadly applicable reader and writer mechanism that is both format and schema aware. Processors only have to worry about Record objects, which are what the readers and writers deserialize and serialize.
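To see why the raw bytes aren't useful on their own, consider what the Confluent serializers actually produce: each message is prefixed with a magic byte (0) and a 4-byte big-endian schema ID, followed by the Avro-encoded payload. Without resolving that ID against the registry, a consumer can't interpret the payload. Here's a minimal sketch in Java of pulling apart that wire format; the class and method names are just illustrative, not part of any existing library:

    import java.nio.ByteBuffer;

    /**
     * Illustrative sketch of the Confluent wire format: a magic byte (0),
     * a 4-byte big-endian schema ID, then the Avro-encoded payload.
     */
    public class ConfluentWireFormat {

        private static final byte MAGIC_BYTE = 0x0;
        private static final int HEADER_LENGTH = 5; // magic byte + schema ID

        /** Returns the schema registry ID embedded in the message header. */
        public static int readSchemaId(byte[] message) {
            ByteBuffer buffer = ByteBuffer.wrap(message); // big-endian by default
            if (buffer.get() != MAGIC_BYTE) {
                throw new IllegalArgumentException("Not Confluent wire format");
            }
            return buffer.getInt(); // would be looked up in the schema registry
        }

        /** Returns the remaining bytes, i.e. the Avro-encoded record. */
        public static byte[] readPayload(byte[] message) {
            byte[] payload = new byte[message.length - HEADER_LENGTH];
            System.arraycopy(message, HEADER_LENGTH, payload, 0, payload.length);
            return payload;
        }
    }

This is exactly the kind of format-specific knowledge the record reader and writer mechanism is meant to encapsulate, so that individual processors never have to deal with headers and byte offsets themselves.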
So, I said all that to say that a good logical next step for us is to consider adding support for the Confluent Schema Registry. There is a ticket for this work in the community, and hopefully it will be progressed soon.