We are using Python 3.6 to write some code around PyAudio, TensorFlow, and Mozilla DeepSpeech to capture audio, store it in a WAV file, and then process it with DeepSpeech to extract text. This example runs on macOS without a GPU, on TensorFlow v1.11.
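Here is a minimal sketch of the capture step, assuming the standard PyAudio API; the output file name, recording length, and chunk size are illustrative, and the English pre-trained model expects 16 kHz, 16-bit, mono audio.

import pyaudio
import wave

RATE = 16000            # DeepSpeech's English model expects 16 kHz audio
CHUNK = 1024            # frames per buffer (illustrative)
SECONDS = 5             # recording length (illustrative)
WAV_PATH = "voice.wav"  # hypothetical output file name

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)

# Read raw 16-bit PCM frames from the default input device
frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * SECONDS))]

stream.stop_stream()
stream.close()
sample_width = audio.get_sample_size(pyaudio.paInt16)
audio.terminate()

# Store the captured audio in a WAV file for DeepSpeech to process
wf = wave.open(WAV_PATH, "wb")
wf.setnchannels(1)
wf.setsampwidth(sample_width)
wf.setframerate(RATE)
wf.writeframes(b"".join(frames))
wf.close()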
The Mozilla GitHub repo for their DeepSpeech implementation has good getting-started documentation, which I used to integrate the flow with Apache NiFi.
This pre-trained model is available for English. For other languages, you will need to train your own model; a beefy HDP 3.1 cluster works well for that. Note: THIS IS A 1.8 GB DOWNLOAD. That may be an issue for laptops, small devices, or environments with limited storage and bandwidth.
Apache NiFi Flow
The flow is simple: we call a shell script that runs a Python program to record audio and send it to DeepSpeech for processing.
We get back a voice_string in JSON that we turn into a record for querying and filtering in Apache NiFi.
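Below is a minimal sketch of the transcription side of that Python program, assuming the WAV file written above and the DeepSpeech 0.x Python API; the model paths, beam width, and feature/context values follow Mozilla's getting-started example and may differ in other releases. It prints the same JSON shape shown in the sample output further down.

import json
import time
import wave
import numpy as np
from deepspeech import Model

# Paths and decoder settings are illustrative; the Model constructor and stt()
# arguments below follow the DeepSpeech 0.x client example and changed in later releases.
MODEL_PATH = "models/output_graph.pbmm"
ALPHABET_PATH = "models/alphabet.txt"
BEAM_WIDTH = 500
N_FEATURES = 26
N_CONTEXT = 9

ds = Model(MODEL_PATH, N_FEATURES, N_CONTEXT, ALPHABET_PATH, BEAM_WIDTH)

# Load the 16-bit mono WAV captured earlier
wf = wave.open("voice.wav", "rb")
rate = wf.getframerate()
pcm = np.frombuffer(wf.readframes(wf.getnframes()), np.int16)
wf.close()

# Emit the JSON record that the NiFi flow picks up from stdout
record = {
    "systemtime": time.strftime("%m/%d/%Y %H:%M:%S"),
    "voice_string": ds.stt(pcm, rate)
}
print(json.dumps(record))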
I am handling a few voice commands: "Save", "Load", and "Move". As you can imagine, you can handle pretty much anything you want. It's a simple way to use voice to control streaming data flows, or just to ingest large streams of text. Even with advanced deep learning, speech-to-text recognition is still not perfect; note the "or" where "four" was spoken in the sample output below.
If you are going to load-balance connections between NiFi nodes, you have options for compression and load-balancing strategy on each connection. This comes in handy when you have a lot of servers.
HW13125:DeepSpeech tspann$ ./runnifi.sh
TensorFlow: v1.11.0-9-g97d851f04e
DeepSpeech: unknown
2018-12-10 14:36:43.714433: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
{"systemtime": "12/10/2018 14:36:43", "voice_string": "one two three or five six seven eight nine"}
We can run this on top of YARN 3.1 as a dockerized or non-dockerized workload.
Setting up nodes to run HDF 3.3 (Apache NiFi and friends) is easy in the cloud or on-premises on OpenStack with good DevOps tooling.
When running Apache NiFi, it is easy to monitor in Ambari: