Created on 09-25-2016 02:10 PM - edited 08-17-2019 09:46 AM
As part of a live drone ingest, I want to identify what is in each image. The metadata provides a ton of information on GPS, altitude, and image characteristics, but not what's in the image.
IBM, Microsoft, and Google all have APIs that do a good job of this, and for the most part they offer "free" tiers. I wanted to run something locally using libraries installed on my cluster. For my first option, I used TensorFlow Inception-v3 image recognition. In future articles I will cover PaddlePaddle, OpenCV, and some other deep learning and non-deep-learning options for image recognition. I will also show the entire drone-to-front-end flow, including Phoenix, Spring Boot, Zeppelin, LeafletJS, and more. This will be done as part of a meetup presentation with a certified drone pilot.
To run my TensorFlow binary from HDF 2.0, I use the ExecuteStreamCommand processor to call a shell script containing the information below:
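A sketch of what such a script might look like. The HDFS landing directory, local scratch directory, and classifier binary path below are assumptions for illustration, not the author's actual values:

```shell
#!/bin/bash
# Hypothetical ExecuteStreamCommand target script; all paths are assumptions.
set -eu

HDFS_DIR="/drone/images"               # where the HDF 2.0 flow lands drone images (assumed)
WORK_DIR="/tmp/tf_images"              # local scratch space for classification (assumed)
TF_BIN="/opt/tensorflow/label_image"   # compiled Inception-v3 binary (assumed)

# Compute the local destination path for an incoming image name
local_path() {
  echo "${WORK_DIR}/$(basename "$1")"
}

classify() {
  local image="$1"
  mkdir -p "$WORK_DIR"
  # Copy the file out of HDFS (it was loaded there by the HDF 2.0 flow)
  hdfs dfs -get "${HDFS_DIR}/${image}" "$(local_path "$image")"
  # Run the compiled TensorFlow classifier on the local copy
  "$TF_BIN" --image="$(local_path "$image")"
}

# ExecuteStreamCommand passes the image filename as the first argument
if [ "$#" -ge 1 ]; then
  classify "$1"
fi
```

ExecuteStreamCommand would point at this script with the flowfile's filename as a command argument, and the classifier's stdout comes back into the flow as content.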
In my script I pull the file out of HDFS (where it was loaded by HDF 2.0) and then run the binary version of TensorFlow that I compiled for CentOS 7. If you can't or don't want to install Bazel and build that, then you can run the Python script instead; it's a little bit slower and has slightly different output.
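Either way, the classifier prints its top labels with confidence scores as plain text, e.g. `military uniform (score = 0.83430)`. To use those results downstream in the flow, you may want to parse them into structured fields. A minimal sketch, assuming the `label (score = 0.12345)` line format produced by the TensorFlow Inception tutorial scripts:

```python
import re

# The Inception tutorial scripts emit lines like:
#   giant panda, panda, panda bear (score = 0.89107)
LINE_RE = re.compile(r"^(?P<label>.+?)\s+\(score = (?P<score>[0-9.]+)\)$")

def parse_labels(output):
    """Parse classifier stdout into (label, score) pairs, best first."""
    results = []
    for line in output.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            results.append((match.group("label"), float(match.group("score"))))
    return results

if __name__ == "__main__":
    sample = "military uniform (score = 0.83430)\nsuit, suit of clothes (score = 0.02117)"
    print(parse_labels(sample))
```

From there the label/score pairs can be attached as flowfile attributes or serialized to JSON for the rest of the pipeline.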