This article is for people preparing to attend my talk on Deep Learning at DataWorks Summit Berlin 2018, on Thursday, April 19, 2018 at 11:50 AM Berlin time.

This is for running Apache MXNet on a Raspberry Pi.

Let's get this installed!


The installation instructions on Apache MXNet's website are excellent. Pick your platform and your style; I am taking the simplest path, building on Linux.


This builds on my previous articles, so see those for setup details. We installed the drivers for the Sense HAT, Intel Movidius, and the USB web cam previously. Please note that versions of Raspberry Pi software, Apache MXNet, Python, and the various drivers are updated every few months, so if you are reading this post after DWS 2018, you should check the relevant libraries and update to the latest versions.

You need Python, the Python development headers, and pip installed, and you may need to run as root. You will also need OpenCV installed, as mentioned in the previous article.

In this combined Python script we grab Sense HAT sensor readings for temperature, humidity, and more. We also run Movidius image analysis and Apache MXNet Inception on the image that we capture with our web cam. Apache MXNet is now at version 1.1, so you may want to upgrade.
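A minimal sketch of how such a script might assemble its combined output record. The sensor and model calls are shown only as comments, and the helper name `build_payload` and its field names are my own illustration, not taken from the original source:

```python
import json
import time


def build_payload(temperature, humidity, pressure, label, confidence):
    """Combine Sense HAT readings and a classification result into one JSON record."""
    record = {
        "ts": time.strftime("%Y-%m-%d %H:%M:%S"),
        "temperature": round(temperature, 2),
        "humidity": round(humidity, 2),
        "pressure": round(pressure, 2),
        "top1": label,
        "top1pct": round(confidence * 100, 2),
    }
    return json.dumps(record)


# On the Pi, the inputs would come from the Sense HAT and the MXNet model, e.g.:
#   sense = SenseHat()
#   temperature = sense.get_temperature()
#   humidity = sense.get_humidity()
#   pressure = sense.get_pressure()
#   label, confidence = classify(captured_image)  # MXNet Inception inference
print(build_payload(22.5, 48.1, 1012.3, "tabby cat", 0.87))
```

MiniFi then picks up this JSON (and the captured image file) and ships both to the NiFi server shown below.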

pip install --upgrade pip
pip install scikit-image


sudo apt-get update -y
sudo apt-get install python-pip python-opencv python-scipy python-picamera -y
sudo apt-get -y install git cmake build-essential gcc-4.8 g++-4.8 liblapack* libblas* libopencv*
git clone --recursive --branch 1.0.0 https://github.com/apache/incubator-mxnet.git
cd incubator-mxnet
export USE_OPENCV=0
cd python
pip install --upgrade pip
pip install -e .
pip install mxnet==1.0.0

MiniFi Flow to Run Python Script and Send Over Images (Running on Raspberry Pi)


Routing on Server to Process Either an Image or a JSON


Our Apache NiFi Server Receiving Input from Raspberry Pi


Apache NiFi Server Processing The Input


We route to two different processing flows: one saves the images, while the other adds a schema and converts the JSON data into Apache Avro. The Avro content is merged, and we send it to a central HDF 3.1 cluster that can write to HDFS. We can either stream to an ACID Hive table, or convert the Avro to Apache ORC, store it in HDFS, and autogenerate an external Hive table on top of it. You can find many examples of both of these processes in my links below. We could also insert into Apache HBase or an Apache Phoenix table, or do all of those and additionally send the data to Slack or email, store it in an RDBMS such as MySQL, and anything else you can think of.
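The server-side routing decision can be sketched in a few lines. This mirrors what the NiFi routing processors do in the flow above; the function name and the JPEG magic-byte check are my own illustration, not the article's actual processor configuration:

```python
import json


def route(content: bytes) -> str:
    """Decide whether an incoming flowfile holds an image or a JSON sensor record."""
    # JPEG files begin with the magic bytes FF D8; send those to image storage.
    if content[:2] == b"\xff\xd8":
        return "image"
    # Otherwise, try to parse the payload as JSON and send it down the Avro path.
    try:
        json.loads(content)
        return "json"
    except ValueError:
        return "failure"


print(route(b"\xff\xd8\xff\xe0..."))    # image
print(route(b'{"temperature": 22.5}'))  # json
```

In NiFi itself this branching is done declaratively with routing processors rather than code, but the logic is the same: images go to file storage, JSON goes on to schema application and Avro conversion.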

Generated Schema
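The generated schema is shown in the screenshot; as a rough guide, an Avro record schema for sensor readings like these would look something like the following (the record and field names here are illustrative, not the exact generated schema):

```json
{
  "type": "record",
  "name": "rpisensors",
  "fields": [
    {"name": "ts", "type": "string"},
    {"name": "temperature", "type": "double"},
    {"name": "humidity", "type": "double"},
    {"name": "pressure", "type": "double"},
    {"name": "top1", "type": "string"},
    {"name": "top1pct", "type": "double"}
  ]
}
```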



We are using the Apache MiniFi Java Agent 0.3.0. I will be adding a follow-up covering MiniFi 0.4.0 with the native C++ agent, TensorFlow, and the USB cam. See this awesome article for TensorFlow:


Source Code:

This is too easy!