Created on 12-18-2016 05:51 AM - edited 08-17-2019 07:17 AM
To send data to my firewall-protected internal Hadoop cluster, I have my remote Raspberry Pi 3 with an attached Sense HAT use a cron job to send MQTT messages to a cloud-hosted MQTT broker. NiFi then subscribes to that queue and pulls down the messages asynchronously. I use JSON as my packaging format, as it is very easy to work with and process in NiFi and elsewhere.
The Sense HAT attaches to your (non-Zero) Pi and provides a number of easy-to-access sensors, including temperature, humidity, pressure, orientation (pitch, roll, yaw), and accelerometer X, Y, Z readings. It also has a nice LED display that can be used to show graphical messages. For data-capture needs we can also grab standard Linux readings like CPU temperature, and you could add memory usage and disk space as well. Since the API to access the Sense HAT is Python, it makes sense to keep my access program in Python. So I have a small Python program that reads and formats the Sense HAT sensor values, puts them into a JSON document, and sends them up to my MQTT cloud broker. It's a very simple NiFi flow to ingest these values as they arrive. I had previously read these values via a REST API I wrote on the Pi using Flask, but that interface requires direct synchronous access from the cluster to the Pi, which is not usually possible. I would love to push from the Pi to NiFi, but again that would require a direct network connection. Having the asynchronous break is great for performance and also allows either party to be offline; the broker's queue holds the messages until we return.
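A minimal sketch of that Python program might look like the following, assuming the sense_hat and paho-mqtt packages are installed on the Pi. The broker host and topic name here are placeholders for your own cloud broker's settings:

```python
import json


def build_payload(readings):
    """Package sensor readings as a compact JSON document, rounding floats."""
    return json.dumps(
        {k: round(v, 1) if isinstance(v, float) else v for k, v in readings.items()}
    )


def read_and_publish():
    # Hardware-specific imports are kept here so build_payload stays
    # testable off the Pi. Requires the sense_hat and paho-mqtt packages.
    from sense_hat import SenseHat
    import paho.mqtt.client as mqtt

    sense = SenseHat()
    orientation = sense.get_orientation()       # degrees: pitch, roll, yaw
    accel = sense.get_accelerometer_raw()       # Gs: x, y, z
    payload = build_payload({
        "temp": sense.get_temperature(),        # Celsius
        "humidity": sense.get_humidity(),       # % relative humidity
        "pressure": sense.get_pressure(),       # millibars
        "pitch": orientation["pitch"],
        "roll": orientation["roll"],
        "yaw": orientation["yaw"],
        "x": accel["x"],
        "y": accel["y"],
        "z": accel["z"],
    })

    client = mqtt.Client()                      # paho-mqtt 1.x style client
    client.connect("broker.example.com", 1883)  # placeholder broker host
    client.publish("sensors/rpi", payload)      # placeholder topic name
    client.disconnect()


if __name__ == "__main__":
    read_and_publish()
```

Run this from cron on the Pi; each invocation publishes one JSON message that NiFi's MQTT consumer will later pull from the broker.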
My MQTT cloud provider has a free plan, and I am using that for this demo. They have a web UI where you can see statistics and information on the broker, messages, topics, and queues.
Once ingested, I pull out the fields I want from the JSON received via MQTT, format them into a SQL insert, and then call the PutSQL processor to upsert those values into HBase through the Phoenix layer. I use NiFi's UUID() expression-language function to generate a primary key, guaranteeing uniqueness for every row ingested.
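Phoenix uses UPSERT INTO in place of a standard INSERT, which is what gives PutSQL its upsert behavior here. The statement handed to PutSQL looks roughly like this (all values are illustrative; the key is the UUID NiFi generated):

```sql
UPSERT INTO sensor (sensorpk, cputemp, humidity, pressure, temp, tempf,
                    temph, tempp, pitch, roll, yaw, x, y, z)
VALUES ('<uuid-from-nifi>', '47.2', 41.5, 1013.2, 24.1, 75.4,
        24.3, 24.0, 0.5, 1.2, 179.8, 0.0, 0.0, 1.0);
```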
Once we have ingested data into Apache Phoenix, it is very easy to explore and graph the data in Apache Zeppelin notebooks via the jdbc(phoenix) interpreter.
CREATE TABLE sensor (
    sensorpk varchar not null primary key,
    cputemp varchar,
    humidity decimal(10,1),
    pressure decimal(10,1),
    temp decimal(5,1),
    tempf decimal(5,1),
    temph decimal(5,1),
    tempp decimal(5,1),
    pitch decimal(10,1),
    roll decimal(10,1),
    yaw decimal(10,1),
    x decimal(10,1),
    y decimal(10,1),
    z decimal(10,1)
);
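With the table in place, a Zeppelin paragraph against it is as simple as the following (the query itself is just an example; Zeppelin will chart the result set for you):

```sql
%jdbc(phoenix)
SELECT temp, tempf, humidity, pressure, pitch, roll, yaw
FROM sensor
LIMIT 100
```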
Please note that the sensors on the Sense HAT are not industrial grade or extremely accurate. For commercial purposes you will want more precise industrial sensors, battery backups, special casing, and higher-end devices.