02-13-2018
06:53 PM
I have posted an ExecuteSparkInteractive article
01-02-2018
05:19 PM
1 Kudo
Initial Login to ODroid XU4
GPS Command Line
Check WiFi Signal Strength
Install Ubuntu Extras

Above is our device with GPS. The ODroid XU4 is a powerful, inexpensive little IoT device. It is similar to a Raspberry Pi and compatible with many of the same libraries. I chose to install Armbian Ubuntu, which is a good OS. It is very easy to add GPS libraries, Apache MiniFi, JDK 8 and Python for processing.

Apache NiFi processing flow
Apache MiniFi Flow on Device
Apache NiFi Processing

See: https://community.hortonworks.com/content/kbentry/155604/iot-ingesting-camera-data-from-nanopi-duo-devices.html

Source Code
https://github.com/tspannhw/odroidxu4-gps-python-minifi

Software
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
armbian-config
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer -y
sudo apt-get install oracle-java8-set-default -y
blkid
swapon /dev/sda1
sudo apt-get install gpsd gpsd-clients
sudo dpkg-reconfigure gpsd
sudo gpsmon /dev/ttyACM0
sudo apt-get install python-gps
cgps -s

Resources
http://www.hardkernel.com/main/products/prdt_info.php?g_code=G148048570542
https://wiki.odroid.com/odroid-xu4/gettingstart/linux/install_linux
https://magazine.odroid.com
https://magazine.odroid.com/wp-content/uploads/odroid-xu4-user-manual.pdf#page=6
https://magazine.odroid.com/article/running-yolo-odroid-yolodroid/
http://machinethink.net/blog/object-detection-with-yolo/
https://www.armbian.com/odroid-xu4/
https://docs.armbian.com/
https://docs.armbian.com/User-Guide_Getting-Started/
https://forum.armbian.com/topic/1768-booting-into-command-line-on-opi-pc/
https://docs.armbian.com/User-Guide_Armbian-Config/
https://magazine.odroid.com/wp-content/uploads/ODROID-Magazine-201712.pdf
https://wiki.odroid.com/odroid-xu4/odroid-xu4
https://www.packtpub.com/books/content/working-webcam-and-pi-camera

Hardware
Armbian Ubuntu 16.04.3 LTS 4.9.61-odroidxu4
Two quad-core CPUs and a GPU
2GB LPDDR3
USB 3 Power Supply
Powered USB Hub
Active Cooling Fan
Stratus GPYes USB GPS Device

Example Run of GPS Device
root@odroidxu4:/opt/demo# ./gps2.sh
[{"utc": "2018-01-02T00:19:25.000Z", "epx": "25.829", "epv": "62.56", "diskfree": "54166.8 MB", "altitude": "41.844", "memory": 7.6, "eps": "1.43", "longitude": "-74.529222193", "ts": "2018-01-02 00:19:23", "ept": "0.005", "host": "odroidxu4", "track": "141.59", "mode": "3", "time": "2018-01-02T00:19:25.000Z", "latitude": "40.268098177", "climb": "-0.042", "ipaddress": "192.168.1.202", "cputemp": 53.0, "speed": "0.189"}]
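The payload above can be sketched in plain Python. This is a minimal illustration, not the real gps2.sh script: the fix values and host details are hard-coded stand-ins for what gpsd and psutil would supply on the device.

```python
import json

def build_gps_payload(fix, host, ipaddress, cputemp, memory, diskfree):
    """Assemble one telemetry record in the same shape as the example output.

    `fix` is a dict of GPS fields; on the device these would come from gpsd,
    and the system stats from psutil. All values here are illustrative.
    """
    record = dict(fix)
    record.update({
        "host": host,
        "ipaddress": ipaddress,
        "cputemp": cputemp,
        "memory": memory,
        "diskfree": diskfree,
    })
    # MiniFi picks up a JSON array, so wrap the single record in a list
    return json.dumps([record])

payload = build_gps_payload(
    fix={"latitude": "40.268098177", "longitude": "-74.529222193",
         "altitude": "41.844", "speed": "0.189", "mode": "3",
         "utc": "2018-01-02T00:19:25.000Z"},
    host="odroidxu4", ipaddress="192.168.1.202",
    cputemp=53.0, memory=7.6, diskfree="54166.8 MB")
print(payload)
```

On the real device the `fix` dict would be filled from a gpsd TPV report before each send.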
Software and Device Setup
We need to set up a swap area because there is not enough RAM for builds.
root@odroidxu4:~# sudo blkid
/dev/mmcblk1p1: UUID="7a81b72b-eee2-428f-9706-2aa0bd7e766f" TYPE="ext4" PARTUUID="7cbb190c-01"
/dev/zram7: UUID="794ef788-2696-4f56-97da-8c9e179be92d" TYPE="swap"
/dev/zram0: UUID="99613ecb-d41b-4dbe-a85c-2134800fcd8e" TYPE="swap"
/dev/zram1: UUID="57aaeb7c-b970-456d-a415-3e9151dc5fbf" TYPE="swap"
/dev/zram2: UUID="61a42b3a-e605-49e4-844b-467212228878" TYPE="swap"
/dev/zram3: UUID="7f7539df-3500-489b-990b-929d09896a42" TYPE="swap"
/dev/zram4: UUID="49d3a36a-5150-4c31-8acc-c77c512f8a31" TYPE="swap"
/dev/zram5: UUID="9ae98fa0-507d-415a-95d3-e24e9a4a8b2f" TYPE="swap"
/dev/zram6: UUID="4644d45f-dd3e-4bab-9507-1e28fe4b9d40" TYPE="swap"
/dev/mmcblk1: PTUUID="7cbb190c" PTTYPE="dos"
/dev/sda1: UUID="3C09-B728" TYPE="vfat"
root@odroidxu4:~# sudo mkswap /dev/sda1
mkswap: /dev/sda1: warning: wiping old vfat signature.
Setting up swapspace version 1, size = 114.6 GiB (123010527232 bytes)
no label, UUID=24360a18-0939-47b1-b383-27982d8a2007
root@odroidxu4:~# sudo swapon /dev/sda1
root@odroidxu4:~# sudo swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 124.6M 0B 5
/dev/zram1 partition 124.6M 0B 5
/dev/zram2 partition 124.6M 0B 5
/dev/zram3 partition 124.6M 0B 5
/dev/zram4 partition 124.6M 0B 5
/dev/zram5 partition 124.6M 0B 5
/dev/zram6 partition 124.6M 0B 5
/dev/zram7 partition 124.6M 0B 5
/dev/sda1 partition 114.6G 0B -1
sudo apt-get install gpsd gpsd-clients
sudo gpsmon /dev/ttyACM0
sudo cgps -s
sudo apt-get install python-gps
sudo dpkg-reconfigure gpsd
sudo pip install --upgrade pip
sudo pip install psutil
___ _ _ _ __ ___ _ _ _
/ _ \ __| |_ __ ___ (_) __| | \ \/ / | | | || |
| | | |/ _` | '__/ _ \| |/ _` | \ /| | | | || |_
| |_| | (_| | | | (_) | | (_| | / \| |_| |__ _|
\___/ \__,_|_| \___/|_|\__,_| /_/\_\___/ |_|
Welcome to ARMBIAN 5.36 user-built Ubuntu 16.04.3 LTS 4.9.61-odroidxu4
System load: 0.00 0.01 0.00 Up time: 10 min
Memory usage: 3 % of 1993MB IP: 192.168.1.202
CPU temp: 58°C
Usage of /: 6% of 57G
[ General system configuration (beta): armbian-config ]
Last login: Mon Jan 1 23:19:21 2018 from 192.168.1.193
pip install --upgrade pip setuptools wheel
pip install --upgrade psutil
01-02-2018
04:58 PM
2 Kudos
This is pretty close to a Raspberry Pi Zero. This inexpensive box is another useful IoT device for ingesting data very cheaply.

Machine Browsing
root@NanoPi-Duo:/home/pi# cpu_freq
INFO: HARDWARE=sun8i
CPU0 online=1 temp=48092 governor=ondemand cur_freq=312000
CPU1 online=1 temp=48092 governor=ondemand cur_freq=312000
CPU2 online=1 temp=48092 governor=ondemand cur_freq=312000
CPU3 online=1 temp=48092 governor=ondemand cur_freq=312000

Source Code for Python and Shell Script
https://github.com/tspannhw/nifi-nanopi-duo

Hardware
The FA-CAM202 is a 200M USB camera.
512MB RAM
Ubuntu 16.04.3 LTS 4.11.2
ARM SBC similar to a Raspberry Pi

Software Setup
sudo apt-get install fswebcam -y
sudo apt-get install libv4l-dev -y
sudo apt-get install python-opencv -y
sudo npi-config
sudo apt-get update
sudo apt-get install libcv-dev libopencv-dev -y
pip install psutil
pip2 install psutil
curl http://192.168.1.193:8080/nifi-api/site-to-site/ -v

inferred.avro.schema
{ "type" : "record", "name" : "NANO", "fields" : [ { "name" : "diskfree", "type" : "string", "doc" : "Type inferred from '\"23329.1 MB\"'" }, { "name" : "cputemp", "type" : "double", "doc" : "Type inferred from '55.0'" }, { "name" : "host", "type" : "string", "doc" : "Type inferred from '\"NanoPi-Duo\"'" }, { "name" : "endtime", "type" : "string", "doc" : "Type inferred from '\"2018-01-04 20:23:56\"'" }, { "name" : "ipaddress", "type" : "string", "doc" : "Type inferred from '\"192.168.1.191\"'" }, { "name" : "h", "type" : "int", "doc" : "Type inferred from '342'" }, { "name" : "ts", "type" : "string", "doc" : "Type inferred from '\"2018-01-04 20:23:43\"'" }, { "name" : "filename", "type" : "string", "doc" : "Type inferred from '\"/opt/demo/images/2018-01-04_2023.jpg.faces.jpg\"'" }, { "name" : "w", "type" : "int", "doc" : "Type inferred from '342'" }, { "name" : "memory", "type" : "double", "doc" : "Type inferred from '60.1'" }, { "name" : "y", "type" : "int", "doc" : "Type inferred from '264'" }, { "name" : "x", "type" : "int", "doc" : "Type inferred from '877'" } ] }

hive.ddl
CREATE EXTERNAL TABLE IF NOT EXISTS nanopi (diskfree STRING, cputemp DOUBLE, host STRING, endtime STRING, ipaddress STRING, h INT, ts STRING, filename STRING, w INT, memory DOUBLE, y INT, x INT) STORED AS ORC
LOCATION '/nano'

Apache NiFi
The flow in Apache NiFi is pretty simple. We receive the flowfiles from the remote Apache MiniFi box.
1. RouteOnAttribute: Send images to the file system, continue processing the JSON
2. AttributeCleanerProcessor: Clean up the attributes (not really needed for this dataset)
3. UpdateAttribute: Set the schema name to reference the registry
4. SplitJSON: Split the JSON into one array element per flowfile
5. ConvertRecord: Convert the JSON tree to Avro with an embedded schema
6. ConvertAvroToORC: Build an Apache ORC file (we could add a MergeContent step before this)
7. PutHDFS: Store in the Hadoop file system forever

Apache MiniFi
We have a simple flow in Apache MiniFi.
1. ExecuteProcess: Run a shell script to grab a timestamp-named image from the USB webcam, then call a Python script that does OpenCV face detection and adds some local variables to a JSON array with the facial bounding boxes.
2. GetFile: Retrieve all the images from the box and send them to Apache NiFi.

Example Data
[{"diskfree": "23445.5 MB", "cputemp": 56.7, "host": "NanoPi-Duo", "endtime": "2018-01-04 17:40:30", "ipaddress": "192.168.1.191", "h": 55, "ts": "2018-01-04 17:40:20", "filename": "/opt/demo/images/2018-01-04_1740.jpg.faces.jpg", "w": 55, "memory": 22.6, "y": 471, "x": 270}, {"diskfree": "23445.5 MB", "cputemp": 56.7, "host": "NanoPi-Duo", "endtime": "2018-01-04 17:40:30", "ipaddress": "192.168.1.191", "h": 67, "ts": "2018-01-04 17:40:20", "filename": "/opt/demo/images/2018-01-04_1740.jpg.faces.jpg", "w": 67, "memory": 22.6, "y": 625, "x": 464}]

Resources
http://wiki.friendlyarm.com/wiki/index.php/NanoPi_Duo
https://diyprojects.io/orange-pi-armbian-control-camera-python-opencv/#.Wku2kFQ-f1J
https://www.armbian.com/nanopi-duo/
https://github.com/opencv/opencv.git
https://github.com/informramiz/Face-Detection-OpenCV
https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/
https://realpython.com/blog/python/face-recognition-with-python/

Face
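The per-face JSON records in the example data can be sketched without a camera or OpenCV installed. This is a simplified illustration: the rectangles below are hard-coded stand-ins for what cv2's detectMultiScale would return on the device.

```python
import json

def faces_to_records(rects, stats, filename):
    """Turn (x, y, w, h) face rectangles plus shared system stats into
    one JSON record per detected face, matching the example data above."""
    records = []
    for (x, y, w, h) in rects:
        row = dict(stats)  # shared fields: host, ts, memory, ...
        row.update({"x": x, "y": y, "w": w, "h": h, "filename": filename})
        records.append(row)
    return json.dumps(records)

stats = {"diskfree": "23445.5 MB", "cputemp": 56.7, "host": "NanoPi-Duo",
         "ipaddress": "192.168.1.191", "ts": "2018-01-04 17:40:20",
         "memory": 22.6}
# Two fake detections standing in for OpenCV cascade-classifier output
output = faces_to_records(
    [(270, 471, 55, 55), (464, 625, 67, 67)], stats,
    "/opt/demo/images/2018-01-04_1740.jpg.faces.jpg")
print(output)
```

SplitJSON in the NiFi flow then splits this array so each face becomes its own flowfile.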
05-16-2018
06:48 PM
This can happen if spark1 and spark2 are both running on the same node. Try killing the process, then delete the service and add it to a separate node. That should work.
12-30-2017
12:05 AM
3 Kudos
Oracle -> GoldenGate -> Apache Kafka -> Apache NiFi / Hortonworks Schema Registry -> JDBC Database
Sometimes you need to process any number of table changes sent from tools via Apache Kafka. As long as they have proper header data and records in JSON, it's really easy in Apache NiFi.
Requirements:
Process each partition separately.
Process records in order, as each message is an insert, update or delete to an existing table in our receiving JDBC store.
Re-process if data is lost.
The main routing processor must run only on the Primary Node.
Enforcing Order
We use the kafka.offset to order the records, which makes sense for Apache Kafka topics.
After the insert, update and delete queries are built, let's confirm and enforce that strict ordering.
To further guarantee in-order processing, we set each connection in the flow to use the FirstInFirstOutPrioritizer.
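The ordering guarantee can be illustrated outside NiFi. This is a plain-Python sketch, not NiFi code: it replays a buffer of records strictly by partition and kafka.offset, which is the same property the FirstInFirstOutPrioritizer preserves inside the flow.

```python
def replay_in_order(records):
    """Sort buffered Kafka records by (partition, offset) so that the
    insert/update/delete for each table replays in commit order."""
    return sorted(records,
                  key=lambda r: (r["kafka.partition"], r["kafka.offset"]))

buffered = [
    {"kafka.partition": 0, "kafka.offset": 7, "op_type": "U"},
    {"kafka.partition": 0, "kafka.offset": 5, "op_type": "I"},
    {"kafka.partition": 1, "kafka.offset": 2, "op_type": "D"},
    {"kafka.partition": 0, "kafka.offset": 6, "op_type": "U"},
]
ordered = replay_in_order(buffered)
print([r["kafka.offset"] for r in ordered])  # → [5, 6, 7, 2]
```

Within a single partition the offsets are monotonic, which is why routing each partition to its own process group keeps ordering simple.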
We Route Each Partition to a Different Process Group (One Local, the Other Remote)
Let's Store Some Data in HDFS for each Table
Connect To Kafka and Grab From our Topic
Let's Connect to our JDBC Store
Let's do an Update (Table Name is Dynamic)
The Jolt Processor has an awesome tester for trying out Jolt
Make sure we connect our remote partitions
Routing From Routing Server (Primary Node)
For Processing Partition 0 (Run on the Routing Server)
We infer the schema with InferAvroSchema, so we don't need to know the embedded table layouts before a record arrives. In production it makes sense to know all of these in advance, run integration tests and version the schemas. This is where the Hortonworks Schema Registry is awesome. We name the Avro record after the table dynamically, and we can store a permanent schema in the Hortonworks Schema Registry.
Process The Next Partition 1 .. (We can have one server or cluster per partition)
Process the Partition 1 Kafka Records from the Topic
This Flow Will Convert Our Embedded JSON Table Record into New SQL
Input: {"ID":2001,"GD":"F","DPTID":2,"FIRSTNAME":"Tim","LAST":"Spann"}
Output: INSERT INTO THETABLE (ID, GD, DPTID, FIRSTNAME, LAST) VALUES (?, ?, ?, ?, ?)
sql.args.5.value Spann
sql.table THETABLE
With all of the fields passed as parameters, we get a SQL-injection-safe, parameter-based insert, update or delete, chosen by the control fields sent.
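The input/output example above can be mimicked in a few lines of plain Python. This is a hedged sketch of what ConvertJSONtoSQL produces (the processor itself does much more): it builds the parameterized statement and the sql.args.N.value attributes, and strips the schema prefix the way ${table:substringAfter('.')} does.

```python
def record_to_insert(table, record):
    """Build a parameterized INSERT plus NiFi-style sql.args attributes.

    `table` arrives as SCHEMA.TABLE; like ${table:substringAfter('.')} we
    keep only the part after the first dot. Values are bound to ?
    placeholders, never concatenated, so the statement is injection-safe.
    """
    table_name = table.split(".", 1)[-1]
    cols = list(record.keys())
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table_name, ", ".join(cols), ", ".join("?" * len(cols)))
    attrs = {"sql.table": table_name}
    for i, col in enumerate(cols, start=1):
        attrs["sql.args.%d.value" % i] = record[col]
    return sql, attrs

sql, attrs = record_to_insert(
    "SCHEMA1.THETABLE",
    {"ID": 2001, "GD": "F", "DPTID": 2, "FIRSTNAME": "Tim", "LAST": "Spann"})
print(sql)
print(attrs["sql.args.5.value"])
```

PutSQL then binds each sql.args.N.value to its placeholder when executing against the destination JDBC database.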
Golden Gate Messages
{"table": "SCHEMA1.TABLE7","op_type": "I","op_ts": "2017-11-01 04:31:56.000000","current_ts": "2017-11-01T04:32:04.754000","pos": "00000000310000020884","after": {"ID":1,"CODE": "B","NAME":"STUFF","DESCR" :"Department","ACTIVE":1}}
Using a simple EvaluateJsonPath we pull out these control fields, example: $.before.
The table name for ConvertJSONtoSQL is ${table:substringAfter('.')}, which strips the leading schema/tablespace name. In the drop-down for each of the three processors we pick UPDATE, INSERT or DELETE based on the op_type.
We follow this with a PutSQL which will execute on our destination JDBC database sink.
After that I collect all the attributes convert them to a JSON flowfile and save that to HDFS for logging and reporting. This step could be skipped or could be in another format or sent elsewhere.
Control Fields
pos: position
table: table to update in the data warehouse
current_ts: time stamp
op_ts: time stamp
op_type: operation type (I – insert, U – update, D – delete)
Important Apache NiFi System Fields
kafka.offset
kafka.partition
kafka.topic
We can Route and process these for special handling.
To Create HDFS Directories for Changes
su hdfs
hdfs dfs -mkdir -p /new/T1
hdfs dfs -mkdir -p /new/T2
hdfs dfs -mkdir -p /poc/T3
hdfs dfs -chmod -R 777 /new
hdfs dfs -ls -R /new
To Create a Test Apache Kafka Topic
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic goldengate
Creating a MYSQL Database As Recipient JDBC Server
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz
mysql
create database mydw;
CREATE USER 'nifi'@'%' IDENTIFIED BY 'MyPassWordIsSoAwesome!!!!';
GRANT ALL PRIVILEGES ON *.* TO 'nifi'@'%' WITH GRANT OPTION;
commit;
SHOW GRANTS FOR 'nifi'@'%';
#Create some tables in the database for your records.
create table ALOG (
AID VARCHAR(1),
TIMESEC INT,
SOMEVAL VARCHAR(255),
PRIMARY KEY (AID, TIMESEC)
);
Jolt Filter
Attribute: afterJolt
${op_type:equalsIgnoreCase("D"):ifElse("none", "after")}
Attribute: beforeJolt
${op_type:equalsIgnoreCase("D"):ifElse("before", "none")}
Jolt Script to Transform JSON
[ {
"operation": "shift",
"spec": {
"${beforeJolt}": {
"*": "&"
},
"${afterJolt}": {
"*": "&"
}
}
}, {
"operation": "shift",
"spec": {
"*": "&"
}
} ]
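The intent of the Jolt script can be sketched in Python. This is a simplified stand-in for the two-stage shift, not a Jolt implementation: the beforeJolt/afterJolt attribute expressions choose which subtree survives (deletes keep `before`, everything else keeps `after`), and its fields are lifted to the top level.

```python
def jolt_like_shift(message):
    """Pick the surviving subtree based on op_type, as the beforeJolt and
    afterJolt attribute expressions do, then hoist its fields to the top."""
    source = "before" if message["op_type"].upper() == "D" else "after"
    return dict(message.get(source, {}))

msg = {"table": "SCHEMA1.TABLE7", "op_type": "I",
       "after": {"ID": 1, "CODE": "B", "NAME": "STUFF",
                 "DESCR": "Department", "ACTIVE": 1}}
flat = jolt_like_shift(msg)
print(flat)
```

This flattened record is what then feeds ConvertJSONtoSQL downstream.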
Primary Node Flow Template
primarynode.xml
Partition X Node Flow Template
remotenode.xml
References:
https://www.xenonstack.com/blog/data-ingestion-using-apache-nifi-for-building-data-lakes-twitter-data
https://community.hortonworks.com/questions/107536/nifi-uneven-distribution-consumekafka.html
https://community.hortonworks.com/articles/57262/integrating-apache-nifi-and-apache-kafka.html
https://community.hortonworks.com/articles/80284/hdf-2x-adding-a-new-nifi-node-to-an-existing-secur.html
https://dev.mysql.com/downloads/connector/j/
12-29-2017
07:23 PM
2 Kudos
Processing voice data using a Raspberry Pi 3 with the Google AIY Voice Kit is an easy way to control data flows in an organization. The AIY Voice Kit is a good way to prototype voice ingest. It is not perfect, but for low-end, inexpensive hardware it is a good solution for those who speak clearly and are willing to repeat a phrase a few times for activation. I was able to easily add a word for it to react to. I also have it waiting for someone to press the button to activate.

Setup and Run Steps
1. Install box
2. Setup GCP Account
3. As Pi User
~/bin/AIY-voice-kit-shell.sh
src/assistant_library_demo.py
4. Download and Unzip Apache MiniFi
5. Custom Action
controlx
6. Run it
pi@raspberrypi:~/AIY-voice-kit-python/src $ ./controlx.py
7. Start MiniFi
8. Apache MiniFi will tail the command file and look for audio files

Code Added to Google Example

elif event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED and event.args:
    print('You said:', event.args['text'])
    text = event.args['text'].lower()
    f = open("/opt/demo/commands.txt", "a+")
    f.write(text)
    f.close()
    if 'process' in text:
        self._assistant.stop_conversation()
        self._say_ip()
    elif 'computer' in text:
        self._assistant.stop_conversation()
        self._say_ip()
    elif text == 'ip address':
        self._assistant.stop_conversation()
        self._say_ip()

def _say_ip(self):
    ip_address = subprocess.check_output("hostname -I | cut -d' ' -f1", shell=True)
    f = open("/opt/demo/commands.txt", "a+")
    f.write('My IP address is %s' % ip_address.decode('utf-8'))
    f.close()
    # aiy.audio.say('My IP address is %s' % ip_address.decode('utf-8'))
    # aiy.audio.say('Turning on LED')
    # voice_name = '/opt/demo/voices/voice_{0}.wav'.format(strftime("%Y%m%d%H%M%S", gmtime()))
    # aiy.audio.record_to_wave(voice_name, 5)
    aiy.voicehat.get_led().set_state(aiy.voicehat.LED.BLINK)
    aiy.audio.say('MiniFy')
    print('My IP address is %s' % ip_address.decode('utf-8'))

Apache NiFi Ingest Flow
Flow File for a Sound File
Flow File For Commands
Apache MiniFi Ingest
Apache NiFi Overview
Text Command Processing
AIYVOICE Schema in Hortonworks Schema Registry

Source Code: https://github.com/tspannhw/nifi-googleaiy

Results: An example of text deciphered from my speaking into the AIY Voice Kit.
what was the process that we could try that a different way right now when was text put it in the file My IP address is 192.168.1.199

The Recorded Audio Files Sent
../images/voice_20171229195641.wav
../images/voice_20171229202456.wav
../images/voice_20171229203028.wav

Inferred Schema
inferred.avro.schema
{ "type" : "record", "name" : "aiyvoice", "fields" : [ { "name" : "schema", "type" : "string", "doc" : "Type inferred from '\"aiyvoice\"'" }, { "name" : "path", "type" : "string", "doc" : "Type inferred from '\"./\"'" }, { "name" : "schemaname", "type" : "string", "doc" : "Type inferred from '\"aiyvoice\"'" }, { "name" : "commands0", "type" : "string", "doc" : "Type inferred from '\"process process process process process process process process processMy IP address is 192.168.1.199\n\"'" }, { "name" : "filename", "type" : "string", "doc" : "Type inferred from '\"commands.3088-3190.txt\"'" }, { "name" : "ssaddress", "type" : "string", "doc" : "Type inferred from '\"192.168.1.199:60896\"'" }, { "name" : "receivedFrom", "type" : "string", "doc" : "Type inferred from '\"AIY_GOOGLE_VOICE_PI\"'" }, { "name" : "sshost", "type" : "string", "doc" : "Type inferred from '\"192.168.1.199\"'" }, { "name" : "mimetype", "type" : "string", "doc" : "Type inferred from '\"text/plain\"'" }, { "name" : "tailfileoriginalpath", "type" : "string", "doc" : "Type inferred from '\"/opt/demo/commands.txt\"'" }, { "name" : "uuid", "type" : "string", "doc" : "Type inferred from '\"76232a9a-baf1-4f6b-9707-6d532653bbe8\"'" }, { "name" : "RouteOnAttributeRoute", "type" : "string", "doc" : "Type inferred from '\"commands\"'" } ] }

Apache NiFi Created Apache Hive DDL
hive.ddl
CREATE EXTERNAL TABLE IF NOT EXISTS aiyvoice (path STRING, filename STRING, commands0 STRING, ssaddress STRING, receivedFrom STRING, sshost STRING, mimetype STRING, tailfileoriginalpath STRING, uuid STRING, RouteOnAttributeRoute STRING) STORED AS ORC

Resources:
https://github.com/google/aiyprojects-raspbian/blob/master/src/action.py
https://github.com/google/aiyprojects-raspbian/blob/aiyprojects/HACKING.md
https://aiyprojects.withgoogle.com/voice/#project-overview
https://projects.raspberrypi.org/en/projects/google-voice-aiy
https://github.com/googlesamples/assistant-sdk-python
https://developers.google.com/assistant/sdk/guides/library/python/embed/run-sample
http://ktinkerer.co.uk/play-bbc-radio-raspberry-pi-aiy-project/
https://github.com/ktinkerer/aiyprojects-raspbian/tree/radio_player/src
https://github.com/google/aiyprojects-raspbian
https://github.com/n8kowald/raspi-voice-actions/blob/master/action.py
https://github.com/n8kowald/raspi-voice-actions
https://medium.com/@aallan/a-magic-mirror-powered-by-aiy-projects-and-the-raspberry-pi-e6a0fea3b4d6
12-28-2017
06:51 PM
3 Kudos
Adding a Movidius NCS to a Raspberry Pi is a great way to augment your deep learning edge programming.
Hardware Setup
Raspberry Pi 3
Movidius NCS
PS 3 Eye Camera
128GB USB Stick
Software Setup
Standard NCS Setup
Python 3
sudo apt-get update
sudo apt-get install python-pip python-dev
sudo apt-get install python3-pip python3-dev
sudo apt-get install fswebcam
sudo apt-get install curl wget unzip
sudo apt-get install oracle-java8-jdk
sudo apt-get install sense-hat
sudo apt-get install octave -y
sudo apt install python3-gst-1.0
pip3 install psutil
Apache MiniFi https://www.apache.org/dyn/closer.lua?path=/nifi/minifi/0.3.0/minifi-0.3.0-bin.zip
git clone https://github.com/movidius/ncsdk.git
git clone https://github.com/movidius/ncappzoo.git
Run Example
~/workspace/ncappzoo/apps/image-classifier
Shell Script
/root/workspace/ncappzoo/apps/image-classifier/run.sh
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
fswebcam -r 1280x720 --no-banner /opt/demo/images/$DATE.jpg
python3 -W ignore image-classifier2.py /opt/demo/images/$DATE.jpg
Forked Python Code
I added some extra variables I like to send from IoT devices (time, ip address, CPU temp) for flows and formatted as JSON.
#!/usr/bin/python3
# ****************************************************************************
# Copyright(c) 2017 Intel Corporation.
# License: MIT See LICENSE file in root directory.
# ****************************************************************************
# How to classify images using DNNs on Intel Neural Compute Stick (NCS)
import mvnc.mvncapi as mvnc
import skimage
from skimage import io, transform
import numpy
import os
import sys
import json
import time
import sys, socket
import subprocess
import datetime
from time import sleep
from time import gmtime, strftime
starttime= strftime("%Y-%m-%d %H:%M:%S",gmtime())
# User modifiable input parameters
NCAPPZOO_PATH = os.path.expanduser( '~/workspace/ncappzoo' )
GRAPH_PATH = NCAPPZOO_PATH + '/caffe/GoogLeNet/graph'
IMAGE_PATH = sys.argv[1]
LABELS_FILE_PATH = NCAPPZOO_PATH + '/data/ilsvrc12/synset_words.txt'
IMAGE_MEAN = [ 104.00698793, 116.66876762, 122.67891434]
IMAGE_STDDEV = 1
IMAGE_DIM = ( 224, 224 )
# ---- Step 1: Open the enumerated device and get a handle to it -------------
# Look for enumerated NCS device(s); quit program if none found.
devices = mvnc.EnumerateDevices()
if len( devices ) == 0:
print( 'No devices found' )
quit()
# Get a handle to the first enumerated device and open it
device = mvnc.Device( devices[0] )
device.OpenDevice()
# ---- Step 2: Load a graph file onto the NCS device -------------------------
# Read the graph file into a buffer
with open( GRAPH_PATH, mode='rb' ) as f:
blob = f.read()
# Load the graph buffer into the NCS
graph = device.AllocateGraph( blob )
# ---- Step 3: Offload image onto the NCS to run inference -------------------
# Read & resize image [Image size is defined during training]
img = print_img = skimage.io.imread( IMAGE_PATH )
img = skimage.transform.resize( img, IMAGE_DIM, preserve_range=True )
# Convert RGB to BGR [skimage reads image in RGB, but Caffe uses BGR]
img = img[:, :, ::-1]
# Mean subtraction & scaling [A common technique used to center the data]
img = img.astype( numpy.float32 )
img = ( img - IMAGE_MEAN ) * IMAGE_STDDEV
# Load the image as a half-precision floating point array
graph.LoadTensor( img.astype( numpy.float16 ), 'user object' )
# ---- Step 4: Read & print inference results from the NCS -------------------
# Get the results from NCS
output, userobj = graph.GetResult()
labels = numpy.loadtxt( LABELS_FILE_PATH, str, delimiter = '\t' )
order = output.argsort()[::-1][:6]
#### Initialization
external_IP_and_port = ('198.41.0.4', 53) # a.root-servers.net
socket_family = socket.AF_INET
host = os.uname()[1]
def getCPUtemperature():
res = os.popen('vcgencmd measure_temp').readline()
return(res.replace("temp=","").replace("'C\n",""))
def IP_address():
try:
s = socket.socket(socket_family, socket.SOCK_DGRAM)
s.connect(external_IP_and_port)
answer = s.getsockname()
s.close()
return answer[0] if answer else None
except socket.error:
return None
cpuTemp=int(float(getCPUtemperature()))
ipaddress = IP_address()
# yyyy-mm-dd hh:mm:ss
currenttime= strftime("%Y-%m-%d %H:%M:%S",gmtime())
row = { 'label1': labels[order[0]], 'label2': labels[order[1]], 'label3': labels[order[2]], 'label4': labels[order[3]], 'label5': labels[order[4]], 'currenttime': currenttime, 'host': host, 'cputemp': round(cpuTemp,2), 'ipaddress': ipaddress, 'starttime': starttime }
json_string = json.dumps(row)
print(json_string)
# ---- Step 5: Unload the graph and close the device -------------------------
graph.DeallocateGraph()
device.CloseDevice()
# ==== End of file ===========================================================
This same device also has a Sense-Hat, so we grab those readings as well.
Then I combined the two sets of code to produce a row of Sense-Hat sensor data and NCS AI results.
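Combining the two result sets is essentially a dictionary merge. This is a minimal sketch with made-up values, since the real numbers come from the Sense-Hat API and the NCS classification labels.

```python
import json

def combine_rows(ncs_row, sensehat_row):
    """Merge the NCS classification labels and the Sense-Hat sensor
    readings into one flat JSON row, as in the example output below."""
    row = dict(ncs_row)
    row.update(sensehat_row)
    return row

ncs_row = {"label1": "b'n03794056 mousetrap'", "host": "sensehatmovidius",
           "currenttime": "2017-12-28 19:09:32"}
sensehat_row = {"humidity": 17.2, "pitch": 3.0, "roll": 94.0,
                "temp": 34.9, "pressure": 1032.7}
combined = combine_rows(ncs_row, sensehat_row)
print(json.dumps(combined))
```

Keeping the row flat means the downstream InferAvroSchema and ConvertAvroToORC steps see one simple record per reading.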
Source Code
https://github.com/tspannhw/rpi-minifi-movidius-sensehat/tree/master
Example Output JSON
{"label1": "b'n03794056 mousetrap'", "humidity": 17.2, "roll": 94.0, "yaw": 225.0, "label4": "b'n03127925 crate'", "cputemp2": 49.39, "label3": "b'n02091134 whippet'", "memory": 16.2, "z": -0.0, "diskfree": "15866.3 MB", "cputemp": 50, "pitch": 3.0, "temp": 34.9, "y": 1.0, "currenttime": "2017-12-28 19:09:32", "starttime": "2017-12-28 19:09:29", "tempf": 74.82, "label5": "b'n04265275 space heater'", "host": "sensehatmovidius", "ipaddress": "192.168.1.156", "pressure": 1032.7, "label2": "b'n02091032 Italian greyhound'", "x": -0.0}
The current version of Apache MiniFi is 0.3.0.
Make sure you download the 0.3.0 version of the Toolkit to convert the template created on your Apache NiFi screen into a proper config.yml. This is compatible with Apache NiFi 1.4.
MiNiFi (Java) Version 0.3.0 Release Date: 2017 December 22 (https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Version0.3.0)
To Build Your MiniFi YAML
buildconfig.sh YourConfigFileName.xml
minifi-toolkit-0.3.0/bin/config.sh transform $1 config.yml
scp config.yml pi@192.168.1.156:/opt/demo/minifi-0.3.0/conf/
MiNiFi Config Version: 3
Flow Controller:
name: Movidius MiniFi
comment: ''
Core Properties:
flow controller graceful shutdown period: 10 sec
flow service write delay interval: 500 ms
administrative yield duration: 30 sec
bored yield duration: 10 millis
max concurrent threads: 1
variable registry properties: ''
FlowFile Repository:
partitions: 256
checkpoint interval: 2 mins
always sync: false
Swap:
threshold: 20000
in period: 5 sec
in threads: 1
out period: 5 sec
out threads: 4
Content Repository:
content claim max appendable size: 10 MB
content claim max flow files: 100
always sync: false
Provenance Repository:
provenance rollover time: 1 min
implementation: org.apache.nifi.provenance.MiNiFiPersistentProvenanceRepository
Component Status Repository:
buffer size: 1440
snapshot frequency: 1 min
Security Properties:
keystore: ''
keystore type: ''
keystore password: ''
key password: ''
truststore: ''
truststore type: ''
truststore password: ''
ssl protocol: ''
Sensitive Props:
key:
algorithm: PBEWITHMD5AND256BITAES-CBC-OPENSSL
provider: BC
Processors: []
Controller Services: []
Process Groups:
- id: 9fdb6f99-2e9f-3d86-0000-000000000000
name: Movidius MiniFi
Processors:
- id: ceab4537-43cb-31c8-0000-000000000000
name: Capture Photo and NCS Image Classifier
class: org.apache.nifi.processors.standard.ExecuteProcess
max concurrent tasks: 1
scheduling strategy: TIMER_DRIVEN
scheduling period: 60 sec
penalization period: 30 sec
yield period: 1 sec
run duration nanos: 0
auto-terminated relationships list: []
Properties:
Argument Delimiter: ' '
Batch Duration:
Command: /root/workspace/ncappzoo/apps/image-classifier/run.sh
Command Arguments:
Redirect Error Stream: 'false'
Working Directory:
- id: 9fbeac0f-1c3f-3535-0000-000000000000
name: GetFile
class: org.apache.nifi.processors.standard.GetFile
max concurrent tasks: 1
scheduling strategy: TIMER_DRIVEN
scheduling period: 0 sec
penalization period: 30 sec
yield period: 1 sec
run duration nanos: 0
auto-terminated relationships list: []
Properties:
Batch Size: '10'
File Filter: '[^\.].*'
Ignore Hidden Files: 'false'
Input Directory: /opt/demo/images/
Keep Source File: 'false'
Maximum File Age:
Maximum File Size:
Minimum File Age: 5 sec
Minimum File Size: 10 B
Path Filter:
Polling Interval: 0 sec
Recurse Subdirectories: 'true'
Controller Services: []
Process Groups: []
Input Ports: []
Output Ports: []
Funnels: []
Connections:
- id: 1bd103a9-09dc-3121-0000-000000000000
name: Capture Photo and NCS Image Classifier/success/0d6447ee-be36-3ec4-b896-1d57e374580f
source id: ceab4537-43cb-31c8-0000-000000000000
source relationship names:
- success
destination id: 0d6447ee-be36-3ec4-b896-1d57e374580f
max work queue size: 10000
max work queue data size: 1 GB
flowfile expiration: 0 sec
queue prioritizer class: ''
- id: 3cdac0a3-aabf-3a15-0000-000000000000
name: GetFile/success/0d6447ee-be36-3ec4-b896-1d57e374580f
source id: 9fbeac0f-1c3f-3535-0000-000000000000
source relationship names:
- success
destination id: 0d6447ee-be36-3ec4-b896-1d57e374580f
max work queue size: 10000
max work queue data size: 1 GB
flowfile expiration: 0 sec
queue prioritizer class: ''
Remote Process Groups:
- id: c8b85dc1-0dfb-3f01-0000-000000000000
name: ''
url: http://hostname.local:8080/nifi
comment: ''
timeout: 60 sec
yield period: 10 sec
transport protocol: HTTP
proxy host: ''
proxy port: ''
proxy user: ''
proxy password: ''
local network interface: ''
Input Ports:
- id: befe3bd8-b4ac-326c-becf-fc448e4a8ff6
name: MiniFi From TX1 Jetson
comment: ''
max concurrent tasks: 1
use compression: false
- id: 0d6447ee-be36-3ec4-b896-1d57e374580f
name: Movidius Input
comment: ''
max concurrent tasks: 1
use compression: false
- id: e43fd055-eb0c-37b2-ad38-91aeea9380cd
name: ChristmasTreeInput
comment: ''
max concurrent tasks: 1
use compression: false
- id: 4689a182-d7e1-3371-bef2-24e102ed9350
name: From ADP Remote Partition 1
comment: ''
max concurrent tasks: 1
use compression: false
Output Ports: []
Input Ports: []
Output Ports: []
Funnels: []
Connections: []
Remote Process Groups: []
NiFi Properties Overrides: {}
Start Apache MiniFi on Your RPI Box
root@sensehatmovidius:/opt/demo/minifi-0.3.0/logs# ../bin/minifi.sh start
...1686],offset=0,name=2017-12-28_1452.jpg,size=531686], StandardFlowFileRecord[uuid=ca84bb5f-b9ff-40f2-b5f3-ece5278df9c2,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514490624975-1, container=default, section=1], offset=1592409, length=531879],offset=0,name=2017-12-28_1453.jpg,size=531879], StandardFlowFileRecord[uuid=6201c682-967b-4922-89fb-d25abbc2fe82,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514490624975-1, container=default, section=1], offset=2124819, length=527188],offset=0,name=2017-12-28_1455.jpg,size=527188]] (2.53 MB) to http://hostname.local:8080/nifi-api in 639 milliseconds at a rate of 3.96 MB/sec
2017-12-28 15:08:35,588 INFO [Timer-Driven Process Thread-8] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=Movidius Input,targets=http://hostname.local:8080/nifi] Successfully sent [StandardFlowFileRecord[uuid=670a2494-5c60-4551-9a59-a1859e794cfe,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514490883824-2, container=default, section=2], offset=530145, length=531289],offset=0,name=2017-12-28_1456.jpg,size=531289], StandardFlowFileRecord[uuid=39cad0fd-ccba-4386-840d-758b39daf48e,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514491306665-2, container=default, section=2], offset=0, length=504600],offset=0,name=2017-12-28_1501.jpg,size=504600], StandardFlowFileRecord[uuid=3c2d4a2b-a10b-4a92-8c8b-ab73e6960670,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514491298789-1, container=default, section=1], offset=0, length=527543],offset=0,name=2017-12-28_1502.jpg,size=527543], StandardFlowFileRecord[uuid=92cbfd2b-e94a-4854-b874-ef575dcf3905,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514490883824-2, container=default, section=2], offset=0, length=529618],offset=0,name=2017-12-28_1454.jpg,size=529618]] (2 MB) to http://hostname.local:8080/nifi-api in 439 milliseconds at a rate of 4.54 MB/sec
2017-12-28 15:08:41,341 INFO [Timer-Driven Process Thread-2] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=Movidius Input,targets=http://hostname.local:8080/nifi] Successfully sent [StandardFlowFileRecord[uuid=3fb7ca68-3e10-40c6-8f37-49532134689d,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1514491710195-1, container=default, section=1], offset=0, length=567],offset=0,name=21532511724407,size=567]] (567 bytes) to http://hostname.local:8080/nifi-api in 113 milliseconds at a rate of 4.88 KB/sec
Our Apache MiniFi flow sends that data via HTTP to our Apache NiFi server, which processes it and converts it into standard Apache ORC files in HDFS for Apache Hive queries.
Using InferAvroSchema, we build a schema and upload it to the Hortonworks Schema Registry for processing.
Schema
{
"type": "record",
"name": "movidiussense",
"fields": [
{
"name": "cputemp2",
"type": "double",
"doc": "Type inferred from '54.77'"
},
{
"name": "pitch",
"type": "double",
"doc": "Type inferred from '4.0'"
},
{
"name": "label5",
"type": "string",
"doc": "Type inferred from '\"b'n04238763 slide rule, slipstick'\"'"
},
{
"name": "starttime",
"type": "string",
"doc": "Type inferred from '\"2017-12-28 20:08:37\"'"
},
{
"name": "label2",
"type": "string",
"doc": "Type inferred from '\"b'n02871525 bookshop, bookstore, bookstall'\"'"
},
{
"name": "y",
"type": "double",
"doc": "Type inferred from '1.0'"
},
{
"name": "currenttime",
"type": "string",
"doc": "Type inferred from '\"2017-12-28 20:08:39\"'"
},
{
"name": "x",
"type": "double",
"doc": "Type inferred from '-0.0'"
},
{
"name": "label4",
"type": "string",
"doc": "Type inferred from '\"b'n03482405 hamper'\"'"
},
{
"name": "roll",
"type": "double",
"doc": "Type inferred from '94.0'"
},
{
"name": "temp",
"type": "double",
"doc": "Type inferred from '35.31'"
},
{
"name": "host",
"type": "string",
"doc": "Type inferred from '\"sensehatmovidius\"'"
},
{
"name": "diskfree",
"type": "string",
"doc": "Type inferred from '\"15817.0 MB\"'"
},
{
"name": "memory",
"type": "double",
"doc": "Type inferred from '39.2'"
},
{
"name": "label3",
"type": "string",
"doc": "Type inferred from '\"b'n02666196 abacus'\"'"
},
{
"name": "label1",
"type": "string",
"doc": "Type inferred from '\"b'n02727426 apiary, bee house'\"'"
},
{
"name": "yaw",
"type": "double",
"doc": "Type inferred from '225.0'"
},
{
"name": "cputemp",
"type": "int",
"doc": "Type inferred from '55'"
},
{
"name": "humidity",
"type": "double",
"doc": "Type inferred from '17.2'"
},
{
"name": "tempf",
"type": "double",
"doc": "Type inferred from '75.56'"
},
{
"name": "z",
"type": "double",
"doc": "Type inferred from '-0.0'"
},
{
"name": "pressure",
"type": "double",
"doc": "Type inferred from '1033.2'"
},
{
"name": "ipaddress",
"type": "string",
"doc": "Type inferred from '\"192.168.1.156\"'"
}
]
}
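Before a record reaches the flow, it is easy to sanity-check it against the Avro schema's primitive types. This is a hypothetical standalone sketch (the SCHEMA below is a trimmed three-field subset of the schema above), not part of the NiFi flow itself:

```python
import json

# Trimmed subset of the movidiussense Avro schema shown above
SCHEMA = json.loads("""
{"type": "record", "name": "movidiussense",
 "fields": [{"name": "cputemp2", "type": "double"},
            {"name": "host", "type": "string"},
            {"name": "cputemp", "type": "int"}]}
""")

# Map Avro primitive types to the Python types we expect in a decoded record
AVRO_TO_PY = {"double": float, "string": str, "int": int}

def validate(record, schema):
    """Return a list of (field, problem) tuples; an empty list means valid."""
    problems = []
    for field in schema["fields"]:
        name, ftype = field["name"], field["type"]
        if name not in record:
            problems.append((name, "missing"))
        elif not isinstance(record[name], AVRO_TO_PY[ftype]):
            problems.append((name, "wrong type"))
    return problems

sample = {"cputemp2": 54.77, "host": "sensehatmovidius", "cputemp": 55}
print(validate(sample, SCHEMA))  # []
```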
Our data is now in HDFS, with an Apache Hive table for which Apache NiFi generated the Create Table DDL, as follows:
CREATE EXTERNAL TABLE IF NOT EXISTS movidiussense (cputemp2 DOUBLE, pitch DOUBLE, label5 STRING, starttime STRING, label2 STRING, y DOUBLE, currenttime STRING, x DOUBLE, label4 STRING, roll DOUBLE, temp DOUBLE, host STRING, diskfree STRING, memory DOUBLE, label3 STRING, label1 STRING, yaw DOUBLE, cputemp INT, humidity DOUBLE, tempf DOUBLE, z DOUBLE, pressure DOUBLE, ipaddress STRING) STORED AS ORC LOCATION '/movidiussense'
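NiFi's ORC conversion step produces this DDL for you; the hypothetical sketch below just illustrates the underlying Avro-to-Hive type mapping on a trimmed two-field schema, and is not the code NiFi actually runs:

```python
import json

# Avro primitive -> Hive column type (the subset used by this schema)
AVRO_TO_HIVE = {"double": "DOUBLE", "string": "STRING", "int": "INT"}

def hive_ddl(schema, location):
    """Build a CREATE EXTERNAL TABLE statement from a flat Avro record schema."""
    cols = ", ".join(
        "%s %s" % (f["name"], AVRO_TO_HIVE[f["type"]])
        for f in schema["fields"])
    return ("CREATE EXTERNAL TABLE IF NOT EXISTS %s (%s) "
            "STORED AS ORC LOCATION '%s'" % (schema["name"], cols, location))

schema = json.loads('{"type": "record", "name": "movidiussense",'
                    ' "fields": [{"name": "cputemp2", "type": "double"},'
                    ' {"name": "cputemp", "type": "int"}]}')
print(hive_ddl(schema, "/movidiussense"))
```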
Resources
https://developer.movidius.com/
https://github.com/tspannhw/rpi-minifi-movidius-sensehat
http://chris.gg/2012/07/using-a-ps3-eyetoy-with-the-raspberry-pi/
https://developer.movidius.com/start
https://github.com/movidius/ncsdk/blob/master/README.md
12-27-2017
10:53 PM
2 Kudos
There is an open source model server for Apache MXNet that I recently tried. It's very easy to install and use. You must have Apache MXNet and Python installed.
Installation and Setup
Installing the Model Server is simple. I use pip3 to make sure I install to Python 3, as I also have Python 2.7 installed on my laptop.
pip3 install mxnet --pre --user
pip3 install mxnet-model-server
pip3 install imdbpy
pip3 install dataset
http://127.0.0.1:9999/api-description
{
"description": {
"host": "127.0.0.1:9999",
"info": {
"title": "Model Serving Apis",
"version": "1.0.0"
},
"paths": {
"/api-description": {
"get": {
"operationId": "api-description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"description": {
"type": "string"
}
},
"type": "object"
}
}
}
}
},
"/ping": {
"get": {
"operationId": "ping",
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"health": {
"type": "string"
}
},
"type": "object"
}
}
}
}
},
"/squeezenet/predict": {
"post": {
"consumes": [
"multipart/form-data"
],
"operationId": "squeezenet_predict",
"parameters": [
{
"description": "data should be image which will be resized to: [3, 224, 224]",
"in": "formData",
"name": "data",
"required": "true",
"type": "file"
}
],
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"prediction": {
"type": "string"
}
},
"type": "object"
}
}
}
}
}
},
"schemes": [
"http"
],
"swagger": "2.0"
}
}
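The /api-description output is a Swagger 2.0 spec, so you can enumerate the available operations programmatically. A small sketch over a trimmed copy of the response above:

```python
import json

# Trimmed version of the /api-description response shown above
API_DESC = json.loads("""
{"description": {"host": "127.0.0.1:9999",
 "paths": {"/ping": {"get": {"operationId": "ping"}},
           "/squeezenet/predict": {"post": {"operationId": "squeezenet_predict"}}},
 "swagger": "2.0"}}
""")

def endpoints(desc):
    """Yield (method, path, operationId) for every operation in the spec."""
    for path, ops in sorted(desc["description"]["paths"].items()):
        for method, op in ops.items():
            yield method.upper(), path, op["operationId"]

for method, path, op_id in endpoints(API_DESC):
    print(method, path, op_id)
```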
http://127.0.0.1:9999/ping
{
"health": "healthy!"
}
Because each server can specify a port, you can have many running at once. I am running two: one for SSD and one for SqueezeNet. In the MXNet Model Server GitHub repository you will find a Model Zoo containing many image-processing models and examples.
mxnet-model-server --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model --service mms/model_service/mxnet_vision_service.py --port 9999
/usr/local/lib/python3.6/site-packages/mms/service_manager.py:14: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[INFO 2017-12-27 08:50:23,195 PID:50443 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:__init__:87] Initialized model serving.
Downloading squeezenet_v1.1.model from https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model.
[08:50:26] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[08:50:26] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
[INFO 2017-12-27 08:50:26,701 PID:50443 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: squeezenet_predict to Flask
[INFO 2017-12-27 08:50:26,701 PID:50443 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: ping to Flask
[INFO 2017-12-27 08:50:26,702 PID:50443 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: api-description to Flask
[INFO 2017-12-27 08:50:26,703 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric errors for last 30 seconds is 0.000000
[INFO 2017-12-27 08:50:26,703 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric requests for last 30 seconds is 0.000000
[INFO 2017-12-27 08:50:26,703 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric cpu for last 30 seconds is 0.335000
[INFO 2017-12-27 08:50:26,704 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric memory for last 30 seconds is 0.005696
[INFO 2017-12-27 08:50:26,704 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric disk for last 30 seconds is 0.656000
[INFO 2017-12-27 08:50:26,704 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric overall_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 08:50:26,705 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric inference_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 08:50:26,705 PID:50443 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric preprocess_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 08:50:26,720 PID:50443 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:101] Service started successfully.
[INFO 2017-12-27 08:50:26,720 PID:50443 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:102] Service description endpoint: 127.0.0.1:9999/api-description
[INFO 2017-12-27 08:50:26,720 PID:50443 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:103] Service health endpoint: 127.0.0.1:9999/ping
[INFO 2017-12-27 08:50:26,730 PID:50443 /usr/local/lib/python3.6/site-packages/werkzeug/_internal.py:_log:87] * Running on http://127.0.0.1:9999/ (Press CTRL+C to quit)
For the SSD example, I fork the AWS Github (https://github.com/awslabs/mxnet-model-server.git) and change directory to the example/ssd directory and follow the setup to prepare the model.
mxnet-model-server --models SSD=resnet50_ssd_model.model --service ssd_service.py --port 9998
/usr/local/lib/python3.6/site-packages/mms/service_manager.py:14: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[INFO 2017-12-27 09:02:22,800 PID:55345 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:__init__:87] Initialized model serving.
[INFO 2017-12-27 09:02:24,510 PID:55345 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: SSD_predict to Flask
[INFO 2017-12-27 09:02:24,510 PID:55345 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: ping to Flask
[INFO 2017-12-27 09:02:24,511 PID:55345 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:add_endpoint:182] Adding endpoint: api-description to Flask
[INFO 2017-12-27 09:02:24,511 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric errors for last 30 seconds is 0.000000
[INFO 2017-12-27 09:02:24,512 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric requests for last 30 seconds is 0.000000
[INFO 2017-12-27 09:02:24,512 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric cpu for last 30 seconds is 0.290000
[INFO 2017-12-27 09:02:24,513 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric memory for last 30 seconds is 0.014777
[INFO 2017-12-27 09:02:24,513 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric disk for last 30 seconds is 0.656000
[INFO 2017-12-27 09:02:24,513 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric overall_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 09:02:24,514 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric inference_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 09:02:24,514 PID:55345 /usr/local/lib/python3.6/site-packages/mms/metric.py:start_recording:118] Metric preprocess_latency for last 30 seconds is 0.000000
[INFO 2017-12-27 09:02:24,514 PID:55345 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:101] Service started successfully.
[INFO 2017-12-27 09:02:24,514 PID:55345 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:102] Service description endpoint: 127.0.0.1:9998/api-description
[INFO 2017-12-27 09:02:24,514 PID:55345 /usr/local/lib/python3.6/site-packages/mms/mxnet_model_server.py:start_model_serving:103] Service health endpoint: 127.0.0.1:9998/ping
[INFO 2017-12-27 09:02:24,524 PID:55345 /usr/local/lib/python3.6/site-packages/werkzeug/_internal.py:_log:87] * Running on http://127.0.0.1:9998/ (Press CTRL+C to quit)
http://127.0.0.1:9998/api-description
{
"description": {
"host": "127.0.0.1:9998",
"info": {
"title": "Model Serving Apis",
"version": "1.0.0"
},
"paths": {
"/SSD/predict": {
"post": {
"consumes": [
"multipart/form-data"
],
"operationId": "SSD_predict",
"parameters": [
{
"description": "data should be image which will be resized to: [3, 512, 512]",
"in": "formData",
"name": "data",
"required": "true",
"type": "file"
}
],
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"prediction": {
"type": "string"
}
},
"type": "object"
}
}
}
}
},
"/api-description": {
"get": {
"operationId": "api-description",
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"description": {
"type": "string"
}
},
"type": "object"
}
}
}
}
},
"/ping": {
"get": {
"operationId": "ping",
"produces": [
"application/json"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"properties": {
"health": {
"type": "string"
}
},
"type": "object"
}
}
}
}
}
},
"schemes": [
"http"
],
"swagger": "2.0"
}
}
http://127.0.0.1:9998/ping
{
"health": "healthy!"
}
With this call to SqueezeNet we get back some candidate classes and their probabilities (0.50 -> 50%).
curl -X POST http://127.0.0.1:9999/squeezenet/predict -F "data=@TimSpann2.jpg"
{
"prediction": [
[
{
"class": "n02877765 bottlecap",
"probability": 0.5077430009841919
},
{
"class": "n03196217 digital clock",
"probability": 0.35705313086509705
},
{
"class": "n03706229 magnetic compass",
"probability": 0.02305465377867222
},
{
"class": "n02708093 analog clock",
"probability": 0.018635360524058342
},
{
"class": "n04328186 stopwatch, stop watch",
"probability": 0.015588048845529556
}
]
]
}
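Downstream of the REST call you usually only want the best guess. A sketch that pulls the top class out of the response above (stripping the WordNet synset id from the label is my own convenience, not part of the API):

```python
import json

# The SqueezeNet /predict response shown above (abbreviated)
RESPONSE = json.loads("""
{"prediction": [[
  {"class": "n02877765 bottlecap", "probability": 0.5077430009841919},
  {"class": "n03196217 digital clock", "probability": 0.35705313086509705},
  {"class": "n03706229 magnetic compass", "probability": 0.02305465377867222}
]]}
""")

def top_prediction(response):
    """Return (label, probability) for the highest-scoring class."""
    guesses = response["prediction"][0]
    best = max(guesses, key=lambda g: g["probability"])
    # Strip the WordNet synset id (e.g. 'n02877765') from the label
    label = best["class"].split(" ", 1)[1]
    return label, best["probability"]

print(top_prediction(RESPONSE))  # ('bottlecap', 0.5077430009841919)
```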
With this test call to SSD, you will see it identifies a person (me) and provides coordinates of a box around me.
curl -X POST http://127.0.0.1:9998/SSD/predict -F "data=@TimSpann2.jpg"
{
"prediction": [
[
"person",
405,
139,
614,
467
],
[
"boat",
26,
0,
459,
481
]
]
}
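Each SSD prediction entry is a class label followed by four box coordinates; from the example they appear to be [xmin, ymin, xmax, ymax], which is an assumption on my part. A sketch that turns them into box sizes:

```python
# SSD /predict response from the call above
RESPONSE = {"prediction": [["person", 405, 139, 614, 467],
                           ["boat", 26, 0, 459, 481]]}

def boxes(response):
    """Turn each [label, xmin, ymin, xmax, ymax] entry into a dict with
    the box size, assuming that coordinate ordering."""
    out = []
    for label, xmin, ymin, xmax, ymax in response["prediction"]:
        out.append({"label": label,
                    "width": xmax - xmin,
                    "height": ymax - ymin})
    return out

for box in boxes(RESPONSE):
    print(box)
```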
/opt/demo/curl.sh
curl -X POST http://127.0.0.1:9998/SSD/predict -F "data=@$1"
/opt/demo/curl2.sh
curl -X POST http://127.0.0.1:9999/squeezenet/predict -F "data=@$1"
The Apache NiFi flow is easy: I call the REST URL and pass an image. This can be done with a Groovy script or by executing a curl shell script.
Logs From Run
[INFO 2017-12-27 17:41:33,447 PID:90860 /usr/local/lib/python3.6/site-packages/werkzeug/_internal.py:_log:87] 127.0.0.1 - - [27/Dec/2017 17:41:33] "POST /SSD/predict HTTP/1.1" 400 -
[INFO 2017-12-27 17:41:36,289 PID:90860 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:440] Request input: data should be image with jpeg format.
[INFO 2017-12-27 17:41:36,289 PID:90860 /usr/local/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:get_file_data:133] Getting file data from request.
[INFO 2017-12-27 17:41:37,035 PID:90860 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:475] Response is text.
[INFO 2017-12-27 17:41:37,035 PID:90860 /usr/local/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:jsonify:156] Jsonifying the response: {'prediction': [('motorbike', 270, 877, 1944, 3214), ('car', 77, 763, 2113, 3193)]}
Apache NiFi Results of the Run
One of the Images Processed
Apache NiFi Flow Template: mxnetserver.xml
Resources
https://github.com/awslabs/mxnet-model-server
https://github.com/awslabs/mxnet-model-server/blob/master/docs/README.md
https://github.com/awslabs/mxnet-model-server/blob/master/docs/server.md
https://github.com/awslabs/mxnet-model-server/blob/master/examples/ssd/README.md
http://gluon.mxnet.io/
https://github.com/awslabs/mxnet-model-server/blob/master/docs/model_zoo.md
https://mxnet.incubator.apache.org/gluon/index.html
http://mxnet.incubator.apache.org/tutorials/index.html
https://github.com/apache/incubator-mxnet/tree/master/example
https://github.com/apache/incubator-mxnet/tree/master/example#deep-learning-examples
http://mxnet.incubator.apache.org/model_zoo/index.html
https://github.com/apache/incubator-mxnet/releases/tag/1.0.0
https://github.com/gluon-api/gluon-api
http://mxnet.incubator.apache.org/tutorials/r/classifyRealImageWithPretrainedModel.html
https://www.slideshare.net/JulienSIMON5/an-introduction-to-deep-learning-with-apache-mxnet
https://mxnet.incubator.apache.org/get_started/why_mxnet.html
https://www.slideshare.net/JulienSIMON5/deep-learning-for-developers-december-2017
https://aws.amazon.com/blogs/aws/aws-contributes-to-milestone-1-0-release-and-adds-model-serving-capability-for-apache-mxnet/
12-26-2017
10:05 PM
3 Kudos
I wanted to offer alternatives to running Deep Learning and Machine Learning locally and to take advantage of some free hours of cloud time.
It seems that it is trivially easy to integrate calling IBM Watson APIs for all your microservice needs. Apache NiFi makes it super easy and fun.
Using out-of-the-box processors, we can use InvokeHTTP to call the POST and GET REST APIs of the IBM Watson Platform for Natural Language, Visual Recognition, Personality Insights, Language Translation and others. These use SSL, so you have to set up a simple StandardSSLContextService in Apache NiFi. Once you know which JVM you are running Apache NiFi with, it's trivial to grab the truststore for it.
By default the truststore password is the very secure changeit.
A Get REST CALL to NLP
A Post to a Watson REST URL
Some URLs and Calls
curl -X POST --form "images_file=@mypic.jpg" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key={api-key}&version=2016-05-20"
curl -X POST --form "images_file=@myotherpic.jpg" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/detect_faces?api_key={api-key}&version=2016-05-20"
curl --user "{username}":"{password}" "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27&text=This+is+a+test&features=sentiment,keywords"
curl --user "{username}":"{password}" "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27&text=Test&features=sentiment,keywords&keywords.sentiment=true"
curl -X POST --user "{username}":"{password}" --header "Content-Type: application/json" --data-binary @{path_to_file}tone.json "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2017-09-21"
curl -X POST --user "{username}":"{password}" --header "Content-Type: application/json" --data-binary @{path_to_file}tone.json "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2017-09-21&sentences=false"
curl -X GET --user "{username}":"{password}" "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2017-09-21
&text=Team%2C%20I%20know%20that%20times%20are%20tough%21%20Product%20sales%20have
%20been%20disappointing%20for%20the%20past%20three%20quarters.%20We%20have%20a%20
competitive%20product%2C%20but%20we%20need%20to%20do%20a%20better%20job%20of%20
selling%20it%21"
curl -X POST --user "{username}":"{password}" --header "Content-Type: application/json" --data-binary @{path_to_file}tone-chat.json "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone_chat?version=2017-09-21"
Personality Insights
curl -X POST --user {username}:{password} --header "Content-Type: text/plain;charset=utf-8" --data-binary "@{path_to_file}profile.txt" "https://gateway.watsonplatform.net/personality-insights/api/v3/profile?version=2016-10-20"
Conversation
{
"url": "https://gateway.watsonplatform.net/conversation/api",
"username": "user",
"password": "pass"
}
Discovery
{
"url": "https://gateway.watsonplatform.net/discovery/api",
"username": "u",
"password": "p"
}
curl -X POST -u "{username}":"{password}" -H "Content-Type: application/json" -d '{ "name":"my-first-environment", "description":"exploring environments"}' "api/v1/environments?version=2017-09-01"
Language Translator
{
"url": "https://gateway.watsonplatform.net/language-translator/api",
"username": "u",
"password": "p"
}
curl -X POST --user "{username}":"{password}" --header "Content-Type: application/json" --header "Accept: application/json" --data '{"text":"Hello, world!","source":"en","target":"es"}' "https://gateway.watsonplatform.net/language-translator/api/v2/translate"
Natural Language
{
"url": "https://gateway.watsonplatform.net/natural-language-understanding/api",
"username": "u",
"password": "p"
}
curl --user "{username}":"{password}" "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27&text=SomeText&features=sentiment,keywords"
If you can call it with curl or wget, you can call it with Apache NiFi.
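All of these curl calls use HTTP Basic authentication (the --user flag); in InvokeHTTP you can supply the same credentials, or build the Authorization header yourself. A sketch of what that header looks like, with placeholder credentials:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value that curl's --user flag sends."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

# Placeholder credentials -- substitute the ones from your Watson service
print(basic_auth_header("{username}", "{password}"))
```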
An Overview of Incorporating IBM Watson in Apache NiFi Flows
With Apache NiFi we can substitute any message you want NLP to analyze; see ${msg}, which is Expression Language.
As you can see, you just plug your key in there!
IBM returns some nice clean JSON.
We get our JSON probabilities and can use them as we see fit. As a next step, I would use a schema to convert the data to Apache Avro, then to Apache ORC, and store it in Apache Hive LLAP for queries and analytics.
Resources:
https://console.bluemix.net/docs/services/visual-recognition/getting-started.html#getting-started-tutorial
https://console.bluemix.net/services/natural-language-understanding/
https://console.bluemix.net/services/personality_insights/
https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/#introduction
pip install --upgrade watson-developer-cloud
https://github.com/watson-developer-cloud/
https://www.ibm.com/watson/webinars/
https://github.com/watson-developer-cloud/retrieve-and-rank-java
https://github.com/watson-developer-cloud/java-sdk
https://github.com/watson-developer-cloud/java-sdk/tree/develop/speech-to-text
https://github.com/watson-developer-cloud/cognitive-client-java
brew tap watson-developer-cloud/tools
https://github.com/watson-developer-cloud/java-sdk/tree/master/examples/src/main/java/com/ibm/watson/developer_cloud
https://console.bluemix.net/dashboard/apps
https://console.bluemix.net/catalog/services/tone-analyzer/
06-23-2018
01:14 PM
You cannot connect from several nodes to an MQTT broker with the same client id. On the other hand, this makes no sense anyway, since every NiFi node would get the same messages to handle (which is normally not a use case). To solve this problem you should set "run on primary node only" for the ConsumeMQTT processor. But unfortunately this does not seem to work 😞 All nodes in the NiFi cluster still make a connection. I think this is an issue in the development of the processor. It seems that ConsumeMQTT makes a connection to the MQTT broker on all nodes although it should run only on the primary node.