Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1843 | 04-03-2024 06:39 AM |
|  | 2874 | 01-12-2024 08:19 AM |
|  | 1584 | 12-07-2023 01:49 PM |
|  | 2348 | 08-02-2023 07:30 AM |
|  | 3241 | 03-29-2023 01:22 PM |
09-10-2018
04:23 PM
3 Kudos
IoT Edge Processing with Apache NiFi and MiniFi and Multiple Deep Learning Libraries Series

For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140
See Part 1: https://community.hortonworks.com/articles/215079/iot-edge-processing-with-deep-learning-on-hdf-32-a.html
See Part 2: https://community.hortonworks.com/articles/215258/iot-edge-processing-with-deep-learning-on-hdf-32-a-1.html
See Part 3: https://community.hortonworks.com/articles/215271/iot-edge-processing-with-deep-learning-on-hdf-32-a-2.html

You will notice a bit of a travel theme in this article; some of the images and work were done while on various holidays in August and September.

Deep Learning

We are running TensorFlow 1.10, Apache MXNet 1.3, NCSDK 2.05 and the Neural Compute Application Zoo (NC App Zoo).

Device Type 1: Plain Raspberry Pi (Found some old Kodak slides...)

The main things to do are upgrade to Python 3.6, upgrade the Raspberry Pi to Stretch, upgrade libraries and do a few reboots. Install OpenCV (or upgrade it) and install Apache MXNet. You want to make sure you are on the latest version of Stretch and everything is cleaned up. Example:

```bash
sudo apt-get upgrade
sudo apt-get install build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev
sudo apt autoremove
pip3.6 install --upgrade pip
pip3.6 install mxnet
git clone https://github.com/apache/incubator-mxnet.git --recursive
```

Device Type 2: Raspberry Pi Enhanced with Movidius Neural Compute Stick

I have updated the code to work with the new Movidius NCSDK 2.05. See: https://github.com/tspannhw/StrataNYC2018/blob/master/all2.py I also updated some variable formatting and added some additional values. Evolve that schema! So you can see some additional data:

```json
{"uuid": "mxnet_uuid_json_20180911021437.json", "label3": "n04081281 restaurant, eating house, eating place, eatery", "label1": "n03179701 desk", "roll": 4.0, "y": 0.0, "value5": "3.5%", "ipaddress": "192.168.1.156", "top5": "n03637318 lampshade, lamp shade", "label5": "n02788148 bannister, banister, balustrade, balusters, handrail", "host": "sensehatmovidius", "cputemp": 53, "top3pct": "6.5%", "diskfree": "5289.1 MB", "pressure": 1018.6, "cafferuntime": "111.685844ms", "label4": "n04009552 projector", "top4": "n03742115 medicine chest, medicine cabinet", "humidity": 42.5, "cputemp2": 52.62, "value2": "6.1%", "value3": "6.0%", "top2pct": "6.9%", "top1": "n02788148 bannister, banister, balustrade, balusters, handrail", "top4pct": "6.4%", "currenttime": "2018-09-11 02:14:44", "label2": "n03924679 photocopier", "top1pct": "7.3%", "top3": "n04286575 spotlight, spot", "starttime": "2018-09-11 02:14:33", "top5pct": "3.9%", "memory": 35.2, "value4": "5.0%", "top2": "n03250847 drumstick", "runtime": "11", "z": 1.0, "pitch": 360.0, "imagefilename": "/opt/demo/images/2018-09-10_2214.jpg", "tempf": 75.25, "temp": 35.14, "yaw": 86.0, "value1": "8.5%", "x": 0.0}
```

Apache NiFi and MiniFi

Process, Proxy, Access, Filter and Transform Data Anywhere, Anytime, Any Platform.

Apache NiFi and MiniFi at work in Moab, Utah.
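The top1 through top5 fields in the sample JSON above come from MXNet image classification. For reference, here is a minimal sketch of producing that kind of top-5 output with a pretrained Gluon model; this is a simplified stand-in for the all2.py script, and the model choice, image path and the omitted ImageNet normalization are all assumptions:

```python
# Minimal MXNet classification sketch (mxnet 1.x; path and model are illustrative)
import mxnet as mx
from mxnet.gluon.model_zoo import vision

net = vision.squeezenet1_1(pretrained=True)           # downloads weights on first use
img = mx.image.imread('/opt/demo/images/sample.jpg')  # HWC, uint8
img = mx.image.imresize(img, 224, 224)                # network input size
x = img.astype('float32').transpose((2, 0, 1)).expand_dims(axis=0) / 255.0
p = mx.nd.softmax(net(x))[0].asnumpy()
for rank, idx in enumerate(p.argsort()[::-1][:5], start=1):
    print('top%d: class index %d at %.1f%%' % (rank, idx, p[idx] * 100.0))
```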
Resources:

https://github.com/tspannhw/StrataNYC2018
https://www.geomesa.org/documentation/current/tutorials/geomesa-quickstart-nifi.html
https://github.com/cinci/rpi-sense-hat-java
https://movidius.github.io/ncsdk/install.html
https://movidius.github.io/ncsdk/tf_modelzoo.html
https://github.com/movidius/ncappzoo/
https://github.com/movidius/ncappzoo/blob/ncsdk2/tensorflow/facenet/README.md
https://github.com/movidius/ncappzoo/blob/ncsdk2/tensorflow/inception_v4/README.md
https://medium.com/tensorflow/tensorflow-1-9-officially-supports-the-raspberry-pi-b91669b0aa0
https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.10.0/tensorflow-1.10.0-cp35-none-linux_armv7l.whl
https://github.com/movidius/ncappzoo/blob/ncsdk2/apps/image-classifier/README.md
08-31-2018
05:31 PM
4 Kudos
IoT Edge Processing with Apache NiFi and MiniFi and Multiple Deep Learning Libraries Series

For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140
See Part 1: https://community.hortonworks.com/articles/215079/iot-edge-processing-with-deep-learning-on-hdf-32-a.html
See Part 2: https://community.hortonworks.com/articles/215258/iot-edge-processing-with-deep-learning-on-hdf-32-a-1.html

Hive - SQL - IoT Data Storage

In this section, we will focus on converting JSON to Avro to Apache ORC and on storage options in Apache Hive 3. I am using two styles of storage for one of the tables, rainbow: I store ORC files with an external table, and I also use the Streaming API to store into an ACID table.

NiFi - SQL - On Streams - Calcite

```sql
SELECT *
FROM FLOWFILE
WHERE CAST(memory AS FLOAT) > 0

SELECT *
FROM FLOWFILE
WHERE CAST(tempf AS FLOAT) > 65
```

I check the flows as they are ingested in real time and filter based on conditions such as memory or temperature. This makes for powerful yet simple event processing, which is very handy when you want to filter out the standard conditions where no anomaly has occurred.

IoT Data Storage Options

For time series data, we are blessed with many options in HDP 3.x. The simplest choice, which I use first here, is a plain Apache Hive 3.x table. This is where we face some tough decisions about which engine to use. Hive has the best, most complete SQL and lots of interfaces, so it is my default choice for where and how to store my data. If the feed is more than a few thousand rows a second and has a timestamp, then we have to think harder about the architecture. Apache Druid has a lot of amazing abilities with time series data like what's coming out of these IoT devices. Since we can join Hive and Druid data and put Hive tables on top of Druid, we really should consider using the Druid storage handler.

https://cwiki.apache.org/confluence/display/Hive/Druid+Integration
https://cwiki.apache.org/confluence/display/Hive/StorageHandlers
https://hortonworks.com/blog/apache-hive-druid-part-1-3/
https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/using-druid/content/druid_anatomy_of_hive_to_druid.html

We could create a Hive table backed by Druid like this:

```sql
CREATE TABLE rainbow_druid
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "MONTH",
  "druid.query.granularity" = "DAY")
AS SELECT ts AS `__time`, cast(tempf as string) s_tempf, ipaddress,
  cast(altitude as string) s_altitude, host, diskfree
FROM RAINBOW;
```

For second and sub-second data, we need to consider either Druid or HBase. The nice thing is that these NoSQL options also have SQL interfaces to use. It comes down to how you are going to query the data and which one you prefer. HBase + Phoenix is performant and has been used in production for years, and with HBase 2.x there are really impressive updates that make it a good option. For richer analytics, including some really cool analytics with Apache Superset, it's hard not to recommend Druid. Apache Druid has really improved recently and is well integrated with Hive 3's rich querying.

Example of Our Geo Data

```json
{"speed": "0.145", "diskfree": "4643.2 MB", "altitude": "6.2", "ts": "2018-08-30 17:47:03", "cputemp": 52.0, "latitude": "38.9789405", "track": "0.0", "memory": 26.5, "host": "rainbow", "uniqueid": "gps_uuid_20180830174705", "ipaddress": "172.20.10.8", "epd": "nan", "utc": "2018-08-30T17:47:05.000Z", "epx": "21.91", "epy": "31.536", "epv": "73.37", "ept": "0.005", "eps": "63.07", "longitude": "-74.824475167", "mode": "3", "time": "1535651225.0", "climb": "0.0", "epc": "nan"}
```

Hive 3 Tables

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS rainbow (tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE) STORED AS ORC LOCATION '/rainbow';

create table rainbowacid(tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE) STORED AS ORC
TBLPROPERTIES ('transactional'='true');

CREATE EXTERNAL TABLE IF NOT EXISTS gps (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, `time` STRING, climb STRING, epc STRING) STORED AS ORC LOCATION '/gps';

CREATE TABLE IF NOT EXISTS gpsacid (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, `time` STRING, climb STRING, epc STRING) STORED AS ORC
TBLPROPERTIES ('transactional'='true');

CREATE EXTERNAL TABLE IF NOT EXISTS movidiussense (label5 STRING, runtime STRING, label1 STRING, diskfree STRING, top1 STRING, starttime STRING, label2 STRING, label3 STRING, top3pct STRING, host STRING, top5pct STRING, humidity DOUBLE, currenttime STRING, roll DOUBLE, uuid STRING, label4 STRING, tempf DOUBLE, y DOUBLE, top4pct STRING, cputemp2 DOUBLE, top5 STRING, top2pct STRING, ipaddress STRING, cputemp INT, pitch DOUBLE, x DOUBLE, z DOUBLE, yaw DOUBLE, pressure DOUBLE, top3 STRING, temp DOUBLE, memory DOUBLE, top4 STRING, imagefilename STRING, top1pct STRING, top2 STRING) STORED AS ORC LOCATION '/movidiussense';

CREATE EXTERNAL TABLE IF NOT EXISTS minitensorflow2 (image STRING, ts STRING, host STRING, score STRING, human_string STRING, node_id INT) STORED AS ORC LOCATION '/minifitensorflow2';
```

Resources:

https://github.com/tspannhw/StrataNYC2018
https://www.geomesa.org/documentation/current/tutorials/geomesa-quickstart-nifi.html
https://github.com/cinci/rpi-sense-hat-java
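Once the ORC files land and the tables exist, a quick way to sanity-check the data from Python is a HiveServer2 query. This sketch assumes the pyhive package and illustrative connection details, and reuses the tempf filter from the Calcite example above:

```python
# Query the rainbow table via HiveServer2 (connection details are illustrative)
from pyhive import hive

conn = hive.Connection(host='hdp-master.example.com', port=10000,
                       username='tspann', database='default')
cur = conn.cursor()
# The same kind of filter the Calcite QueryRecord example applies on the stream.
cur.execute("SELECT ts, tempf, memory FROM rainbow WHERE tempf > 65 ORDER BY ts DESC LIMIT 10")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```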
08-31-2018
02:34 PM
1 Kudo
IoT Edge Processing with Deep Learning on HDF 3.2 and HDP 3.0 - Part 2

For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140
See Pre-Work: https://community.hortonworks.com/articles/203638/ingesting-multiple-iot-devices-with-apache-nifi-17.html
See Part 1: https://community.hortonworks.com/articles/215079/iot-edge-processing-with-deep-learning-on-hdf-32-a.html

Step By Step Processing

Step 1: Install Apache NiFi (One or More Nodes or Clusters)
Choose: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf/content/install-ambari.html
or
docker pull hortonworks/nifi
Apache NiFi Configuration for IoT
https://community.hortonworks.com/articles/67756/ingesting-log-data-using-minifi-nifi.html
You will need to set nifi.remote.input.host and nifi.remote.input.socket.port in conf/nifi.properties or in the Ambari settings (for example, nifi.remote.input.host=nifi-gateway.example.com and nifi.remote.input.socket.port=10443; these are illustrative values).

Step 2: Install Apache NiFi - MiniFi on Your Device(s)

Download MiniFi (https://nifi.apache.org/minifi/download.html)
You can choose Java or C++. For your first usage, I recommend the Java edition unless your device is too small.
You can also install on a RHEL or Debian Linux machine or OSX.
Download MiniFi Toolkit (https://nifi.apache.org/minifi/minifi-toolkit.html)
Resources:
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Versioncpp-0.5.0
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Version0.5.0
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_release-notes/content/ch_hdf_relnotes.html#centos7
https://community.hortonworks.com/articles/108947/minifi-for-ble-bluetooth-low-energy-beacon-data-in.html
https://community.hortonworks.com/content/kbentry/107379/minifi-for-image-capture-and-ingestion-from-raspbe.html

Step 3: Install Apache MXNet (On MiniFi Devices and NiFi Nodes - optional)

https://mxnet.incubator.apache.org/install/index.html?platform=Devices&language=Python&processor=CPU
Install build tools and build from scratch
Walk through install: https://community.hortonworks.com/articles/176932/apache-deep-learning-101-using-apache-mxnet-on-the.html
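After the install, a quick sanity check from Python confirms MXNet is usable on the device (the version printed will depend on your build):

```python
# Verify the Apache MXNet install
import mxnet as mx

print(mx.__version__)              # confirm the build you expect
a = mx.nd.array([1.0, 2.0, 3.0])   # trivial NDArray computation
print((a * 2).asnumpy())           # expect [2. 4. 6.]
```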
Resources and Source
https://github.com/tspannhw/StrataNYC2018
rainbow-processing.xml
rainbow-gateway-processing.xml
display-images-server.xml
rainbowminifi.xml
https://community.hortonworks.com/articles/203638/ingesting-multiple-iot-devices-with-apache-nifi-17.html
08-29-2018
03:43 PM
3 Kudos
IoT Edge Processing with Apache NiFi and MiniFi and Multiple Deep Learning Libraries Series

For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140

In preparation for my talk on utilizing edge devices for deep learning, IoT sensor reading and big data processing, I have updated my environment to the latest and greatest tools available. With the upgrade of HDF to 3.2, I can now use Apache NiFi 1.7 and MiniFi 0.5 for IoT data ingestion, simple event processing, conversion, data processing, data flow and storage. The architecture diagram shows the basic flow we are utilizing.

IoT Step by Step
1. Raspberry Pi with the latest patches, Python, GPS software, USB camera, sensor libraries, Java 8, MiniFi 0.5, TensorFlow and Apache MXNet installed.
2. The MiniFi flow pushes JSON and JPEGs over HTTP(S) / Site-to-Site to an Apache NiFi gateway server.
3. Option: NiFi can push to a central NiFi cloud cluster and/or a Kafka cluster, both running on HDF 3.2 environments.
4. The Apache NiFi cluster pushes to Hive, HDFS, a Dockerized API running in HDP 3.0 and third-party APIs.
5. NiFi and Kafka integrate with Schema Registry for our tabular data, including the rainbow and gps JSON data.

SQL Tables in Hive

I stream my data into Apache ORC files stored in HDP 3.0 HDFS directories and build external tables on them.

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS rainbow (tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE)
STORED AS ORC LOCATION '/rainbow';

CREATE EXTERNAL TABLE IF NOT EXISTS gps (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, `time` STRING, climb STRING, epc STRING)
STORED AS ORC LOCATION '/gps';
```

For my processing needs I also have Hive 3 ACID tables for general table usage and updates.

```sql
create table rainbowacid(tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE) STORED AS ORC
TBLPROPERTIES ('transactional'='true');

CREATE TABLE IF NOT EXISTS gpsacid (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, `time` STRING, climb STRING, epc STRING) STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```
Then I load my initial data:

```sql
insert into rainbowacid
select * from rainbow;

insert into gpsacid
select * from gps;
```

Hive 3.x Updates

```sql
%jdbc(hive)
CREATE TABLE Persons_default (
ID Int NOT NULL,
Name String NOT NULL,
Age Int,
Creator String DEFAULT CURRENT_USER(),
CreateDate Date DEFAULT CURRENT_DATE()
)
```

One of the cool new features in Hive is that columns can now have defaults which, as you can see, are helpful for standard values you might want, such as the current user and the current date. This gives us even more relational-style features in Hive. Another very interesting feature is materialized views, which help you keep subqueries clean and fast. Here is a cool example:

```sql
CREATE MATERIALIZED VIEW mv1
AS
SELECT dest, origin, count(*)
FROM flights_hdfs
GROUP BY dest, origin;
```

References:

https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/hive-overview/content/hive_whats_new_in_this_release_hive.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/using-hiveql/content/hive_3_internals.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/hive-overview/content/hive-apache-hive-3-architecturural-overview.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/materialized-view/content/hive_create_materialized_view.html
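To see whether the optimizer actually uses mv1, you can EXPLAIN a matching aggregate. A hedged sketch via PyHive (the pyhive package, connection details and the flights_hdfs data are assumptions; materialized view rewriting must be enabled for the view to appear in the plan):

```python
# Check whether Hive rewrites a matching aggregate to use mv1
from pyhive import hive

conn = hive.Connection(host='hdp-master.example.com', port=10000, username='tspann')
cur = conn.cursor()
# A query that matches the materialized view definition above.
cur.execute("EXPLAIN SELECT dest, origin, count(*) FROM flights_hdfs GROUP BY dest, origin")
plan = '\n'.join(row[0] for row in cur.fetchall())
print('mv1 used in plan:', 'mv1' in plan)
```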
07-26-2018
11:56 AM
If you add -ot json to the end, your output will be in JSON format, which you can parse with your favorite tool; again, I was thinking to call it from NiFi and process the output. Perhaps this is Nifinception all over again. By default it's nice, easy-to-read text, which is perfect if a person is watching it. It's really cool to be able to start and stop process groups remotely via a command-line command.
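As a sketch of that idea, here is how you might invoke the CLI with -ot json from Python and parse the result; the exact command line below is hypothetical, so substitute whatever command you are actually running:

```python
# Call a CLI that emits JSON (with -ot json) and work with the parsed result
import json
import subprocess

# Hypothetical command line; replace with the actual tool and arguments you use.
result = subprocess.run(
    ['./bin/cli.sh', 'nifi', 'pg-list', '-ot', 'json'],
    capture_output=True, text=True, check=True)
data = json.loads(result.stdout)
print(json.dumps(data, indent=2)[:500])  # peek at the parsed structure
```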
07-19-2018
01:49 PM
3 Kudos
Topic: IoT Edge Processing with Apache NiFi and MiniFi and Multiple Deep Learning Libraries

Part 1: Multiple Devices with Data

Keywords: Deep Learning On The Edge, GPS Ingestion, Sense-Hat and Rainbow Hat Sensor Ingest, WebCam Image Ingest

In preparation for my talk at Strata in NYC, I am updating my IoT demos for more devices, more data types and more actions. I have three streams coming from each device, including web camera images.

When we are sending data from a MiniFi agent, we need to define a port on an Apache NiFi server/cluster to receive it. So I design my MiniFi flow in the Apache NiFi UI (pretty soon there will be a special designer for this). You then highlight everything there and hit Create Template. You can then export it and convert it to config.yml. Again, this process will be automated and connected with the NiFi Registry very shortly to require fewer clicks. This is an example. When you connect to it in the flow you design in the Apache NiFi UI, you will connect to this port on the Remote Process Group. If you are manually editing one (okay, never do this, but sometimes I have to), you can copy that ID from the Port Details and paste it into the file. Once MiniFi has its config.yml and it's started, we will start getting messages to that port. You can see I have two inputs, one for Movidius and one for Rainbow. I could just have one and route to what I want; it's up to you how you want to segment these flows.

Welcome to Apache NiFi Registry v0.2.0; this one works just as well. Very stable, but with some new magic: you can now connect to Git and GitHub!

We have structured JSON, so let's infer a schema, clean it up and store it in the Hortonworks Schema Registry. That will make it versioned and REST enabled. I add one for each of the two JSON file types I am sending from the rainbow device. You can see the schemas in full at the bottom of the article.

The data is received from MiniFi on my local NiFi edge server for simple event processing, filtering and analysis. I route based on the two types of files, apply their schema, do a simple filter via SQL and send the converted Avro-formatted file to my cloud-hosted cluster. Once I get the data, I send it from my edge server to my cloud HDF 3.2 cluster. For images, I send them to my existing image storage processor group. For my other two types of files, I convert them to Apache ORC and store them in HDFS as Apache Hive tables.

Server Dashboard
Rainbow Processing
Routing is Easy
On High Humidity, Send a Slack Message (Query on humidity value)

We can dive into any flowfile as it travels through the system and examine its data and metadata. Now that my data is saved in HDFS with Hive tables on top, I can use the latest version of Apache Zeppelin to analyze the data. I added some maps to Zeppelin via Helium, which is now available in HDP 3.0. I found a bunch of new chart types; this one could be insightful. So with the latest NiFi 1.7.1 and HDP 3.0 I can do a lot of interesting things. Next up, let's run some Dockerized TensorFlow applications in my HDP 3.0 cluster.

Strata Talk: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140

Python Scripts

https://github.com/tspannhw/StrataNYC2018/tree/master

Schemas

rainbow

```json
{
"type": "record",
"name": "rainbow",
"fields": [
{
"name": "tempf",
"type": "double",
"doc": "Type inferred from '84.15'"
},
{
"name": "cputemp",
"type": "double",
"doc": "Type inferred from '53.0'"
},
{
"name": "pressure",
"type": "double",
"doc": "Type inferred from '101028.56'"
},
{
"name": "host",
"type": "string",
"doc": "Type inferred from '\"rainbow\"'"
},
{
"name": "uniqueid",
"type": "string",
"doc": "Type inferred from '\"rainbow_uuid_20180718234222\"'"
},
{
"name": "ipaddress",
"type": "string",
"doc": "Type inferred from '\"192.168.1.165\"'"
},
{
"name": "temp",
"type": "double",
"doc": "Type inferred from '38.58'"
},
{
"name": "diskfree",
"type": "string",
"doc": "Type inferred from '\"4831.2 MB\"'"
},
{
"name": "altitude",
"type": "double",
"doc": "Type inferred from '80.65'"
},
{
"name": "ts",
"type": "string",
"doc": "Type inferred from '\"2018-07-18 23:42:22\"'"
},
{
"name": "tempf2",
"type": "double",
"doc": "Type inferred from '28.97'"
},
{
"name": "memory",
"type": "double",
"doc": "Type inferred from '32.3'"
}
]
}
```

gps

```json
{
"type": "record",
"name": "gps",
"fields": [
{
"name": "speed",
"type": "string",
"doc": "Type inferred from '\"0.066\"'"
},
{
"name": "diskfree",
"type": "string",
"doc": "Type inferred from '\"4830.3 MB\"'"
},
{
"name": "altitude",
"type": "string",
"doc": "Type inferred from '\"43.0\"'"
},
{
"name": "ts",
"type": "string",
"doc": "Type inferred from '\"2018-07-18 23:46:39\"'"
},
{
"name": "cputemp",
"type": "double",
"doc": "Type inferred from '54.0'"
},
{
"name": "latitude",
"type": "string",
"doc": "Type inferred from '\"40.2681555\"'"
},
{
"name": "track",
"type": "string",
"doc": "Type inferred from '\"0.0\"'"
},
{
"name": "memory",
"type": "double",
"doc": "Type inferred from '32.3'"
},
{
"name": "host",
"type": "string",
"doc": "Type inferred from '\"rainbow\"'"
},
{
"name": "uniqueid",
"type": "string",
"doc": "Type inferred from '\"gps_uuid_20180718234640\"'"
},
{
"name": "ipaddress",
"type": "string",
"doc": "Type inferred from '\"192.168.1.165\"'"
},
{
"name": "epd",
"type": "string",
"doc": "Type inferred from '\"nan\"'"
},
{
"name": "utc",
"type": "string",
"doc": "Type inferred from '\"2018-07-18T23:46:40.000Z\"'"
},
{
"name": "epx",
"type": "string",
"doc": "Type inferred from '\"40.135\"'"
},
{
"name": "epy",
"type": "string",
"doc": "Type inferred from '\"42.783\"'"
},
{
"name": "epv",
"type": "string",
"doc": "Type inferred from '\"171.35\"'"
},
{
"name": "ept",
"type": "string",
"doc": "Type inferred from '\"0.005\"'"
},
{
"name": "eps",
"type": "string",
"doc": "Type inferred from '\"85.57\"'"
},
{
"name": "longitude",
"type": "string",
"doc": "Type inferred from '\"-74.529094\"'"
},
{
"name": "mode",
"type": "string",
"doc": "Type inferred from '\"3\"'"
},
{
"name": "time",
"type": "string",
"doc": "Type inferred from '\"2018-07-18T23:46:40.000Z\"'"
},
{
"name": "climb",
"type": "string",
"doc": "Type inferred from '\"0.0\"'"
},
{
"name": "epc",
"type": "string",
"doc": "Type inferred from '\"nan\"'"
}
]
}
```
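Those schemas live in the Hortonworks Schema Registry, but you can also validate a sample record against them locally before wiring up the flow. A small sketch using the fastavro package (an assumption; any Avro library would do), with values taken from the schema docs above:

```python
# Validate a sample rainbow record against the Avro schema above
import json
from fastavro import parse_schema
from fastavro.validation import validate

# rainbow.avsc holds the rainbow schema shown above, saved to a local file.
schema = parse_schema(json.load(open('rainbow.avsc')))
record = {"tempf": 84.15, "cputemp": 53.0, "pressure": 101028.56, "host": "rainbow",
          "uniqueid": "rainbow_uuid_20180718234222", "ipaddress": "192.168.1.165",
          "temp": 38.58, "diskfree": "4831.2 MB", "altitude": 80.65,
          "ts": "2018-07-18 23:42:22", "tempf2": 28.97, "memory": 32.3}
print(validate(record, schema))  # True when the record matches
```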
SQL

```sql
%sql
CREATE EXTERNAL TABLE IF NOT EXISTS movidiussense (label5 STRING, runtime STRING, label1 STRING, diskfree STRING, top1 STRING, starttime STRING, label2 STRING, label3 STRING, top3pct STRING, host STRING, top5pct STRING, humidity DOUBLE, currenttime STRING, roll DOUBLE, uuid STRING, label4 STRING, tempf DOUBLE, y DOUBLE, top4pct STRING, cputemp2 DOUBLE, top5 STRING, top2pct STRING, ipaddress STRING, cputemp INT, pitch DOUBLE, x DOUBLE, z DOUBLE, yaw DOUBLE, pressure DOUBLE, top3 STRING, temp DOUBLE, memory DOUBLE, top4 STRING, imagefilename STRING, top1pct STRING, top2 STRING) STORED AS ORC LOCATION '/movidiussense'

%sql
CREATE EXTERNAL TABLE IF NOT EXISTS minitensorflow2 (image STRING, ts STRING, host STRING, score STRING, human_string STRING, node_id INT) STORED AS ORC LOCATION '/minifitensorflow2'

%sql
CREATE EXTERNAL TABLE IF NOT EXISTS gps (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, `time` STRING, climb STRING, epc STRING) STORED AS ORC LOCATION '/gps'

%sql
CREATE EXTERNAL TABLE IF NOT EXISTS rainbow (tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE) STORED AS ORC LOCATION '/rainbow'
```
References
https://community.hortonworks.com/articles/176932/apache-deep-learning-101-using-apache-mxnet-on-the.html
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Versioncpp-0.5.0
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Version0.5.0
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_release-notes/content/ch_hdf_relnotes.html#centos7
https://community.hortonworks.com/articles/108947/minifi-for-ble-bluetooth-low-energy-beacon-data-in.html
https://community.hortonworks.com/content/kbentry/107379/minifi-for-image-capture-and-ingestion-from-raspbe.html

NiFi Flows

rainbow-server-processing.xml
rainbow-minifi-ingest-in-nifi.xml
07-14-2018
09:29 PM
3 Kudos
Scanning Documents into Data Lakes via Tesseract, Python, OpenCV and Apache NiFi

Source: https://github.com/tspannhw/nifi-tesseract-python

There are many awesome open source tools available to integrate with your big data streaming flows. Take a look at these articles for installation and for why the new version of Tesseract is different. I am officially recommending Python 3.6 or newer. Please don't use Python 2.7 if you don't have to. Friends don't let friends use old Python.

Tesseract 4 with Deep Learning

https://www.learnopencv.com/deep-learning-based-text-recognition-ocr-using-tesseract-and-opencv/
Github: https://github.com/spmallick/learnopencv/tree/master/OCR

For installation on a Mac laptop:

```bash
brew install tesseract --HEAD
pip3.6 install pytesseract
brew install leptonica
```

Note: if you have Tesseract already, you may need to uninstall and unlink it first with brew. If you don't use brew, you can install another way.

Summary
Execute the run.sh (https://github.com/tspannhw/nifi-tesseract-python/blob/master/pytesstest.py). It will send an MQTT message with the text and some other attributes in JSON format to the tesseract topic on the specified MQTT broker.

- Apache NiFi will read from this topic via ConsumeMQTT
- The flow checks whether it's valid JSON via RouteOnContent
- We run MergeRecord to convert a bunch of JSON into one big Apache Avro file
- Then we run ConvertAvroToORC to make a superfast Apache ORC file for storage
- Then we store it in HDFS via PutHDFS

Running The Python Script

You could have this hooked up to a scanner or point it at a directory. You could also have it scheduled to run every 30 seconds or so; I had it hooked up to a local Apache NiFi instance to schedule runs. It can also be run by the MiniFi Java agent or the MiniFi C++ agent, or on demand if you wish.

Sending MQTT Messages From Python

```python
# MQTT publish of the OCR results; json_string is the JSON document built earlier by the script
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("user", "pass")
client.connect("server.server.com", 17769, 60)
client.publish("tesseract", payload=json_string, qos=0, retain=True)
```

You will need to run: pip3 install paho-mqtt

Create the HDFS Directory

```bash
hdfs dfs -mkdir -p /tesseract
```
Create the External Hive Table (DDL Built by NiFi)

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS tesseract (`text` STRING, imgname STRING, host STRING, `end` STRING, te STRING, battery INT, systemtime STRING, cpu DOUBLE, diskusage STRING, memory DOUBLE, id STRING) STORED AS ORC
LOCATION '/tesseract';
```
This DDL is a side effect: it's built by our ORC conversion and HDFS storage commands. You could run that create script in Hive View 2, Beeline or another Apache Hive JDBC/ODBC tool. I used Apache Zeppelin since I am going to be doing queries there anyway.

Let's ingest our captured images, process them with Apache Tika and TensorFlow, and grab the metadata. We consume the MQTT records and store them in Apache Hive, then look at our records and the other fields in Apache Zeppelin via a SQL query (SELECT * FROM tesseract).

- ConsumeMQTT: give me all the records from the tesseract topic on our MQTT broker. This gives us isolation from our ingest clients, which could be 100,000 devices.
- MergeRecord: merge all the JSON files sent via MQTT into one big Avro file
- ConvertAvroToORC: converts our merged Avro file
- PutHDFS

Tesseract Example Schema in Hortonworks Schema Registry

TIP: You can generate your schema with InferAvroSchema. Do that once, copy it and paste it into Schema Registry. Then you can remove that step from your flow.

The Schema Text

```json
{
"type": "record",
"name": "tesseract",
"fields": [
{
"name": "text",
"type": "string",
"doc": "Type inferred from '\"cgi cctong aiternacrety, pou can acces the complete Pro\nLance repesiiry from eh Provenance mens: The Provenance\n‘emu inchades the Date/Time, Actontype, the Unsque Fowie\nTD and other sata. Om the ar it is smal exci i oe:\n‘ick chs icon, and you get the flowin On the right, war\n‘cots like three inthe cic soemecaed gether Liege:\n\nLineage ts visualined as « lange direcnad sqycie graph (DAG) char\nSrones the seeps 1m she Gow where modifications oF routing ‘oot\nplace on the Aewiike. Righe-iieit « step lp the Lineage s view\nSetusls aboot the fowtle at that step ar expand the ow to ander:\nScand where & was potentially domed frum. Af the very bottom\nleft of the Lineage Oi a slider wath a play button to play the pro\n“sing flow (with scaled ame} and understand where tbe owtise\nSpent the meat Game of at whch PORN get muted\n\naide the Bowtie dealin, you cam: finn deed analy of box\n\ntern\n=\"'"
},
{
"name": "imgname",
"type": "string",
"doc": "Type inferred from '\"images/tesseract_image_20180613205132_c14779b8-1546-433e-8976-ddb5bfc5f978.jpg\"'"
},
{
"name": "host",
"type": "string",
"doc": "Type inferred from '\"HW13125.local\"'"
},
{
"name": "end",
"type": "string",
"doc": "Type inferred from '\"1528923095.3205361\"'"
},
{
"name": "te",
"type": "string",
"doc": "Type inferred from '\"3.7366552352905273\"'"
},
{
"name": "battery",
"type": "int",
"doc": "Type inferred from '100'"
},
{
"name": "systemtime",
"type": "string",
"doc": "Type inferred from '\"06/13/2018 16:51:35\"'"
},
{
"name": "cpu",
"type": "double",
"doc": "Type inferred from '22.8'"
},
{
"name": "diskusage",
"type": "string",
"doc": "Type inferred from '\"113759.7 MB\"'"
},
{
"name": "memory",
"type": "double",
"doc": "Type inferred from '69.4'"
},
{
"name": "id",
"type": "string",
"doc": "Type inferred from '\"20180613205132_c14779b8-1546-433e-8976-ddb5bfc5f978\"'"
}
]
}
```

The above schema was generated by InferAvroSchema in Apache NiFi.

Image Analytics Results

```json
{
"tiffImageWidth" : "1280",
"ContentType" : "image/jpeg",
"JPEGImageWidth" : "1280 pixels",
"FileTypeDetectedFileTypeName" : "JPEG",
"tiffBitsPerSample" : "8",
"ThumbnailHeightPixels" : "0",
"label4" : "book jacket",
"YResolution" : "1 dot",
"label5" : "pill bottle",
"ImageWidth" : "1280 pixels",
"JFIFYResolution" : "1 dot",
"JPEGImageHeight" : "720 pixels",
"filecreationTime" : "2018-06-13T17:24:07-0400",
"JFIFThumbnailHeightPixels" : "0",
"DataPrecision" : "8 bits",
"XResolution" : "1 dot",
"ImageHeight" : "720 pixels",
"JPEGNumberofComponents" : "3",
"JFIFXResolution" : "1 dot",
"FileTypeExpectedFileNameExtension" : "jpg",
"JPEGDataPrecision" : "8 bits",
"FileSize" : "223716 bytes",
"probability4" : "1.74%",
"tiffImageLength" : "720",
"probability3" : "3.29%",
"probability2" : "6.13%",
"probability1" : "81.23%",
"FileName" : "apache-tika-2858986094088526803.tmp",
"filelastAccessTime" : "2018-06-13T17:24:07-0400",
"JFIFThumbnailWidthPixels" : "0",
"JPEGCompressionType" : "Baseline",
"JFIFVersion" : "1.1",
"filesize" : "223716",
"FileModifiedDate" : "Wed Jun 13 17:24:27 -04:00 2018",
"Component3" : "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"Component1" : "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
"Component2" : "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"NumberofTables" : "4 Huffman tables",
"FileTypeDetectedFileTypeLongName" : "Joint Photographic Experts Group",
"fileowner" : "tspann",
"filepermissions" : "rw-r--r--",
"JPEGComponent3" : "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"JPEGComponent2" : "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
"JPEGComponent1" : "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
"FileTypeDetectedMIMEType" : "image/jpeg",
"NumberofComponents" : "3",
"HuffmanNumberofTables" : "4 Huffman tables",
"label1" : "menu",
"XParsedBy" : "org.apache.tika.parser.DefaultParser, org.apache.tika.parser.ocr.TesseractOCRParser, org.apache.tika.parser.jpeg.JpegParser",
"label2" : "web site",
"label3" : "crossword puzzle",
"absolutepath" : "/Volumes/seagate/opensourcecomputervision/images/",
"filelastModifiedTime" : "2018-06-13T17:24:07-0400",
"ThumbnailWidthPixels" : "0",
"filegroup" : "staff",
"ResolutionUnits" : "none",
"JFIFResolutionUnits" : "none",
"CompressionType" : "Baseline",
"probability5" : "1.12%"
}
```
This is built using a combination of Apache Tika, TensorFlow and other metadata analysis processors.
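If you want to experiment with the Tika portion outside NiFi, the Python Tika bindings can pull similar metadata fields. This sketch assumes the tika pip package (which starts a local Tika server on first use) and an illustrative image path; the label and probability fields above come from the TensorFlow step, not Tika:

```python
# Extract image metadata with Apache Tika from Python
from tika import parser

parsed = parser.from_file('images/tesseract_image_sample.jpg')
metadata = parsed['metadata']
for key in ('Content-Type', 'Image Width', 'Image Height'):
    print(key, '=>', metadata.get(key))
```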
07-11-2018
02:24 PM
2 Kudos
Capture Images from PicSum.com Free Images
Process All the Images via TensorFlow Processor, SSD Predict via MMS and SqueezeNet v1.1 via MMS
Apache Zeppelin SQL Against tblsqueeze11
Example Output from Squeeze v1.1
Storing Generic Data in HDFS via Schema
Example SSD Data JSON
High Level Flow From Server
Apache NiFi Server Flows to Store
Convert to Apache ORC
Extract Attributes
Convert JSON Arrays to Other
Example Data Derived From TensorFlow Processor
Schemas in Schema Registry
Create Table in Zeppelin
Query Table in Zeppelin

Python Libraries

```bash
git clone https://github.com/awslabs/mxnet-model-server.git
pip install opencv-python -U
pip install scikit-learn -U
pip install easydict -U
pip install scikit-image -U
pip install numpy -U
pip install mxnet -U
pip3.6 install opencv-python -U
pip3.6 install scikit-learn -U
pip3.6 install easydict -U
pip3.6 install scikit-image -U
pip3.6 install numpy -U
pip3.6 install mxnet -U
```

Example Runs - Squeeze v1.1

```
mxnet-model-server --models squeezenet=squeezenet_v1.1.model --service mms/model_service/mxnet_vision_service.py --port 9999
[INFO 2018-07-10 16:50:26,840 PID:7730 /usr/local/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:jsonify:159] Jsonifying the response: {'prediction': [[{'probability': 0.3365139067173004, 'class': 'n03710193 mailbox, letter box'}, {'probability': 0.1522996574640274, 'class': 'n03764736 milk can'}, {'probability': 0.08760709315538406, 'class': 'n03000134 chainlink fence'}, {'probability': 0.08103135228157043, 'class': 'n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin'}, {'probability': 0.04956872761249542, 'class': 'n02795169 barrel, cask'}]]}
[INFO 2018-07-10 16:50:26,842 PID:7730 /usr/local/lib/python3.6/site-packages/werkzeug/_internal.py:_log:88] 127.0.0.1 - - [10/Jul/2018 16:50:26] "POST /squeezenet/predict HTTP/1.1" 200 -
[INFO 2018-07-10 16:50:46,904 PID:7730 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:467] Request input: data should be image with jpeg format.
[INFO 2018-07-10 16:50:46,960 PID:7730 /usr/local/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:get_file_data:137] Getting file data from request.
[INFO 2018-07-10 16:50:47,020 PID:7730 /usr/local/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:510] Response is text.
[INFO 2018-07-10 16:50:47,020 PID:7730 /usr/local/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:jsonify:159] Jsonifying the response: {'prediction': [[{'probability': 0.1060439869761467, 'class': 'n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch'}, {'probability': 0.06582894921302795, 'class': 'n01930112 nematode, nematode worm, roundworm'}, {'probability': 0.05008145794272423, 'class': 'n01751748 sea snake'}, {'probability': 0.03847070038318634, 'class': 'n01737021 water snake'}, {'probability': 0.03614763543009758, 'class': 'n09229709 bubble'}]]}
[INFO 2018-07-10 16:50:47,021 PID:7730 /usr/local/lib/python3.6/site-packages/werkzeug/_internal.py:_log:88] 127.0.0.1 - - [10/Jul/2018 16:50:47] "POST /squeezenet/predict HTTP/1.1" 200 -
mxnet-model-server --models SSD=resnet50_ssd_model.model --service ssd_service.py --port 9998
```

Apache MXNet Model Server Model Zoo

https://github.com/awslabs/mxnet-model-server/blob/master/docs/model_zoo.md

Connect to MMS

```bash
# /opt/demo/curl.sh
curl -X POST http://127.0.0.1:9998/SSD/predict -F "data=@$1" 2>/dev/null

# /opt/demo/curl2.sh
curl -X POST http://127.0.0.1:9999/squeezenet/predict -F "data=@$1" 2>/dev/null
```
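The same predict calls work from Python if you would rather not shell out to curl. A sketch with the requests package, using the SqueezeNet endpoint above and an illustrative image path:

```python
# POST an image to the MMS predict endpoint (same call as /opt/demo/curl2.sh)
import requests

with open('images/test.jpg', 'rb') as f:
    resp = requests.post('http://127.0.0.1:9999/squeezenet/predict',
                         files={'data': f})
print(resp.json())  # {'prediction': [[{'probability': ..., 'class': ...}, ...]]}
```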
Flows

mxnetserverlocal.xml
mxnetmodelserver.xml

Reference
https://community.hortonworks.com/articles/155435/using-the-new-mxnet-model-server.html
https://community.hortonworks.com/articles/177232/apache-deep-learning-101-processing-apache-mxnet-m.html
https://mxnet.incubator.apache.org/model_zoo/
https://medium.com/apache-mxnet/mxnet-1-2-adds-built-in-support-for-onnx-e2c7450ffc28
https://mxnet.incubator.apache.org/api/python/gluon/model_zoo.html
https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
https://github.com/onnx/models
https://github.com/awslabs/mxnet-model-server/blob/master/docs/model_zoo.md#lstm-ptb
https://github.com/awslabs/mxnet-model-server/blob/master/docs/model_zoo.md#arcface-resnet100_onnx
07-19-2018
11:12 AM
Great post. Another solution may be to make use of the Google blockchain public dataset and NiFi: http://datamater.io/2018/07/19/fetching-bitcoin-transactions-with-apache-nifi/