1973
Posts
1225
Kudos Received
124
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2472 | 04-03-2024 06:39 AM |
| | 3820 | 01-12-2024 08:19 AM |
| | 2066 | 12-07-2023 01:49 PM |
| | 3045 | 08-02-2023 07:30 AM |
| | 4185 | 03-29-2023 01:22 PM |
11-22-2017
03:39 PM
Sure, the REST call can come from anywhere you want, via GET or POST.
11-17-2017
03:28 PM
4 Kudos
For this edge use case we are using NVIDIA's TensorRT as well as Apache MXNet. From TensorRT I am using imageNet for image recognition and detectNet for object localization.
For Apache MXNet, I am using their image classifier. So we have multiple deep learning frameworks running on the same capture from an attached USB webcam. For this example I am using a Logitech HD1080, while the Jetson TX1 supports 6+ concurrent high-end cameras for those with high-end use cases. NVIDIA also offers the more powerful Jetson TX2 for more intense use cases, as it has more RAM and a better GPU.
Quick Hardware Breakdown
NVIDIA Maxwell™ GPU with 256 NVIDIA® CUDA® Cores
4 GB LPDDR4 Memory
Python Script

# 2017 load pictures and analyze
# https://github.com/tspannhw/mxnet_rpi/blob/master/analyze.py
# Note: written for Python 2 (urllib2, string.lowercase).
import time
import sys
import datetime
import subprocess
import urllib2
import os
import traceback
import math
import random
import string
import base64
import json
from time import gmtime, strftime
import mxnet as mx
import inception_predict
import numpy as np
import cv2

start = time.time()
cap = cv2.VideoCapture(0)
packet_size = 3000

def randomword(length):
    return ''.join(random.choice(string.lowercase) for i in range(length))

#while True:
# Create unique image name
uniqueid = 'mxnet_uuid_{0}_{1}'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))
ret, frame = cap.read()
imgdir = 'images/'
filename = 'tx1_image_{0}_{1}.jpg'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))
cv2.imwrite(imgdir + filename, frame)

# Run inception prediction on image
try:
    topn = inception_predict.predict_from_local_file(imgdir + filename, N=5)
except:
    errorcondition = "true"

# CPU temperature (sysfs reports millidegrees Celsius)
f = open("/sys/devices/virtual/thermal/thermal_zone1/temp", "r")
cputemp = f.readline().replace('\n', '').strip()
cputemp = str(round(float(cputemp)) / 1000)
cputempf = str(round(9.0 / 5.0 * float(cputemp) + 32))
f.close()

# GPU temperature
f = open("/sys/devices/virtual/thermal/thermal_zone2/temp", "r")
gputemp = f.readline().replace('\n', '').strip()
gputemp = str(round(float(gputemp)) / 1000)
gputempf = str(round(9.0 / 5.0 * float(gputemp) + 32))
f.close()

# Face detect (TensorRT detectNet wrapper script)
p = os.popen('/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/facedetect.sh ' + filename).read()
face = p.replace('\n', '|').strip()

# NVIDIA ImageNet classify (TensorRT imageNet wrapper script)
p2 = os.popen('/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/runclassify.sh ' + filename).read()
imagenet = p2.replace('\n', '|').strip()

# Top 5 MXNet predictions
top1 = str(topn[0][1])
top1pct = str(round(topn[0][0], 3) * 100)
top2 = str(topn[1][1])
top2pct = str(round(topn[1][0], 3) * 100)
top3 = str(topn[2][1])
top3pct = str(round(topn[2][0], 3) * 100)
top4 = str(topn[3][1])
top4pct = str(round(topn[3][0], 3) * 100)
top5 = str(topn[4][1])
top5pct = str(round(topn[4][0], 3) * 100)

end = time.time()

# face[-4096:]
row = { 'uuid': uniqueid, 'top1pct': top1pct, 'top1': top1, 'top2pct': top2pct, 'top2': top2, 'top3pct': top3pct, 'top3': top3, 'top4pct': top4pct, 'top4': top4, 'top5pct': top5pct, 'top5': top5, 'cputemp': cputemp, 'gputemp': gputemp, 'imagefilename': filename, 'gputempf': gputempf, 'cputempf': cputempf, 'runtime': str(round(end - start)), 'facedetect': face, 'imagenet': imagenet }
json_string = json.dumps(row)
print(json_string)
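The thermal-zone reads in the script can be factored into a small helper. This is a hedged Python 3 sketch (the original script is Python 2); the millidegree-to-Celsius/Fahrenheit math mirrors the script, and the `read_zone` helper is an illustrative name, not part of the original:

```python
def parse_millidegrees(raw):
    """Convert a sysfs thermal_zone reading like '21500\\n' into
    (celsius, fahrenheit) strings, mirroring the math in analyze.py."""
    celsius = round(float(raw.strip())) / 1000.0
    fahrenheit = round(9.0 / 5.0 * celsius + 32)
    return str(celsius), str(fahrenheit)

def read_zone(zone):
    """Read a Jetson thermal zone (sysfs path layout from the script above)."""
    with open("/sys/devices/virtual/thermal/thermal_zone%d/temp" % zone) as f:
        return parse_millidegrees(f.readline())

print(parse_millidegrees("21500\n"))  # ('21.5', '71')
```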
Setup Jetson TX1 for Deep Learning and Computer Vision
sudo apt-get update -y
sudo apt-get -y install git build-essential libatlas-base-dev libopencv-dev graphviz python-pip
sudo pip install pip --upgrade
sudo pip install setuptools numpy --upgrade
Apache Hive DDL
CREATE EXTERNAL TABLE IF NOT EXISTS jetsonscan
(top3pct STRING, uuid STRING, top1pct STRING, top5 STRING, top4 STRING,
 top3 STRING, top2 STRING, top1 STRING, top4pct STRING, facedetect STRING,
 gputempf STRING, gputemp STRING, top5pct STRING, top2pct STRING, cputemp STRING,
 imagenet STRING, runtime STRING, imagefilename STRING, cputempf STRING)
STORED AS ORC
LOCATION '/jetsonscan';
Build Apache MiniFi Configuration
minifi-toolkit-0.2.0/bin/config.sh transform $1 config.yml
scp config.yml nvidia@192.168.1.190:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/minifi-0.2.0/conf/
Example Output JSON
{
"top3pct" : "6.1",
"uuid" : "mxnet_uuid_pgo_20171110193628",
"top1pct" : "8.3",
"top5" : "n03110669 cornet, horn, trumpet, trump",
"top4" : "n03481172 hammer",
"top3" : "n02787622 banjo",
"top2" : "n02791270 barbershop",
"top1" : "n04487394 trombone",
"top4pct" : "4.4",
"facedetect" : "networks/facenet-120/snapshot_iter_24000.caffemodel initialized.|[cuda] cudaAllocMapped 16 bytes, CPU 0x1013a0000 GPU 0x1013a0000|maximum bounding boxes: 3136|[cuda] cudaAllocMapped 50176 bytes, CPU 0x1012a6200 GPU 0x1012a6200|[cuda] cudaAllocMapped 12544 bytes, CPU 0x1011a1a00 GPU 0x1011a1a00|failed to load image /media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/tx1_image_xmv_20171110193629.jpg|failed to load image '/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/tx1_image_xmv_20171110193629.jpg'|",
"gputempf" : "68.0",
"gputemp" : "20.0",
"top5pct" : "3.2",
"top2pct" : "6.4",
"cputemp" : "21.5",
"imagenet" : "imagenet-console| args (3): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console] 1 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/tx1_image_xmv_20171110193629.jpg] 2 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/cfout-tx1_image_xmv_20171110193629.jpg] |||imageNet -- loading classification network model from:| -- prototxt networks/googlenet.prototxt| -- model networks/bvlc_googlenet.caffemodel| -- class_labels networks/ilsvrc12_synset_words.txt| -- input_blob 'data'| -- output_blob 'prob'| -- batch_size 2||[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache|[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache|[GIE] platform has FP16 support.|[GIE] networks/bvlc_googlenet.caffemodel loaded|[GIE] CUDA engine context initialized with 2 bindings|[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0|[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224|[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000|[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1|[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000|[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000|networks/bvlc_googlenet.caffemodel initialized.|[GIE] networks/bvlc_googlenet.caffemodel loaded|imageNet -- loaded 1000 class info entries|networks/bvlc_googlenet.caffemodel initialized.|failed to load image /media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/tx1_image_xmv_20171110193629.jpg|failed to load image '/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/tx1_image_xmv_20171110193629.jpg'|",
"runtime" : "8.0",
"imagefilename" : "tx1_image_xmv_20171110193629.jpg",
"cputempf" : "71.0"
}
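Downstream, a consumer can pull fields straight out of that JSON row with the stdlib. A minimal sketch; the field names and the synset-id-plus-label format of `top1` come from the example output above, and the trimmed `sample` row is illustrative:

```python
import json

# A trimmed-down row, using values from the example output above.
sample = '{"top1": "n04487394 trombone", "top1pct": "8.3", "runtime": "8.0"}'
row = json.loads(sample)
# top1 is a WordNet synset id followed by the human-readable label.
synset_id, label = row["top1"].split(" ", 1)
print(label, row["top1pct"] + "%")  # trombone 8.3%
```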
Schema (Put this in Hortonworks Schema Registry) - MXRECORD
{ "type" : "record", "name" : "MXRECORD", "fields" :
[ { "name" : "top3pct", "type" : "string", "doc" : "Type inferred from '\"5.0\"'" },
{ "name" : "uuid", "type" : "string", "doc" : "Type inferred from '\"mxnet_uuid_ltu_20171110193847\"'" },
{ "name" : "top1pct", "type" : "string", "doc" : "Type inferred from '\"5.4\"'" },
{ "name" : "top5", "type" : "string", "doc" : "Type inferred from '\"n03970156 plunger, plumber's helper\"'" },
{ "name" : "top4", "type" : "string", "doc" : "Type inferred from '\"n07615774 ice lolly, lolly, lollipop, popsicle\"'" },
{ "name" : "top3", "type" : "string", "doc" : "Type inferred from '\"n04270147 spatula\"'" },
{ "name" : "top2", "type" : "string", "doc" : "Type inferred from '\"n03110669 cornet, horn, trumpet, trump\"'" },
{ "name" : "top1", "type" : "string", "doc" : "Type inferred from '\"n04487394 trombone\"'" },
{ "name" : "top4pct", "type" : "string", "doc" : "Type inferred from '\"4.5\"'" },
{ "name" : "facedetect", "type" : "string" },
{ "name" : "gputempf", "type" : "string", "doc" : "Type inferred from '\"68.0\"'" },
{ "name" : "gputemp", "type" : "string", "doc" : "Type inferred from '\"20.0\"'" },
{ "name" : "top5pct", "type" : "string", "doc" : "Type inferred from '\"4.4\"'" },
{ "name" : "top2pct", "type" : "string", "doc" : "Type inferred from '\"5.3\"'" },
{ "name" : "cputemp", "type" : "string", "doc" : "Type inferred from '\"23.0\"'" },
{ "name" : "imagenet", "type" : "string" },
{ "name" : "runtime", "type" : "string", "doc" : "Type inferred from '\"8.0\"'" },
{ "name" : "imagefilename", "type" : "string", "doc" : "Type inferred from '\"tx1_image_okg_20171110193848.jpg\"'" },
{ "name" : "cputempf", "type" : "string", "doc" : "Type inferred from '\"73.0\"'" }
]
}
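A quick way to sanity-check that a row from the script matches MXRECORD is to compare key sets. This is a hedged, stdlib-only sketch (in a real flow, NiFi's record readers would validate against the Schema Registry instead); the field names are copied from the schema above:

```python
import json

# Field names declared by the MXRECORD schema above.
schema_fields = {"top3pct", "uuid", "top1pct", "top5", "top4", "top3", "top2",
                 "top1", "top4pct", "facedetect", "gputempf", "gputemp",
                 "top5pct", "top2pct", "cputemp", "imagenet", "runtime",
                 "imagefilename", "cputempf"}

def matches_schema(json_string):
    """True when the row carries exactly the fields MXRECORD declares."""
    return set(json.loads(json_string).keys()) == schema_fields

print(matches_schema('{"uuid": "x"}'))  # False: most fields missing
```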
Example Apache MiniFi Logs
2017-11-10 15:13:53,061 INFO [Provenance Maintenance Thread-3] o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers for events starting with ID 51004
2017-11-10 15:13:53,084 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.lucene.SimpleIndexManager Index Writer for provenance_repository/index-1503524885000 has been returned to Index Manager and is no longer in use. Closing Index Writer
2017-11-10 15:13:53,086 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully merged 16 journal files (6 records) into single Provenance Log File provenance_repository/50998.prov in 28 milliseconds
2017-11-10 15:13:53,087 INFO [Provenance Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository Successfully Rolled over Provenance Event file containing 70 records. In the past 5 minutes, 29 events have been written to the Provenance Repository, totaling 18.54 KB
2017-11-10 15:14:08,531 INFO [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@60bcd09e Successfully refreshed Peer Status; remote instance consists of 1 peers
2017-11-10 15:14:38,658 WARN [ExecuteProcess c216f845-1839-3f3c-0000-000000000000 Task] o.a.n.processors.standard.ExecuteProcess ExecuteProcess[id=c216f845-1839-3f3c-0000-000000000000] [15:14:38] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
2017-11-10 15:14:38,665 WARN [ExecuteProcess c216f845-1839-3f3c-0000-000000000000 Task] o.a.n.processors.standard.ExecuteProcess ExecuteProcess[id=c216f845-1839-3f3c-0000-000000000000] [15:14:38] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
2017-11-10 15:14:38,716 WARN [ExecuteProcess c216f845-1839-3f3c-0000-000000000000 Task] o.a.n.processors.standard.ExecuteProcess ExecuteProcess[id=c216f845-1839-3f3c-0000-000000000000] /media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/mxnet/python/mxnet/module/base_module.py:65: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
2017-11-10 15:14:38,717 WARN [ExecuteProcess c216f845-1839-3f3c-0000-000000000000 Task] o.a.n.processors.standard.ExecuteProcess ExecuteProcess[id=c216f845-1839-3f3c-0000-000000000000] warnings.warn(msg)
2017-11-10 15:14:38,965 WARN [ExecuteProcess c216f845-1839-3f3c-0000-000000000000 Task] o.a.n.processors.standard.ExecuteProcess ExecuteProcess[id=c216f845-1839-3f3c-0000-000000000000] HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP
Resources
https://github.com/tspannhw/nvidiajetsontx1-mxnet
https://developer.nvidia.com/embedded/twodaystoademo
https://github.com/dusty-nv/jetson-inference
https://developer.nvidia.com/tensorrt
https://developer.nvidia.com/embedded/buy/jetson-tx1-devkit
Flow Files
storejetsontx1.xml
jetsontx1mx-10nov2017.xml
11-16-2017
10:06 PM
incrementalstream-1.xml
10-27-2017
06:15 PM
3 Kudos
If you have not attended a DataWorks Summit, I highly recommend it. It is an amazing event held at three locations a year and is a great community experience. The content is deep and highly technical, and you will learn about the current state of the art and what is coming next. It's not just Big Data, but AI, Streaming, Microservices, Containers, Cloud and many other topics that startups and enterprises alike need to know. My topic was a simple talk on using Apache NiFi to ingest and transform various data types. There is a small group forming around my quickly released Inception v3 TensorFlow Apache NiFi Processor; I encourage you to try it and provide feedback, pull requests, bug reports, documentation, unit tests, examples and more. The Java API for TensorFlow is new, so this is really basic. Thanks to @Simon Elliston Ball for a major cleanup on it. https://github.com/tspannhw/nifi-tensorflow-processor

What do we want to do?
- MiniFi ingests camera images and sensor data
- Run TensorFlow Inception v3 to recognize objects in the images
- NiFi stores images, metadata and enriched data in Hadoop
- NiFi ingests social data and feeds
- NiFi analyzes sentiment of textual data

Ways to run TensorFlow:
- TensorFlow (C++, Python, Java) via ExecuteStreamCommand
- TensorFlow NiFi Java custom processor
- TensorFlow running on edge nodes (MiniFi)
- TensorFlow Mobile (iOS, Android, RPi)
- TensorFlow on Spark (Yahoo) via Livy, S2S, Kafka
- TensorFlow running in containers in YARN 3.0 on Hadoop
- (NiFi 1.4) gRPC call to TensorFlow Serving

Example classification run:
python classify_image.py --image_file /dir/solarroofpanel.jpg
solar dish, solar collector, solar furnace (score = 0.98316)
window screen (score = 0.00196)
manhole cover (score = 0.00070)
radiator (score = 0.00041)
doormat, welcome mat (score = 0.00041)

Python Uses
pip install -U textblob
python -m textblob.download_corpora
pip install -U spacy
python -m spacy.en.download all
pip install -U nltk
pip install -U numpy

run.sh
python sentiment.py "$@"

sentiment.py
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import sys

sid = SentimentIntensityAnalyzer()
ss = sid.polarity_scores(sys.argv[1])
print('Compound {0} Negative {1} Neutral {2} Positive {3} '.format(ss['compound'], ss['neg'], ss['neu'], ss['pos']))
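On the NiFi side, that single output line can be split back into fields (e.g. with an ExtractText-style regex). A hedged sketch matching the print format above; the `line` value here is an illustrative score, not real VADER output:

```python
import re

# Illustrative example of the one-line format sentiment.py prints.
line = "Compound -0.5574 Negative 0.4 Neutral 0.6 Positive 0.0 "
m = re.match(r"Compound (\S+) Negative (\S+) Neutral (\S+) Positive (\S+)", line)
scores = dict(zip(("compound", "neg", "neu", "pos"), map(float, m.groups())))
print(scores)  # {'compound': -0.5574, 'neg': 0.4, 'neu': 0.6, 'pos': 0.0}
```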
These are some good Python libraries to be using. I recommend using Python 3.x unless you are stuck with 2.6/2.7. I have also created two processors for working with text/NLP; these are listed below for Apache OpenNLP and Stanford CoreNLP. Please comment in HCC (here), check out GitHub and do pull requests (https://github.com/tspannhw) and come to a meetup (https://www.meetup.com/futureofdata-princeton/).

References:
https://github.com/tspannhw/dws2017sydney
https://dataworkssummit.com/sydney-2017/sessions/real-time-ingesting-and-transforming-sensor-data-and-social-data-with-nifi-and-tensorflow/
https://www.slideshare.net/Hadoop_Summit/realtime-ingesting-and-transforming-sensor-data-and-social-data-with-nifi-and-tensorflow
https://hortonworks.com/blog/7-sessions-dataworks-summit-sydney-see/
https://community.hortonworks.com/articles/58265/analyzing-images-in-hdf-20-using-tensorflow.html
https://community.hortonworks.com/articles/76935/using-sentiment-analysis-and-nlp-tools-with-hdp-25.html
http://www.nltk.org/install.html
https://github.com/tspannhw/nifi-nlp-processor
https://community.hortonworks.com/articles/80418/open-nlp-example-apache-nifi-processor.html
https://community.hortonworks.com/articles/81270/adding-stanford-corenlp-to-big-data-pipelines-apac-1.html
10-06-2017
06:54 PM
There has been a major upgrade of the processor thanks to @Simon Elliston Ball. Check out the latest version, 2.1. I am also prototyping a DL4J processor based on this and some code from the SkyMind guys.
09-26-2017
03:54 PM
Sounds like a good custom processor.
09-13-2017
01:28 PM
Livy supports that; it is now a full citizen in HDP. I have not tried it, but post a question.
09-10-2017
02:15 PM
1 Kudo
Part 1: Installation and Setup

Quick Hardware Breakdown
NVIDIA Maxwell™, 256 CUDA cores
Quad ARM® A57 / 2 MB L2
4K x 2K 30 Hz Encode (HEVC), 4K x 2K 60 Hz Decode (10-Bit Support)
4 GB 64-bit LPDDR4, 25.6 GB/s
2x DSI, 1x eDP 1.4 / DP 1.2 / HDMI
16 GB eMMC, SDIO, SATA
Up to 6 Cameras (2 Lane), CSI2 D-PHY 1.1 (1.5 Gbps/Lane)
UART, SPI, I2C, I2S, GPIOs
1 Gigabit Ethernet, 802.11ac WLAN, Bluetooth

Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images
Part 4: Using MiniFi to Send the Data and NiFi to Consume and Convert

Build the Config File
minifi-toolkit-0.2.0/bin/config.sh transform TensorRTMiniFi.xml config.yml

Note: Do not install MiniFi as a service; I had issues with that on this version of Ubuntu.

References:
https://nifi.apache.org/minifi/system-admin-guide.html
minifi.sh flowStatus processor:TailFile:health,stats,bulletins
https://nifi.apache.org/minifi/getting-started.html
https://unsplash.it/
09-09-2017
02:02 PM
2 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images

NVIDIA also provides a good example C++ program for detecting faces, so we try that out next. You can add more training data to improve results, but it found me okay. In the next step we'll connect to MiniFi.

Shell Source:
root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./facedetect.sh
detectnet-console
args (4): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console] 1 [backupimages/tim.jpg] 2 [images/outputtim.png] 3 [facenet]
detectNet -- loading detection network model from:
-- prototxt networks/facenet-120/deploy.prototxt
-- model networks/facenet-120/snapshot_iter_24000.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- threshold 0.500000
-- batch_size 2
[GIE] attempting to open cache file networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input binding index: 0
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input dims (b=2 c=3 h=450 w=450) size=4860000
[cuda] cudaAllocMapped 4860000 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage binding index: 1
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage dims (b=2 c=1 h=28 w=28) size=6272
[cuda] cudaAllocMapped 6272 bytes, CPU 0x1011a0000 GPU 0x1011a0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes binding index: 2
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes dims (b=2 c=4 h=28 w=28) size=25088
[cuda] cudaAllocMapped 25088 bytes, CPU 0x1012a0000 GPU 0x1012a0000
networks/facenet-120/snapshot_iter_24000.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x1013a0000 GPU 0x1013a0000
maximum bounding boxes: 3136
[cuda] cudaAllocMapped 50176 bytes, CPU 0x1012a6200 GPU 0x1012a6200
[cuda] cudaAllocMapped 12544 bytes, CPU 0x1011a1a00 GPU 0x1011a1a00
loaded image backupimages/tim.jpg (400 x 400) 2560000 bytes
[cuda] cudaAllocMapped 2560000 bytes, CPU 0x1014a0000 GPU 0x1014a0000
detectnet-console: beginning processing network (1505047556083)
[GIE] layer deploy_transform input reformatter 0 - 4.594114 ms
[GIE] layer deploy_transform - 1.522865 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 24.272917 ms
[GIE] layer pool1/3x3_s2 - 4.988593 ms
[GIE] layer pool1/norm1 - 1.322396 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 2.462032 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 29.438957 ms
[GIE] layer conv2/norm2 - 3.703281 ms
[GIE] layer pool2/3x3_s2 - 3.817292 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 4.193281 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 11.074271 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 2.207708 ms
[GIE] layer inception_3a/pool - 1.708906 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 1.522240 ms
[GIE] layer inception_3a/1x1 copy - 0.194323 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 8.700052 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 21.696459 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 10.463386 ms
[GIE] layer inception_3b/pool - 2.265937 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 1.910729 ms
[GIE] layer inception_3b/1x1 copy - 0.354375 ms
[GIE] layer pool3/3x3_s2 - 1.903125 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 4.471615 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 6.044531 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.968907 ms
[GIE] layer inception_4a/pool - 1.064114 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 1.103750 ms
[GIE] layer inception_4a/1x1 copy - 0.152396 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 4.764219 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 4.324583 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 1.413073 ms
[GIE] layer inception_4b/pool - 1.132969 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 1.176146 ms
[GIE] layer inception_4b/1x1 copy - 0.132864 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 4.738177 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 5.503698 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 1.394011 ms
[GIE] layer inception_4c/pool - 1.132656 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 1.157812 ms
[GIE] layer inception_4c/1x1 copy - 0.111927 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 4.727709 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 6.811302 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 1.772187 ms
[GIE] layer inception_4d/pool - 1.132084 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 1.161718 ms
[GIE] layer inception_4d/1x1 copy - 0.103438 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 7.476458 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 12.779844 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 3.287656 ms
[GIE] layer inception_4e/pool - 1.165417 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 2.159844 ms
[GIE] layer inception_4e/1x1 copy - 0.195000 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 11.466510 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 12.746927 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 3.235729 ms
[GIE] layer inception_5a/pool - 1.818386 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 3.259010 ms
[GIE] layer inception_5a/1x1 copy - 0.194844 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 14.704739 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 11.462292 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 4.753594 ms
[GIE] layer inception_5b/pool - 1.817604 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 3.259792 ms
[GIE] layer inception_5b/1x1 copy - 0.274687 ms
[GIE] layer cvg/classifier - 2.113386 ms
[GIE] layer coverage/sig - 0.059687 ms
[GIE] layer coverage/sig output reformatter 0 - 0.042969 ms
[GIE] layer bbox/regressor - 2.062864 ms
[GIE] layer bbox/regressor output reformatter 0 - 0.053386 ms
[GIE] layer network time - 301.203705 ms
detectnet-console: finished processing network (1505047556394)
1 bounding boxes detected
bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) w=175.861115 h=272.722229
draw boxes 1 0 0.000000 200.000000 255.000000 100.000000
detectnet-console: writing 400x400 image to 'images/outputtim.png'
detectnet-console: successfully wrote 400x400 image to 'images/outputtim.png'
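To get the geometry out of that log, the `bounding box` line can be parsed downstream. A hedged sketch against the exact line printed above (the regex is an assumption about the format, based only on this one sample):

```python
import re

# The bounding-box line from the detectnet-console run above.
line = ("bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) "
        "w=175.861115 h=272.722229")
m = re.match(r"bounding box (\d+) \(([-\d.]+), ([-\d.]+)\) "
             r"\(([-\d.]+), ([-\d.]+)\) w=([-\d.]+) h=([-\d.]+)", line)
idx, x1, y1, x2, y2, w, h = m.groups()
print(float(w), float(h))  # 175.861115 272.722229
```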
References:
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
https://unsplash.com/search/photos/face
networks/bvlc_googlenet.caffemodel
https://github.com/JunhongXu/tx1-neural-navigation
https://github.com/jetsonhacks?tab=repositories
https://github.com/dusty-nv/jetson-inference
https://github.com/dusty-nv/jetson-inference/blob/master/docs/deep-learning.md
https://github.com/NVIDIA/DIGITS/tree/master/examples/semantic-segmentation
https://github.com/jetsonhacks/installTensorFlowTX1
https://github.com/open-horizon/cogwerx-jetson-tx1/wiki/Yolo-and-Darknet-on-the-TX1
https://nvidia.qwiklab.com/focuses/preview/223?locale=en
http://www.jetsonhacks.com/2016/12/30/tensorflow-nvidia-jetson-tx1-development-kit/
http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/3.0/jetpack_l4t_install.htm
https://developer.nvidia.com/embedded/buy/jetson-tx1-devkit
https://developer.nvidia.com/cublas
09-09-2017
01:56 PM
3 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet

Use This Project: https://github.com/dusty-nv/jetson-inference
This will create a C++ executable to run classification for ImageNet with TensorRT.

Shell Call Example
root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./runclassify.sh
imagenet-console
args (3): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console] 1 [backupimages/granny_smith_1.jpg] 2 [images/output_0.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
loaded image backupimages/granny_smith_1.jpg (1000 x 1000) 16000000 bytes
[cuda] cudaAllocMapped 16000000 bytes, CPU 0x100f20000 GPU 0x100f20000
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 1.207813 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 6.144531 ms
[GIE] layer pool1/3x3_s2 - 1.301354 ms
[GIE] layer pool1/norm1 - 0.412240 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.737552 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 11.184843 ms
[GIE] layer conv2/norm2 - 1.052657 ms
[GIE] layer pool2/3x3_s2 - 0.946510 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 1.299844 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 3.431562 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.697657 ms
[GIE] layer inception_3a/pool - 0.449479 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.542916 ms
[GIE] layer inception_3a/1x1 copy - 0.074375 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 2.582917 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 6.324167 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 3.262968 ms
[GIE] layer inception_3b/pool - 0.586719 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.657552 ms
[GIE] layer inception_3b/1x1 copy - 0.111511 ms
[GIE] layer pool3/3x3_s2 - 0.608333 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 1.589531 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 1.027396 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.420052 ms
[GIE] layer inception_4a/pool - 0.306563 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.464583 ms
[GIE] layer inception_4a/1x1 copy - 0.060417 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 1.416875 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 1.157135 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.555886 ms
[GIE] layer inception_4b/pool - 0.331354 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.485677 ms
[GIE] layer inception_4b/1x1 copy - 0.056041 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 1.454011 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 2.771198 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.554844 ms
[GIE] layer inception_4c/pool - 0.502604 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.486198 ms
[GIE] layer inception_4c/1x1 copy - 0.050833 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 1.419271 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 1.781406 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.680052 ms
[GIE] layer inception_4d/pool - 0.333542 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.483854 ms
[GIE] layer inception_4d/1x1 copy - 0.048229 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 2.225573 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 4.142656 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.954427 ms
[GIE] layer inception_4e/pool - 0.332917 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.667344 ms
[GIE] layer inception_4e/1x1 copy - 0.071666 ms
[GIE] layer pool4/3x3_s2 - 0.275625 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 1.685417 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 2.085990 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.391198 ms
[GIE] layer inception_5a/pool - 0.187552 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.964791 ms
[GIE] layer inception_5a/1x1 copy - 0.041094 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 2.327656 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 1.884532 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 1.364895 ms
[GIE] layer inception_5b/pool - 0.189219 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.453490 ms
[GIE] layer inception_5b/1x1 copy - 0.045781 ms
[GIE] layer pool5/7x7_s1 - 0.743281 ms
[GIE] layer loss3/classifier input reformatter 0 - 0.042552 ms
[GIE] layer loss3/classifier - 0.848386 ms
[GIE] layer loss3/classifier output reformatter 0 - 0.042969 ms
[GIE] layer prob - 0.092343 ms
[GIE] layer prob output reformatter 0 - 0.042552 ms
[GIE] layer network time - 84.158958 ms
class 0948 - 1.000000 (Granny Smith)
imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x101fa0000 GPU 0x101fa0000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
imagenet-console: attempting to save output image to 'images/output_0.jpg'
imagenet-console: completed saving 'images/output_0.jpg'
shutting down...
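The final result line from imagenet-console has a stable shape, so a flow can extract the class and confidence from it. A hedged sketch using the result line from the run above (the regex is an assumption inferred from this one sample):

```python
import re

# The classification result line from the imagenet-console run above.
line = ("imagenet-console: 'backupimages/granny_smith_1.jpg' -> "
        "100.00000% class #948 (Granny Smith)")
m = re.match(r"imagenet-console: '(.+)' -> ([\d.]+)% class #(\d+) \((.+)\)", line)
path, pct, class_id, label = m.groups()
print(label, pct + "%")  # Granny Smith 100.00000%
```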
Input File
Output File

The output image contains the highest-probability label for what's in the image. This one says sunscreen, which is a bit weird; I am guessing it is because my original image is very sunny.

Source Code
https://github.com/tspannhw/jetsontx1-TensorRT

Resources
http://www.jetsonhacks.com/2017/01/28/install-samsung-ssd-on-nvidia-jetson-tx1/
https://github.com/PhilipChicco/pedestrianSys
https://github.com/jetsonhacks?tab=repositories
https://github.com/Netzeband/JetsonTX1_im2txt
https://github.com/DJTobias/Cherry-Autonomous-Racecar
https://github.com/jetsonhacks/postFlashTX1
https://github.com/jetsonhacks/installTensorFlowTX1
http://www.jetsonhacks.com/2016/12/21/jetson-tx1-swap-file-and-development-preparation/