09-10-2017 02:15 PM | 1 Kudo
Part 1: Installation and Setup

NVIDIA Jetson TX1 specifications:

GPU: NVIDIA Maxwell™, 256 CUDA cores
CPU: Quad ARM® A57 / 2 MB L2
Video: 4K x 2K 30 Hz encode (HEVC), 4K x 2K 60 Hz decode (10-bit support)
Memory: 4 GB 64-bit LPDDR4, 25.6 GB/s
Display: 2x DSI, 1x eDP 1.4 / DP 1.2 / HDMI
Storage: 16 GB eMMC, SDIO, SATA
Camera: up to 6 cameras (2-lane), CSI2 D-PHY 1.1 (1.5 Gbps/lane)
I/O: UART, SPI, I2C, I2S, GPIOs
Connectivity: 1 Gigabit Ethernet, 802.11ac WLAN, Bluetooth

Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images
Part 4: Using MiniFi to Send the Data and NiFi to Consume and Convert

Build the Config File

minifi-toolkit-0.2.0/bin/config.sh transform TensorRTMiniFi.xml config.yml

Note: Do not install MiniFi as a service; I had issues with that on this version of Ubuntu.

References:
https://nifi.apache.org/minifi/system-admin-guide.html
minifi.sh flowStatus processor:TailFile:health,stats,bulletins
https://nifi.apache.org/minifi/getting-started.html
https://unsplash.it/
09-09-2017 02:02 PM | 2 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images

NVIDIA also provides a good example C++ program for detecting faces, so we try that out next. You can add more training data to improve results, but it found me okay. In the next step we'll connect to MiniFi.

Shell Source:

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./facedetect.sh
detectnet-console
args (4): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console] 1 [backupimages/tim.jpg] 2 [images/outputtim.png] 3 [facenet]
detectNet -- loading detection network model from:
-- prototxt networks/facenet-120/deploy.prototxt
-- model networks/facenet-120/snapshot_iter_24000.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- threshold 0.500000
-- batch_size 2
[GIE] attempting to open cache file networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input binding index: 0
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input dims (b=2 c=3 h=450 w=450) size=4860000
[cuda] cudaAllocMapped 4860000 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage binding index: 1
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage dims (b=2 c=1 h=28 w=28) size=6272
[cuda] cudaAllocMapped 6272 bytes, CPU 0x1011a0000 GPU 0x1011a0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes binding index: 2
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes dims (b=2 c=4 h=28 w=28) size=25088
[cuda] cudaAllocMapped 25088 bytes, CPU 0x1012a0000 GPU 0x1012a0000
networks/facenet-120/snapshot_iter_24000.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x1013a0000 GPU 0x1013a0000
maximum bounding boxes: 3136
[cuda] cudaAllocMapped 50176 bytes, CPU 0x1012a6200 GPU 0x1012a6200
[cuda] cudaAllocMapped 12544 bytes, CPU 0x1011a1a00 GPU 0x1011a1a00
loaded image backupimages/tim.jpg (400 x 400) 2560000 bytes
[cuda] cudaAllocMapped 2560000 bytes, CPU 0x1014a0000 GPU 0x1014a0000
detectnet-console: beginning processing network (1505047556083)
[GIE] layer deploy_transform input reformatter 0 - 4.594114 ms
[GIE] layer deploy_transform - 1.522865 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 24.272917 ms
[GIE] layer pool1/3x3_s2 - 4.988593 ms
[GIE] layer pool1/norm1 - 1.322396 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 2.462032 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 29.438957 ms
[GIE] layer conv2/norm2 - 3.703281 ms
[GIE] layer pool2/3x3_s2 - 3.817292 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 4.193281 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 11.074271 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 2.207708 ms
[GIE] layer inception_3a/pool - 1.708906 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 1.522240 ms
[GIE] layer inception_3a/1x1 copy - 0.194323 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 8.700052 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 21.696459 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 10.463386 ms
[GIE] layer inception_3b/pool - 2.265937 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 1.910729 ms
[GIE] layer inception_3b/1x1 copy - 0.354375 ms
[GIE] layer pool3/3x3_s2 - 1.903125 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 4.471615 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 6.044531 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.968907 ms
[GIE] layer inception_4a/pool - 1.064114 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 1.103750 ms
[GIE] layer inception_4a/1x1 copy - 0.152396 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 4.764219 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 4.324583 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 1.413073 ms
[GIE] layer inception_4b/pool - 1.132969 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 1.176146 ms
[GIE] layer inception_4b/1x1 copy - 0.132864 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 4.738177 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 5.503698 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 1.394011 ms
[GIE] layer inception_4c/pool - 1.132656 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 1.157812 ms
[GIE] layer inception_4c/1x1 copy - 0.111927 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 4.727709 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 6.811302 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 1.772187 ms
[GIE] layer inception_4d/pool - 1.132084 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 1.161718 ms
[GIE] layer inception_4d/1x1 copy - 0.103438 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 7.476458 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 12.779844 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 3.287656 ms
[GIE] layer inception_4e/pool - 1.165417 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 2.159844 ms
[GIE] layer inception_4e/1x1 copy - 0.195000 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 11.466510 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 12.746927 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 3.235729 ms
[GIE] layer inception_5a/pool - 1.818386 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 3.259010 ms
[GIE] layer inception_5a/1x1 copy - 0.194844 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 14.704739 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 11.462292 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 4.753594 ms
[GIE] layer inception_5b/pool - 1.817604 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 3.259792 ms
[GIE] layer inception_5b/1x1 copy - 0.274687 ms
[GIE] layer cvg/classifier - 2.113386 ms
[GIE] layer coverage/sig - 0.059687 ms
[GIE] layer coverage/sig output reformatter 0 - 0.042969 ms
[GIE] layer bbox/regressor - 2.062864 ms
[GIE] layer bbox/regressor output reformatter 0 - 0.053386 ms
[GIE] layer network time - 301.203705 ms
detectnet-console: finished processing network (1505047556394)
1 bounding boxes detected
bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) w=175.861115 h=272.722229
draw boxes 1 0 0.000000 200.000000 255.000000 100.000000
detectnet-console: writing 400x400 image to 'images/outputtim.png'
detectnet-console: successfully wrote 400x400 image to 'images/outputtim.png'
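The detectnet-console output above is plain text, so it is easy to post-process. Below is a minimal sketch (my own helper, not part of jetson-inference; the regexes assume the exact log format shown above) that pulls the detected bounding boxes and per-layer [GIE] timings out of a captured log:

```python
import re

def parse_detectnet_log(log):
    """Extract bounding boxes and per-layer GIE timings from a detectnet-console log."""
    boxes = []
    layer_ms = {}
    for line in log.splitlines():
        # e.g. "bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) ..."
        b = re.match(r'bounding box \d+ \(([-\d.]+), ([-\d.]+)\) \(([-\d.]+), ([-\d.]+)\)', line)
        if b:
            x1, y1, x2, y2 = map(float, b.groups())
            boxes.append({'x1': x1, 'y1': y1, 'w': x2 - x1, 'h': y2 - y1})
        # e.g. "[GIE]  layer network time - 301.203705 ms"
        t = re.match(r'\[GIE\]\s+layer (.+) - ([\d.]+) ms', line)
        if t:
            layer_ms[t.group(1)] = float(t.group(2))
    return boxes, layer_ms

sample = """bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) w=175.861115 h=272.722229
[GIE]  layer network time - 301.203705 ms"""
boxes, timings = parse_detectnet_log(sample)
print(round(boxes[0]['w'], 4))    # 175.8611
print(timings['network time'])    # 301.203705
```

The same approach works for the per-layer lines, which is handy for spotting the slowest inception blocks in the timings above.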
References:
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
https://unsplash.com/search/photos/face
networks/bvlc_googlenet.caffemodel
https://github.com/JunhongXu/tx1-neural-navigation
https://github.com/jetsonhacks?tab=repositories
https://github.com/dusty-nv/jetson-inference
https://github.com/dusty-nv/jetson-inference/blob/master/docs/deep-learning.md
https://github.com/NVIDIA/DIGITS/tree/master/examples/semantic-segmentation
https://github.com/jetsonhacks/installTensorFlowTX1
https://github.com/open-horizon/cogwerx-jetson-tx1/wiki/Yolo-and-Darknet-on-the-TX1
https://nvidia.qwiklab.com/focuses/preview/223?locale=en
http://www.jetsonhacks.com/2016/12/30/tensorflow-nvidia-jetson-tx1-development-kit/
http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/3.0/jetpack_l4t_install.htm
https://developer.nvidia.com/embedded/buy/jetson-tx1-devkit
https://developer.nvidia.com/cublas
09-09-2017 01:56 PM | 3 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet

Use This Project: https://github.com/dusty-nv/jetson-inference

This will create a C++ executable to run classification for ImageNet with TensorRT.

Shell Call Example:

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./runclassify.sh
imagenet-console
args (3): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console] 1 [backupimages/granny_smith_1.jpg] 2 [images/output_0.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
loaded image backupimages/granny_smith_1.jpg (1000 x 1000) 16000000 bytes
[cuda] cudaAllocMapped 16000000 bytes, CPU 0x100f20000 GPU 0x100f20000
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 1.207813 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 6.144531 ms
[GIE] layer pool1/3x3_s2 - 1.301354 ms
[GIE] layer pool1/norm1 - 0.412240 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.737552 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 11.184843 ms
[GIE] layer conv2/norm2 - 1.052657 ms
[GIE] layer pool2/3x3_s2 - 0.946510 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 1.299844 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 3.431562 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.697657 ms
[GIE] layer inception_3a/pool - 0.449479 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.542916 ms
[GIE] layer inception_3a/1x1 copy - 0.074375 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 2.582917 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 6.324167 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 3.262968 ms
[GIE] layer inception_3b/pool - 0.586719 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.657552 ms
[GIE] layer inception_3b/1x1 copy - 0.111511 ms
[GIE] layer pool3/3x3_s2 - 0.608333 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 1.589531 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 1.027396 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.420052 ms
[GIE] layer inception_4a/pool - 0.306563 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.464583 ms
[GIE] layer inception_4a/1x1 copy - 0.060417 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 1.416875 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 1.157135 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.555886 ms
[GIE] layer inception_4b/pool - 0.331354 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.485677 ms
[GIE] layer inception_4b/1x1 copy - 0.056041 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 1.454011 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 2.771198 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.554844 ms
[GIE] layer inception_4c/pool - 0.502604 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.486198 ms
[GIE] layer inception_4c/1x1 copy - 0.050833 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 1.419271 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 1.781406 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.680052 ms
[GIE] layer inception_4d/pool - 0.333542 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.483854 ms
[GIE] layer inception_4d/1x1 copy - 0.048229 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 2.225573 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 4.142656 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.954427 ms
[GIE] layer inception_4e/pool - 0.332917 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.667344 ms
[GIE] layer inception_4e/1x1 copy - 0.071666 ms
[GIE] layer pool4/3x3_s2 - 0.275625 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 1.685417 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 2.085990 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.391198 ms
[GIE] layer inception_5a/pool - 0.187552 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.964791 ms
[GIE] layer inception_5a/1x1 copy - 0.041094 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 2.327656 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 1.884532 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 1.364895 ms
[GIE] layer inception_5b/pool - 0.189219 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.453490 ms
[GIE] layer inception_5b/1x1 copy - 0.045781 ms
[GIE] layer pool5/7x7_s1 - 0.743281 ms
[GIE] layer loss3/classifier input reformatter 0 - 0.042552 ms
[GIE] layer loss3/classifier - 0.848386 ms
[GIE] layer loss3/classifier output reformatter 0 - 0.042969 ms
[GIE] layer prob - 0.092343 ms
[GIE] layer prob output reformatter 0 - 0.042552 ms
[GIE] layer network time - 84.158958 ms
class 0948 - 1.000000 (Granny Smith)
imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x101fa0000 GPU 0x101fa0000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
imagenet-console: attempting to save output image to 'images/output_0.jpg'
imagenet-console: completed saving 'images/output_0.jpg'
shutting down...
Input File / Output File: the output image is annotated with the highest-probability class found in the image. This one says "sunscreen," which is a bit weird; I am guessing it is because my original image is very sunny.

Source Code: https://github.com/tspannhw/jetsontx1-TensorRT

Resources:
http://www.jetsonhacks.com/2017/01/28/install-samsung-ssd-on-nvidia-jetson-tx1/
https://github.com/PhilipChicco/pedestrianSys
https://github.com/jetsonhacks?tab=repositories
https://github.com/Netzeband/JetsonTX1_im2txt
https://github.com/DJTobias/Cherry-Autonomous-Racecar
https://github.com/jetsonhacks/postFlashTX1
https://github.com/jetsonhacks/installTensorFlowTX1
http://www.jetsonhacks.com/2016/12/21/jetson-tx1-swap-file-and-development-preparation/
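Since the imagenet-console result line has a fixed shape, a downstream consumer (for example a NiFi flow routing on class name) can scrape it. A small sketch; the function name is my own, and the line format is taken from the console output above:

```python
import re

def parse_classification(line):
    """Parse an imagenet-console result line into (image, confidence_pct, class_id, label)."""
    m = re.match(r"imagenet-console: '(.+)' -> ([\d.]+)% class #(\d+) \((.+)\)", line)
    if not m:
        return None
    image, pct, class_id, label = m.groups()
    return image, float(pct), int(class_id), label

line = "imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)"
print(parse_classification(line))
# ('backupimages/granny_smith_1.jpg', 100.0, 948, 'Granny Smith')
```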
09-09-2017 01:40 PM | 5 Kudos
Use Case: Ingesting sensors, images, voice, and video from moving vehicles and running deep learning in the running vehicle, then transporting data and messages to remote data centers via Apache MiniFi and NiFi over secure Site-to-Site HTTPS.

Background: The NVIDIA Jetson TX1 is a specialized developer kit for running a powerful GPU as an embedded device for robots, UAVs, and other specialized platforms. I envision its usage in field trucks for intermodal, utilities, telecommunications, delivery services, government, and other industries with field vehicles.

Installation and Setup

You will need a workstation running Ubuntu 16 with enough disk space and network access; it will download all the software and push it over the network to your NVIDIA Jetson TX1. You can download Ubuntu from https://www.ubuntu.com/download/desktop. Fortunately I had a MiniPC with 4 GB of RAM that I reformatted with Ubuntu to be the host PC to build my Jetson. You cannot run this from a Mac or Windows machine. You will need a monitor, mouse, and keyboard for your host machine and a set for your NVIDIA Jetson.

First, boot your NVIDIA Jetson, set up WiFi networking, and make sure your monitor, keyboard, and mouse work. Then download the latest NVIDIA JetPack on your host Ubuntu machine from https://developer.nvidia.com/embedded/jetpack. The one I used was JetPack 3.1, which included:

64-bit Ubuntu 16.04
cuDNN 6.0
TensorRT 2.1
CUDA 8.0

The initial login is ubuntu/ubuntu; after installation it will be nvidia/nvidia. Please change that password: security is important, and this GPU could do some serious bitcoin mining.

sudo su
apt update
apt-get install git zip unzip autoconf automake libtool curl zlib1g-dev maven swig bzip2
apt-get purge libopencv4tegra-dev libopencv4tegra
apt-get purge libopencv4tegra-repo
apt-get update
apt-get install build-essential
apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
apt-get install python2.7-dev
apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
apt-get install libgtkglext1 libgtkglext1-dev
apt-get install qtbase5-dev
apt-get install libv4l-dev v4l-utils qv4l2 v4l2ucp
cd $HOME/NVIDIA-INSTALL
./installer.sh

Download and run the NVIDIA Jetson TX1 JetPack from the host Ubuntu computer:

./JetPack-L4T-3.1-linux-x64.run

This will run on the host server for probably an hour; it requires a network connection between the two machines and a few reboots. I added a 64 GB SD card, as the space on the Jetson is tiny. I would recommend adding a big SATA hard drive.

umount /dev/sdb1
mount -o umask=000 -t vfat /dev/sdb1 /media/

Turn on the fan on the Jetson:

echo 255 > /sys/kernel/debug/tegra_fan/target_pwm

Download MiniFi from https://nifi.apache.org/minifi/download.html or https://hortonworks.com/downloads/#dataflow. You will need to install JDK 8:

sudo add-apt-repository ppa:webupd8team/java
sudo apt update
sudo apt install oracle-java8-installer -y
download minifi-0.2.0-bin.zip
unzip *.zip
bin/minifi.sh start
In the next part, we will classify images.

Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images
Part 4: Ingesting with MiniFi and NiFi

Shell Call Example:

/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console pic-007.png outputface7.png facenet

Source Code: https://github.com/tspannhw/jetsontx1-TensorRT

References:
http://elinux.org/Jetson/Computer_Vision_Performance#Hardware_Acceleration_of_OpenCV
https://github.com/dusty-nv/jetson-inference#system-setup
https://github.com/dusty-nv/jetson-inference/blob/master/docs/deep-learning.md
https://github.com/NVIDIA/DIGITS/tree/master/examples/semantic-segmentation
https://developer.nvidia.com/tensorrt
https://community.hortonworks.com/articles/130814/sensors-and-image-capture-and-deep-learning-analys.html
https://github.com/dusty-nv/jetson-inference/blob/master/detectnet-console/detectnet-console.cpp
http://elinux.org/Jetson_TX1#Jetson_TX1_Module
http://www.jetsonhacks.com/2016/12/30/tensorflow-nvidia-jetson-tx1-development-kit/
https://developer.nvidia.com/embedded/learn/tutorials#collapseOne
http://www.jetsonhacks.com/2016/12/30/install-tensorflow-on-nvidia-jetson-tx1-development-kit/
http://www.nvidia.com/object/JetsonTX1DeveloperKitSE.html
http://www.nvidia.com/object/embedded-systems-dev-kits-modules.html
https://github.com/jetsonhacks/installTensorFlowTX1/blob/master/scripts/installDependencies.sh
http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/3.0/jetpack_l4t_install.htm
https://github.com/dusty-nv/jetson-inference/blob/master/data/networks/ilsvrc12_synset_words.txt
https://github.com/tspannhw/rpi-rainbowhat/blob/master/minifi.py
https://nifi.apache.org/minifi/system-admin-guide.html
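The fan step above writes a PWM duty cycle (0-255) into a debugfs file. Here is a tiny hedged Python equivalent of that echo command, with clamping so an out-of-range request cannot be written. The helper names are my own; the path is the one from the setup note above, and writing it requires root on the Jetson:

```python
FAN_PWM_PATH = '/sys/kernel/debug/tegra_fan/target_pwm'  # path from the setup step above

def clamp_pwm(value):
    """Clamp a requested fan speed to the valid 0-255 PWM range."""
    return max(0, min(255, int(value)))

def set_fan_speed(value, path=FAN_PWM_PATH):
    """Equivalent of `echo 255 > target_pwm`; must run as root on the Jetson."""
    with open(path, 'w') as f:
        f.write(str(clamp_pwm(value)))

print(clamp_pwm(300), clamp_pwm(-20), clamp_pwm(128))  # 255 0 128
```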
08-28-2017 11:25 AM
Great!! I set up a Spark cluster with 2 workers. I save a DataFrame using partitionBy("column x") in Parquet format to a path on each worker. The problem is that I am able to save it, but if I want to read it back I get these errors:

- Could not read footer for file FileStatus ...
- unable to specify Schema ...

Any suggestions?
08-21-2017 10:02 PM | 6 Kudos
The MiniFi flow executes two scripts. The first calls TensorFlow Python, which captures an OpenCV Raspberry Pi camera image and runs Inception on it; that message is formatted as JSON and sent on. The second reads GPS values from a USB GPS sensor and outputs JSON. GetFile reads the Pi camera image produced by the ClassifyImages process. Cleanup Logs is a standalone timed script that cleans up old logs on the Raspberry Pi.

Using InferredAvroSchema, I created a schema for the GPS unit and stored it in the Hortonworks Schema Registry. Below is the provenance event for a typical GPS message sent; you can see which shell script we ran and from which host. In Apache NiFi we process the message: routing it to the correct place, setting a schema, and querying it for a latitude. Then we convert the Avro record to ORC to save as a Hive table.

MiniFi requires that we change the NiFi-created template to a configuration file via the command-line MiniFi Toolkit:

minifi-toolkit-0.2.0/bin/config.sh transform gpstensorflowpiminifi2.xml config.yml
scp config.yml pi@192.168.1.167:/opt/demo/minifi/conf/
./gpsrun.sh
{"ipaddress": "192.168.1.167", "utc": "2017-08-21T20:00:06.000Z", "epx": "10.301", "epv": "50.6", "serialno": "000000002a1f1e34", "altitude": "38.393", "cputemp": 58.0, "eps": "37.16",
"longitude": "-74.52923472", "ts": "2017-08-21 20:00:03", "public_ip": "71.168.184.247", "track": "236.6413", "host": "vid5", "mode": "3", "time": "2017-08-21T20:00:06.000Z",
"latitude": "40.268194845", "climb": "-0.054", "speed": "0.513", "ept": "0.005"}
2017-08-21 16:20:33,199 INFO [Timer-Driven Process Thread-6] o.apache.nifi.remote.client.PeerSelector New Weighted Distribution of Nodes:
PeerStatus[hostname=HW13125.local,port=8080,secure=false,flowFileCount=0] will receive 100.0% of data
2017-08-21 16:20:34,261 INFO [Timer-Driven Process Thread-6] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=MiniFi TensorFlowImage,targets=http://hw13125.local:8080/nifi]
Successfully sent [StandardFlowFileRecord[uuid=f84767ec-c627-4b63-9e88-bba1dfb4eb9b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1503346615133-2, container=default, section=2], offset=2198, length=441],offset=0,name=3460526041973,size=441]] (441 bytes) to http://HW13125.local:8080/nifi-api in 117 milliseconds at a rate of 3.65 KB/sec
{"ipaddress": "192.168.1.167", "utc": "2017-08-21T20:17:21.010Z", "epx": "10.301", "epv": "50.6", "serialno": "000000002a1f1e34",
"altitude": "43.009", "cputemp": 52.0, "eps": "1.33", "longitude": "-74.529242206", "ts": "2017-08-21 20:16:55", "public_ip": "71.168.184.247",
"track": "190.894", "host": "vid5", "mode": "3", "time": "2017-08-21T20:17:21.010Z", "latitude": "40.268159632",
"climb": "0.022", "speed": "0.353", "ept": "0.005"}

To collect our GPS information, below is my script, which is called by MiniFi.

Source:

#!/usr/bin/python
# Needs: sudo apt-get install gpsd gpsd-clients python-gps ntp
# Based on
# Author: Callum Pritchard, Joachim Hummel
# Project Name: Flick 3D Gesture
# Project Description: Sending Flick 3D Gesture sensor data to mqtt
# Version Number: 0.1
# Date: 15/6/17
# Release State: Alpha testing
# Changes: Created
# Based on
# Written by Dan Mandle http://dan.mandle.me September 2012
# License: GPL 2.0
# Based on: https://hortonworks.com/tutorial/analyze-iot-weather-station-data-via-connected-data-architecture/section/3/

import os
import json
import sys, socket
import subprocess
import threading
import time
import datetime
import colorsys
import signal
import urllib2
from gps import *
from time import sleep
from time import gmtime, strftime

#### Initialization
# yyyy-mm-dd hh:mm:ss
currenttime = strftime("%Y-%m-%d %H:%M:%S", gmtime())
external_IP_and_port = ('198.41.0.4', 53)  # a.root-servers.net
socket_family = socket.AF_INET
host = os.uname()[1]

def getCPUtemperature():
    res = os.popen('vcgencmd measure_temp').readline()
    return res.replace("temp=", "").replace("'C\n", "")

def IP_address():
    try:
        s = socket.socket(socket_family, socket.SOCK_DGRAM)
        s.connect(external_IP_and_port)
        answer = s.getsockname()
        s.close()
        return answer[0] if answer else None
    except socket.error:
        return None

# Get Raspberry Pi serial number
def get_serial():
    # Extract serial from cpuinfo file
    cpuserial = "0000000000000000"
    try:
        f = open('/proc/cpuinfo', 'r')
        for line in f:
            if line[0:6] == 'Serial':
                cpuserial = line[10:26]
        f.close()
    except:
        cpuserial = "ERROR000000000"
    return cpuserial

# Get Raspberry Pi public IP via IPIFY REST call
def get_public_ip():
    ip = json.load(urllib2.urlopen('https://api.ipify.org/?format=json'))['ip']
    return ip

cpuTemp = int(float(getCPUtemperature()))
ipaddress = IP_address()
# Attempt to get public IP
public_ip = get_public_ip()
# Attempt to get Raspberry Pi serial number
serial = get_serial()
gpsd = None

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        global gpsd  # bring it into scope
        gpsd = gps(mode=WATCH_ENABLE)  # start the stream of info
        self.current_value = None
        self.running = True  # setting the thread running to true

    def run(self):
        global gpsd
        while gpsp.running:
            gpsd.next()  # this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
    gpsp = GpsPoller()  # create the thread
    stopthis = False
    try:
        gpsp.start()  # start it up
        while not stopthis:
            if gpsd.fix.latitude > 0:
                row = {'latitude': str(gpsd.fix.latitude),
                       'longitude': str(gpsd.fix.longitude),
                       'utc': str(gpsd.utc),
                       'time': str(gpsd.fix.time),
                       'altitude': str(gpsd.fix.altitude),
                       'eps': str(gpsd.fix.eps),
                       'epx': str(gpsd.fix.epx),
                       'epv': str(gpsd.fix.epv),
                       'ept': str(gpsd.fix.ept),
                       'speed': str(gpsd.fix.speed),
                       'climb': str(gpsd.fix.climb),
                       'track': str(gpsd.fix.track),
                       'ts': currenttime,
                       'public_ip': public_ip,
                       'serialno': serial,
                       'host': host,
                       'cputemp': round(cpuTemp, 2),
                       'ipaddress': ipaddress,
                       'mode': str(gpsd.fix.mode)}
                json_string = json.dumps(row)
                print json_string
                gpsp.running = False
                stopthis = True
    except (KeyboardInterrupt, SystemExit):  # when you press ctrl+c
        gpsp.running = False
        gpsp.join()  # wait for the thread to finish what it's doing

Link: https://github.com/tspannhw/dws2017sydney
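The records the script prints are JSON with all numeric GPS fields serialized as strings (a side effect of the str() calls in the script). A consumer can normalize them back to floats; here is a small sketch (field names taken from the sample output above; the function name is my own):

```python
import json

NUMERIC_FIELDS = ('latitude', 'longitude', 'altitude', 'speed', 'climb', 'track',
                  'eps', 'epx', 'epv', 'ept')

def parse_gps_record(json_string):
    """Load a GPS record and convert its string-typed numeric fields to floats."""
    row = json.loads(json_string)
    for key in NUMERIC_FIELDS:
        if key in row:
            row[key] = float(row[key])
    return row

sample = '{"latitude": "40.268194845", "longitude": "-74.52923472", "speed": "0.513", "host": "vid5"}'
rec = parse_gps_record(sample)
print(rec['latitude'], rec['host'])  # 40.268194845 vid5
```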
08-21-2017 07:53 PM
I uploaded a sample of the data in case you don't want to generate your own with Mockaroo. It's in simplecsv.txt. Drop this file in the GetFile directory.
08-14-2017 09:27 PM | 5 Kudos
Sometimes you want to trigger events with a click on a special touchpad or device mounted somewhere. This could be in a factory, on a door or at your desk. For me it's a small device on my desk that I can use to trigger events. I have this running every 15 seconds looking for gestures. I could put it in an infinite loop, but Python and RPI could leak some memory. We are very constrained, so I am trying to keep it a little more minimal. I keep my batch duration to 15 seconds and my run schedule to 15 seconds. Let's Build A Simple NiFi Flow to Receive The JSON Data and React To It In my RouteOnContent, I just look for the word "center". I have thought of many options for running sql, doing a backup, etc.. Build The MiniFi Flow in NiFi Downloaded minifi 0.2.0 and minifi 0.2.0 toolkit (you can use newer, but make sure you install the same version on the device you are going to move your config.yml to). minifi-toolkit-0.2.0/bin/config.sh transform minififlick.xml config.yml Then SCP that config.yml and the minifi-*.zip to your device. Unzip (or tar-cvf it). Then you can run. This requires Java 8 JDK installed and running on your machine. The Oracle version runs best on RPI. Let's Install and Run MiniFi cd /opt/demo/minifi-0.2.0
bin/minifi.sh install
bin/minifi.sh start Example Message {"flick": "center", "host": "herrflick", "ipaddress": "192.168.1.185", "ts": "2017-08-14 21:19:21", "cputemp": 47.0} The important data is flick which is the gesture made (click, tap, movement, double click, etc...) The other data is one's I always like to grab for devices (hostname, IP Address, timestamp and CPU temperature). Since flick = center, we send a Slack message We could do just about anything you want in the flow based on the trigger. Start backups, send system information, anything you want to trigger on demand. or Source Code: #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Based on
#Author: Callum Pritchard, Joachim Hummel
#Project Name: Flick 3D Gesture
#Project Description: Sending Flick 3D Gesture sensor data to mqtt
#Version Number: 0.1
#Date: 15/6/17
#Release State: Alpha testing
#Changes: Created
import time
import colorsys
import os
import json
import sys, socket
import subprocess
import datetime
from time import sleep
from time import gmtime, strftime
import signal
import flicklib
from curses import wrapper
some_value = 5000
flicktxt = ''
#### Initialization
# yyyy-mm-dd hh:mm:ss
currenttime= strftime("%Y-%m-%d %H:%M:%S",gmtime())
external_IP_and_port = ('198.41.0.4', 53) # a.root-servers.net
socket_family = socket.AF_INET
host = os.uname()[1]
def getCPUtemperature():
res = os.popen('vcgencmd measure_temp').readline()
return(res.replace("temp=","").replace("'C\n",""))
def IP_address():
try:
s = socket.socket(socket_family, socket.SOCK_DGRAM)
s.connect(external_IP_and_port)
answer = s.getsockname()
s.close()
return answer[0] if answer else None
except socket.error:
return None
def message(publisher, value):
print value
@flicklib.move()
def move(x, y, z):
global xyztxt
xyztxt = '{:5.3f} {:5.3f} {:5.3f}'.format(x,y,z)
@flicklib.flick()
def flick(start,finish):
global flicktxt
flicktxt = 'FLICK-' + start[0].upper() + finish[0].upper()
message('flick',flicktxt)
@flicklib.airwheel()
def spinny(delta):
global some_value
global airwheeltxt
global flicktxt
some_value += delta
if some_value < 0:
some_value = 0
if some_value > 10000:
some_value = 10000
airwheeltxt = str(some_value/100)
flicktxt = airwheeltxt
@flicklib.double_tap()
def doubletap(position):
global doubletaptxt
global flicktxt
doubletaptxt = position
flicktxt = doubletaptxt
@flicklib.tap()
def tap(position):
global taptxt
global flicktxt
taptxt = position
flicktxt = taptxt
@flicklib.touch()
def touch(position):
global touchtxt
global flicktxt
touchtxt = position
flicktxt = touchtxt
def main():
global xyztxt
global flicktxt
global airwheeltxt
global touchtxt
global taptxt
global doubletaptxt
flickcount = 0
airwheeltxt = ''
airwheelcount = 0
touchtxt = ''
touchcount = 0
taptxt = ''
tapcount = 0
doubletaptxt = ''
doubletapcount = 0
time.sleep(0.1)
while flickcount < 100:
if (flicktxt != "") :
flickcount += 100
cpuTemp=int(float(getCPUtemperature()))
ipaddress = IP_address()
row = { 'ts': currenttime, 'host': host, 'cputemp': round(cpuTemp,2), 'ipaddress': ipaddress, 'flick': flicktxt }
json_string = json.dumps(row)
print(json_string)
sys.exit()
time.sleep(0.1)
flickcount += 1
main()

See: https://github.com/tspannhw/rpi-rainbowhat

References:
https://github.com/PiSupply/Flick.git
Apps
https://www.pi-supply.com/make/flick-quick-start-faq/
https://github.com/unixweb/Flick
https://github.com/unixweb/Flick/blob/master/Flick.py
https://github.com/tspannhw/rpi-rainbowhat
https://github.com/tspannhw/rpi-rainbowhat/blob/master/minifi.py
https://github.com/tspannhw/rpi-sensehat-mqtt-nifi
https://github.com/tspannhw/rpi-sensehat-minifi-python
https://github.com/tspannhw/rpi-flickhat-minifi/tree/master
https://nifi.apache.org/minifi/getting-started.html
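As a quick aside on the RouteOnContent step described above: the routing decision is just a substring match on the message content. This is not NiFi code — only a hedged, stand-alone Python sketch of that check against the example message the flick script emits:

```python
import json

# The same shape of message the flick script prints
msg = '{"flick": "center", "host": "herrflick", "ipaddress": "192.168.1.185", "ts": "2017-08-14 21:19:21", "cputemp": 47.0}'

def route_on_content(flow_file_content):
    # RouteOnContent-style substring match on the word "center";
    # in the real flow the matched relationship feeds the Slack step
    if "center" in flow_file_content:
        return "matched"
    return "unmatched"

record = json.loads(msg)
print(route_on_content(msg), record["flick"])
```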
Labels:
08-06-2017
01:40 PM
5 Kudos
Python Word Cloud

Integrating existing Python libraries and scripts is very easy in Apache NiFi. I add the library for both versions of Python I have on my system, while moving all new scripts to the 3.x branch.

Install the library for both Python 2.7 and 3.5:

pip install wordcloud
pip3 install wordcloud

Example Usage

echo "NiFi\nHadoop\nSpark\n" | wordcloud_cli.py --imagefile wordcloud.png

For use in NiFi, I wrap my call with a shell script, wc.sh:

echo $1 | tr " " "\n" | wordcloud_cli.py

This will build a PNG for me that I can store in a file system or in HDFS; I updated the filename to add png at the end. This takes a parameter to a shell script (our tweet) and converts it into words usable for a word cloud. You can use other sources or other methods of splitting words. I am pulling Twitter messages, so I use ReplaceText to replace the flow file with ${msg}, which is just the tweet. Then I execute the Python WordCloud CLI.

Example

References:
https://amueller.github.io/word_cloud/auto_examples/a_new_hope.html https://amueller.github.io/word_cloud/auto_examples/simple.html#sphx-glr-auto-examples-simple-py https://github.com/amueller/word_cloud
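The word-splitting step in wc.sh above (echo $1 | tr " " "\n") can also be done in Python if you'd rather avoid the shell pipeline. This is a minimal standard-library sketch (it uses the plain str.split, not the wordcloud API) that turns a tweet into newline-separated words suitable for piping to wordcloud_cli.py:

```python
def split_words(text):
    # Equivalent of the wc.sh pipeline: echo $1 | tr " " "\n"
    # (empty tokens from repeated spaces are dropped here)
    return "\n".join(w for w in text.split(" ") if w)

print(split_words("NiFi Hadoop Spark"))
```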
Labels:
08-06-2017
12:03 AM
4 Kudos
Technology: Python, TensorFlow, Apache Hive, MiniFi, NiFi, HDFS, WebHDFS, Zeppelin, SQL, Raspberry Pi, Pi Camera, S2S.

Apache NiFi For Ingest of Images and TensorFlow Analysis from the Edge (Raspberry Pi 3)

The Apache NiFi ingestion flow is straightforward. MiniFi sends us flow files over S2S from the RPI, which consist of two types of messages. One is a JSON-formatted file of metadata and TensorFlow analysis of an image. The second is the actual image captured. We route on the filename attribute to handle each file type appropriately. We send the image to HDFS for storage and retrieval via WebHDFS. For the JSON, I add a schema and a JSON content-type and split up the file; you can see I have some non-JSON junk in there I want pulled out. Then I send my JSON record to QueryRecord to filter out any empty messages. This produces AVRO files from the JSON, which I convert to ORC and store in HDFS. From there it's easy to query my new deep-learning-produced multimedia data via standard SQL.

Routing by FileName Attribute

Split Into Lines

Extract Only JSON Data

ORC Configuration

Query Record

MiniFi Flow Installed on Raspberry Pi

The flow on the Pi is simple. We have three processes running. The first executes our classify.sh to activate the PiCamera, take a picture, and then feed the picture to TensorFlow. The CleanupLogs process is a shell script that deletes old logs. The GetFile reads any image produced by the first shell execute and sends the image to NiFi.

Hive DDL:

CREATE EXTERNAL TABLE IF NOT EXISTS tfimage (image STRING, ts STRING, host STRING, score STRING, human_string STRING, node_id FLOAT) STORED AS ORC
LOCATION '/tfimage'

Hive SQL:

%jdbc(hive)
select ts, score, human_string,
concat('%html <img width=200 height=200 src="http://princeton10.field.hortonworks.com:50070/webhdfs/v1/tfimagefiles/', SUBSTR(image,18), '?op=OPEN">') as cam_image
from tfimage where image like '%2017%'

Shell classify.sh

python -W ignore /opt/demo/classify_image.py

Modified TensorFlow Example Python

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Simple image classification with Inception.
Run image classification with Inception trained on ImageNet 2012 Challenge data
set.
This program creates a graph from a saved GraphDef protocol buffer,
and runs inference on an input JPEG image. It outputs human readable
strings of the top 5 predictions along with their probabilities.
Change the --image_file argument to any jpg image to compute a
classification of that image.
Please see the tutorial and website for a detailed description of how
to use this script to perform image recognition.
https://tensorflow.org/tutorials/image_recognition/
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os.path
import re
import sys
import tarfile
import os
import datetime
import math
import random, string
import base64
import json
import time
import picamera
from time import sleep
from time import gmtime, strftime
import numpy as np
from six.moves import urllib
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
FLAGS = None
# pylint: disable=line-too-long
DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
# pylint: enable=line-too-long
# yyyy-mm-dd hh:mm:ss
currenttime= strftime("%Y-%m-%d %H:%M:%S",gmtime())
host = os.uname()[1]
def randomword(length):
return ''.join(random.choice(string.lowercase) for i in range(length))
class NodeLookup(object):
"""Converts integer node ID's to human readable labels."""
def __init__(self,
label_lookup_path=None,
uid_lookup_path=None):
if not label_lookup_path:
label_lookup_path = os.path.join(
FLAGS.model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')
if not uid_lookup_path:
uid_lookup_path = os.path.join(
FLAGS.model_dir, 'imagenet_synset_to_human_label_map.txt')
self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
def load(self, label_lookup_path, uid_lookup_path):
"""Loads a human readable English name for each softmax node.
Args:
label_lookup_path: string UID to integer node ID.
uid_lookup_path: string UID to human-readable string.
Returns:
dict from integer node ID to human-readable string.
"""
if not tf.gfile.Exists(uid_lookup_path):
tf.logging.fatal('File does not exist %s', uid_lookup_path)
if not tf.gfile.Exists(label_lookup_path):
tf.logging.fatal('File does not exist %s', label_lookup_path)
# Loads mapping from string UID to human-readable string
proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
uid_to_human = {}
p = re.compile(r'[n\d]*[ \S,]*')
for line in proto_as_ascii_lines:
parsed_items = p.findall(line)
uid = parsed_items[0]
human_string = parsed_items[2]
uid_to_human[uid] = human_string
# Loads mapping from string UID to integer node ID.
node_id_to_uid = {}
proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
for line in proto_as_ascii:
if line.startswith(' target_class:'):
target_class = int(line.split(': ')[1])
if line.startswith(' target_class_string:'):
target_class_string = line.split(': ')[1]
node_id_to_uid[target_class] = target_class_string[1:-2]
# Loads the final mapping of integer node ID to human-readable string
node_id_to_name = {}
for key, val in node_id_to_uid.items():
if val not in uid_to_human:
tf.logging.fatal('Failed to locate: %s', val)
name = uid_to_human[val]
node_id_to_name[key] = name
return node_id_to_name
def id_to_string(self, node_id):
if node_id not in self.node_lookup:
return ''
return self.node_lookup[node_id]
def create_graph():
"""Creates a graph from saved GraphDef file and returns a saver."""
# Creates graph from saved graph_def.pb.
with tf.gfile.FastGFile(os.path.join(
FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
def run_inference_on_image(image):
"""Runs inference on an image.
Args:
image: Image file name.
Returns:
Nothing
"""
if not tf.gfile.Exists(image):
tf.logging.fatal('File does not exist %s', image)
image_data = tf.gfile.FastGFile(image, 'rb').read()
# Creates graph from saved GraphDef.
create_graph()
with tf.Session() as sess:
# Some useful tensors:
# 'softmax:0': A tensor containing the normalized prediction across
# 1000 labels.
# 'pool_3:0': A tensor containing the next-to-last layer containing 2048
# float description of the image.
# 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
# encoding of the image.
# Runs the softmax tensor by feeding the image_data as input to the graph.
softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
predictions = sess.run(softmax_tensor,
{'DecodeJpeg/contents:0': image_data})
predictions = np.squeeze(predictions)
# Creates node ID --> English string lookup.
node_lookup = NodeLookup()
top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
row = []
for node_id in top_k:
human_string = node_lookup.id_to_string(node_id)
score = predictions[node_id]
row.append( { 'node_id': node_id, 'image': image, 'host': host, 'ts': currenttime, 'human_string': str(human_string), 'score': str(score)} )
json_string = json.dumps(row)
print( json_string )
def maybe_download_and_extract():
"""Download and extract model tar file."""
dest_directory = FLAGS.model_dir
if not os.path.exists(dest_directory):
os.makedirs(dest_directory)
filename = DATA_URL.split('/')[-1]
filepath = os.path.join(dest_directory, filename)
if not os.path.exists(filepath):
def _progress(count, block_size, total_size):
sys.stdout.write('\r>> Downloading %s %.1f%%' % (
filename, float(count * block_size) / float(total_size) * 100.0))
sys.stdout.flush()
filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)
print()
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
tarfile.open(filepath, 'r:gz').extractall(dest_directory)
def main(_):
maybe_download_and_extract()
# Create unique image name
img_name = '/opt/demo/images/pi_image_{0}_{1}.jpg'.format(randomword(3),strftime("%Y%m%d%H%M%S",gmtime()))
# Capture Image from Pi Camera
try:
camera = picamera.PiCamera()
camera.resolution = (1024,768)
camera.annotate_text = " Stored with Apache NiFi "
camera.capture(img_name, resize=(600,400))
pass
finally:
camera.close()
# image = (FLAGS.image_file if FLAGS.image_file else
# os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
run_inference_on_image(img_name)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# classify_image_graph_def.pb:
# Binary representation of the GraphDef protocol buffer.
# imagenet_synset_to_human_label_map.txt:
# Map from synset ID to a human readable string.
# imagenet_2012_challenge_label_map_proto.pbtxt:
# Text representation of a protocol buffer mapping a label to synset ID.
parser.add_argument(
'--model_dir',
type=str,
default='/tmp/imagenet',
help=""" Path to classify_image_graph_def.pb,
imagenet_synset_to_human_label_map.txt, and
imagenet_2012_challenge_label_map_proto.pbtxt. """
)
parser.add_argument(
'--image_file',
type=str,
default='',
help='Absolute path to image file.'
)
parser.add_argument(
'--num_top_predictions',
type=int,
default=5,
help='Display this many predictions.'
)
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
Key Additions to Python

row.append( { 'node_id': node_id, 'image': image, 'host': host, 'ts': currenttime, 'human_string': str(human_string), 'score': str(score)} )
json_string = json.dumps(row)
print( json_string )

Image Captured by Camera (Also Added "Stored with Apache NiFi" via Python)

References: http://regexr.com/
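To make the key additions above concrete, here is a small stand-alone sketch that builds the same row structure and serializes it. The field values are illustrative only, not real inference output — in the actual script, node_id, human_string, and score come from the TensorFlow inference loop:

```python
import json

# Illustrative values only -- the real node_id, human_string and score
# come from the TensorFlow inference loop.
row = []
row.append({'node_id': 123,
            'image': '/opt/demo/images/pi_image_abc_20170806000300.jpg',
            'host': 'picamera1',
            'ts': '2017-08-06 00:03:00',
            'human_string': 'coffee mug',
            'score': '0.83'})
json_string = json.dumps(row)
print(json_string)

# The QueryRecord step in the NiFi flow filters out empty messages;
# an equivalent check in plain Python:
records = json.loads(json_string)
non_empty = [r for r in records if r.get('human_string')]
```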