09-10-2017 02:15 PM · 1 Kudo
Part 1: Installation and Setup

NVIDIA Jetson TX1 specifications:

GPU: NVIDIA Maxwell™, 256 CUDA cores
CPU: Quad ARM® A57 / 2 MB L2
Video: 4K x 2K 30 Hz encode (HEVC), 4K x 2K 60 Hz decode (10-bit support)
Memory: 4 GB 64-bit LPDDR4, 25.6 GB/s
Display: 2x DSI, 1x eDP 1.4 / DP 1.2 / HDMI
Storage: 16 GB eMMC, SDIO, SATA
Camera: up to 6 cameras (2-lane), CSI2 D-PHY 1.1 (1.5 Gbps/lane)
Other I/O: UART, SPI, I2C, I2S, GPIOs
Connectivity: 1 Gigabit Ethernet, 802.11ac WLAN, Bluetooth

Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images
Part 4: Using MiniFi to Send the Data and NiFi to Consume and Convert

Build the Config File

minifi-toolkit-0.2.0/bin/config.sh transform TensorRTMiniFi.xml config.yml

Note: Do not install MiniFi as a service; I had issues with that on this version of Ubuntu. You can check a processor's status from the command line with:

minifi.sh flowStatus processor:TailFile:health,stats,bulletins

References:
https://nifi.apache.org/minifi/system-admin-guide.html
https://nifi.apache.org/minifi/getting-started.html
https://unsplash.it/
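A minimal sketch of the full deploy cycle implied here, assuming MiniFi is unzipped under /opt/demo/minifi on the Jetson (the host name and paths are illustrative assumptions, not from the original post):

#!/bin/bash
# Convert the exported NiFi template to a MiniFi config
minifi-toolkit-0.2.0/bin/config.sh transform TensorRTMiniFi.xml config.yml
# Copy it to the Jetson (hypothetical host/path) and restart MiniFi to pick it up
scp config.yml nvidia@jetson-tx1:/opt/demo/minifi/conf/
ssh nvidia@jetson-tx1 '/opt/demo/minifi/bin/minifi.sh restart'
# Check a processor's health once the flow is running
ssh nvidia@jetson-tx1 '/opt/demo/minifi/bin/minifi.sh flowStatus processor:TailFile:health,stats,bulletins'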
09-09-2017 02:02 PM · 2 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images

NVIDIA also provides a good example C++ program for detecting faces, so we try that out next. You can add more training data to improve results, but it found me okay. In the next step we'll connect to MiniFi.

Shell Source:

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./facedetect.sh
detectnet-console
args (4): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console] 1 [backupimages/tim.jpg] 2 [images/outputtim.png] 3 [facenet]
detectNet -- loading detection network model from:
-- prototxt networks/facenet-120/deploy.prototxt
-- model networks/facenet-120/snapshot_iter_24000.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- threshold 0.500000
-- batch_size 2
[GIE] attempting to open cache file networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/facenet-120/snapshot_iter_24000.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input binding index: 0
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel input dims (b=2 c=3 h=450 w=450) size=4860000
[cuda] cudaAllocMapped 4860000 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage binding index: 1
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 0 coverage dims (b=2 c=1 h=28 w=28) size=6272
[cuda] cudaAllocMapped 6272 bytes, CPU 0x1011a0000 GPU 0x1011a0000
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes binding index: 2
[GIE] networks/facenet-120/snapshot_iter_24000.caffemodel output 1 bboxes dims (b=2 c=4 h=28 w=28) size=25088
[cuda] cudaAllocMapped 25088 bytes, CPU 0x1012a0000 GPU 0x1012a0000
networks/facenet-120/snapshot_iter_24000.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x1013a0000 GPU 0x1013a0000
maximum bounding boxes: 3136
[cuda] cudaAllocMapped 50176 bytes, CPU 0x1012a6200 GPU 0x1012a6200
[cuda] cudaAllocMapped 12544 bytes, CPU 0x1011a1a00 GPU 0x1011a1a00
loaded image backupimages/tim.jpg (400 x 400) 2560000 bytes
[cuda] cudaAllocMapped 2560000 bytes, CPU 0x1014a0000 GPU 0x1014a0000
detectnet-console: beginning processing network (1505047556083)
[GIE] layer deploy_transform input reformatter 0 - 4.594114 ms
[GIE] layer deploy_transform - 1.522865 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 24.272917 ms
[GIE] layer pool1/3x3_s2 - 4.988593 ms
[GIE] layer pool1/norm1 - 1.322396 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 2.462032 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 29.438957 ms
[GIE] layer conv2/norm2 - 3.703281 ms
[GIE] layer pool2/3x3_s2 - 3.817292 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 4.193281 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 11.074271 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 2.207708 ms
[GIE] layer inception_3a/pool - 1.708906 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 1.522240 ms
[GIE] layer inception_3a/1x1 copy - 0.194323 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 8.700052 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 21.696459 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 10.463386 ms
[GIE] layer inception_3b/pool - 2.265937 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 1.910729 ms
[GIE] layer inception_3b/1x1 copy - 0.354375 ms
[GIE] layer pool3/3x3_s2 - 1.903125 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 4.471615 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 6.044531 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.968907 ms
[GIE] layer inception_4a/pool - 1.064114 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 1.103750 ms
[GIE] layer inception_4a/1x1 copy - 0.152396 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 4.764219 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 4.324583 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 1.413073 ms
[GIE] layer inception_4b/pool - 1.132969 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 1.176146 ms
[GIE] layer inception_4b/1x1 copy - 0.132864 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 4.738177 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 5.503698 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 1.394011 ms
[GIE] layer inception_4c/pool - 1.132656 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 1.157812 ms
[GIE] layer inception_4c/1x1 copy - 0.111927 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 4.727709 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 6.811302 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 1.772187 ms
[GIE] layer inception_4d/pool - 1.132084 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 1.161718 ms
[GIE] layer inception_4d/1x1 copy - 0.103438 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 7.476458 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 12.779844 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 3.287656 ms
[GIE] layer inception_4e/pool - 1.165417 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 2.159844 ms
[GIE] layer inception_4e/1x1 copy - 0.195000 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 11.466510 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 12.746927 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 3.235729 ms
[GIE] layer inception_5a/pool - 1.818386 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 3.259010 ms
[GIE] layer inception_5a/1x1 copy - 0.194844 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 14.704739 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 11.462292 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 4.753594 ms
[GIE] layer inception_5b/pool - 1.817604 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 3.259792 ms
[GIE] layer inception_5b/1x1 copy - 0.274687 ms
[GIE] layer cvg/classifier - 2.113386 ms
[GIE] layer coverage/sig - 0.059687 ms
[GIE] layer coverage/sig output reformatter 0 - 0.042969 ms
[GIE] layer bbox/regressor - 2.062864 ms
[GIE] layer bbox/regressor output reformatter 0 - 0.053386 ms
[GIE] layer network time - 301.203705 ms
detectnet-console: finished processing network (1505047556394)
1 bounding boxes detected
bounding box 0 (17.527779, -34.222221) (193.388885, 238.500000) w=175.861115 h=272.722229
draw boxes 1 0 0.000000 200.000000 255.000000 100.000000
detectnet-console: writing 400x400 image to 'images/outputtim.png'
detectnet-console: successfully wrote 400x400 image to 'images/outputtim.png'
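From the args echoed at the top of the log, facedetect.sh is evidently a one-line wrapper around detectnet-console. A minimal sketch (the actual script may differ):

#!/bin/bash
# facedetect.sh -- sketch reconstructed from the echoed args; the real script may differ
# Usage: detectnet-console <input-image> <output-image> <network>
./detectnet-console backupimages/tim.jpg images/outputtim.png facenet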
References:
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
https://unsplash.com/search/photos/face
networks/bvlc_googlenet.caffemodel
https://github.com/JunhongXu/tx1-neural-navigation
https://github.com/jetsonhacks?tab=repositories
https://github.com/dusty-nv/jetson-inference
https://github.com/dusty-nv/jetson-inference/blob/master/docs/deep-learning.md
https://github.com/NVIDIA/DIGITS/tree/master/examples/semantic-segmentation
https://github.com/jetsonhacks/installTensorFlowTX1
https://github.com/open-horizon/cogwerx-jetson-tx1/wiki/Yolo-and-Darknet-on-the-TX1
https://nvidia.qwiklab.com/focuses/preview/223?locale=en
http://www.jetsonhacks.com/2016/12/30/tensorflow-nvidia-jetson-tx1-development-kit/
http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/3.0/jetpack_l4t_install.htm
https://developer.nvidia.com/embedded/buy/jetson-tx1-devkit
https://developer.nvidia.com/cublas
09-09-2017 01:56 PM · 3 Kudos
Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet

Use This Project: https://github.com/dusty-nv/jetson-inference

This will create a C++ executable to run classification for ImageNet with TensorRT.

Shell Call Example:

root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./runclassify.sh
imagenet-console
args (3): 0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console] 1 [backupimages/granny_smith_1.jpg] 2 [images/output_0.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
loaded image backupimages/granny_smith_1.jpg (1000 x 1000) 16000000 bytes
[cuda] cudaAllocMapped 16000000 bytes, CPU 0x100f20000 GPU 0x100f20000
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 1.207813 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 6.144531 ms
[GIE] layer pool1/3x3_s2 - 1.301354 ms
[GIE] layer pool1/norm1 - 0.412240 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.737552 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 11.184843 ms
[GIE] layer conv2/norm2 - 1.052657 ms
[GIE] layer pool2/3x3_s2 - 0.946510 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 1.299844 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 3.431562 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.697657 ms
[GIE] layer inception_3a/pool - 0.449479 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.542916 ms
[GIE] layer inception_3a/1x1 copy - 0.074375 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 2.582917 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 6.324167 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 3.262968 ms
[GIE] layer inception_3b/pool - 0.586719 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.657552 ms
[GIE] layer inception_3b/1x1 copy - 0.111511 ms
[GIE] layer pool3/3x3_s2 - 0.608333 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 1.589531 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 1.027396 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.420052 ms
[GIE] layer inception_4a/pool - 0.306563 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.464583 ms
[GIE] layer inception_4a/1x1 copy - 0.060417 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 1.416875 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 1.157135 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.555886 ms
[GIE] layer inception_4b/pool - 0.331354 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.485677 ms
[GIE] layer inception_4b/1x1 copy - 0.056041 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 1.454011 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 2.771198 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.554844 ms
[GIE] layer inception_4c/pool - 0.502604 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.486198 ms
[GIE] layer inception_4c/1x1 copy - 0.050833 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 1.419271 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 1.781406 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.680052 ms
[GIE] layer inception_4d/pool - 0.333542 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.483854 ms
[GIE] layer inception_4d/1x1 copy - 0.048229 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 2.225573 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 4.142656 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.954427 ms
[GIE] layer inception_4e/pool - 0.332917 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.667344 ms
[GIE] layer inception_4e/1x1 copy - 0.071666 ms
[GIE] layer pool4/3x3_s2 - 0.275625 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 1.685417 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 2.085990 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.391198 ms
[GIE] layer inception_5a/pool - 0.187552 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.964791 ms
[GIE] layer inception_5a/1x1 copy - 0.041094 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 2.327656 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 1.884532 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 1.364895 ms
[GIE] layer inception_5b/pool - 0.189219 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.453490 ms
[GIE] layer inception_5b/1x1 copy - 0.045781 ms
[GIE] layer pool5/7x7_s1 - 0.743281 ms
[GIE] layer loss3/classifier input reformatter 0 - 0.042552 ms
[GIE] layer loss3/classifier - 0.848386 ms
[GIE] layer loss3/classifier output reformatter 0 - 0.042969 ms
[GIE] layer prob - 0.092343 ms
[GIE] layer prob output reformatter 0 - 0.042552 ms
[GIE] layer network time - 84.158958 ms
class 0948 - 1.000000 (Granny Smith)
imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x101fa0000 GPU 0x101fa0000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
imagenet-console: attempting to save output image to 'images/output_0.jpg'
imagenet-console: completed saving 'images/output_0.jpg'
shutting down...
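As an aside: judging by the args echoed at the top of this log, runclassify.sh is just a thin wrapper around imagenet-console. A minimal sketch (the actual script may differ):

#!/bin/bash
# runclassify.sh -- sketch reconstructed from the echoed args; the real script may differ
# Usage: imagenet-console <input-image> <output-image>
./imagenet-console backupimages/granny_smith_1.jpg images/output_0.jpg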
Input File / Output File

The output image is annotated with the highest-probability label for what's in the image. This one says sunscreen, which is a bit weird; I am guessing it's because my original image is very sunny.

Source Code
https://github.com/tspannhw/jetsontx1-TensorRT

Resources
http://www.jetsonhacks.com/2017/01/28/install-samsung-ssd-on-nvidia-jetson-tx1/
https://github.com/PhilipChicco/pedestrianSys
https://github.com/jetsonhacks?tab=repositories
https://github.com/Netzeband/JetsonTX1_im2txt
https://github.com/DJTobias/Cherry-Autonomous-Racecar
https://github.com/jetsonhacks/postFlashTX1
https://github.com/jetsonhacks/installTensorFlowTX1
http://www.jetsonhacks.com/2016/12/21/jetson-tx1-swap-file-and-development-preparation/
09-09-2017 01:40 PM · 5 Kudos
Use Case

Ingesting sensors, images, voice, and video from moving vehicles and running deep learning in the vehicle itself, then transporting the data and messages to remote data centers via Apache MiniFi and NiFi over secure site-to-site (S2S) HTTPS.

Background

The NVIDIA Jetson TX1 is a specialized developer kit for running a powerful GPU as an embedded device for robots, UAVs, and other specialized platforms. I envision its use in field trucks for intermodal, utilities, telecommunications, delivery services, government, and other industries with field vehicles.

Installation and Setup

You will need a workstation running Ubuntu 16 with enough disk space and network access. It will be used to download all the software and push it over the network to your NVIDIA Jetson TX1. You can download Ubuntu here: https://www.ubuntu.com/download/desktop. Fortunately I had a mini PC with 4 GB of RAM that I reformatted with Ubuntu to be the host PC for building my Jetson. You cannot run this from a Mac or Windows machine. You will need a monitor, mouse, and keyboard for your host machine and another set for your NVIDIA Jetson.

First, boot your NVIDIA Jetson, set up WiFi networking, and make sure your monitor, keyboard, and mouse work. Then download the latest NVIDIA JetPack on your host Ubuntu machine: https://developer.nvidia.com/embedded/jetpack. The one I used was JetPack 3.1, which included:

64-bit Ubuntu 16.04
cuDNN 6.0
TensorRT 2.1
CUDA 8.0

The initial login is ubuntu/ubuntu; after installation it will be nvidia/nvidia. Please change that password: security is important, and this GPU could do some serious bitcoin mining.

sudo su
apt update
apt-get install git zip unzip autoconf automake libtool curl zlib1g-dev maven swig bzip2
apt-get purge libopencv4tegra-dev libopencv4tegra
apt-get purge libopencv4tegra-repo
apt-get update
apt-get install build-essential
apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
apt-get install python2.7-dev
apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
apt-get install libgtkglext1 libgtkglext1-dev
apt-get install qtbase5-dev
apt-get install libv4l-dev v4l-utils qv4l2 v4l2ucp
cd $HOME/NVIDIA-INSTALL
./installer.sh

Download and run the NVIDIA Jetson TX1 JetPack from the host Ubuntu computer:

./JetPack-L4T-3.1-linux-x64.run

This will run on the host for probably an hour, and it requires a network connection between the two machines and a few reboots.

I added a 64 GB SD card, as the space on the Jetson is tiny. I would recommend adding a big SATA hard drive.

umount /dev/sdb1
mount -o umask=000 -t vfat /dev/sdb1 /media/

Turn on the fan on the Jetson:

echo 255 > /sys/kernel/debug/tegra_fan/target_pwm

Download MiniFi from https://nifi.apache.org/minifi/download.html or https://hortonworks.com/downloads/#dataflow. You will need to install JDK 8:

sudo add-apt-repository ppa:webupd8team/java
sudo apt update
sudo apt install oracle-java8-installer -y
download minifi-0.2.0-bin.zip
unzip *.zip
bin/minifi.sh start
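Once started, it's worth confirming MiniFi actually came up before wiring in a flow. A quick sanity check, assuming you are in the unzipped minifi-0.2.0 directory (minifi-app.log is the default log name; the flowStatus query below is the example from the MiniFi docs):

tail -f logs/minifi-app.log
bin/minifi.sh flowStatus processor:TailFile:health,stats,bulletins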
Shell Call Example:

/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/detectnet-console pic-007.png outputface7.png facenet

Source Code
https://github.com/tspannhw/jetsontx1-TensorRT

References
http://elinux.org/Jetson/Computer_Vision_Performance#Hardware_Acceleration_of_OpenCV
https://github.com/dusty-nv/jetson-inference#system-setup
https://github.com/dusty-nv/jetson-inference/blob/master/docs/deep-learning.md
https://github.com/NVIDIA/DIGITS/tree/master/examples/semantic-segmentation
https://developer.nvidia.com/tensorrt
https://community.hortonworks.com/articles/130814/sensors-and-image-capture-and-deep-learning-analys.html
https://github.com/dusty-nv/jetson-inference/blob/master/detectnet-console/detectnet-console.cpp
http://elinux.org/Jetson_TX1#Jetson_TX1_Module
http://www.jetsonhacks.com/2016/12/30/tensorflow-nvidia-jetson-tx1-development-kit/
https://developer.nvidia.com/embedded/learn/tutorials#collapseOne
http://www.jetsonhacks.com/2016/12/30/install-tensorflow-on-nvidia-jetson-tx1-development-kit/
http://www.nvidia.com/object/JetsonTX1DeveloperKitSE.html
http://www.nvidia.com/object/embedded-systems-dev-kits-modules.html
https://github.com/jetsonhacks/installTensorFlowTX1/blob/master/scripts/installDependencies.sh
http://docs.nvidia.com/jetpack-l4t/index.html#developertools/mobile/jetpack/l4t/3.0/jetpack_l4t_install.htm
https://github.com/dusty-nv/jetson-inference/blob/master/data/networks/ilsvrc12_synset_words.txt
https://github.com/tspannhw/rpi-rainbowhat/blob/master/minifi.py
https://nifi.apache.org/minifi/system-admin-guide.html

In the next part, we will classify images:

Part 2: Classifying Images with ImageNet
Part 3: Detecting Faces in Images
Part 4: Ingesting with MiniFi and NiFi
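Part 2 relies on the dusty-nv/jetson-inference project referenced above. A rough sketch of the standard CMake build from that project's README of the time (steps may differ for current versions):

# Clone and build jetson-inference on the Jetson
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build
cd build
cmake ../    # also downloads the pretrained networks
make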
08-28-2017 11:25 AM
Great!! I set up a Spark cluster with 2 workers. I save a DataFrame using partitionBy("column x") in Parquet format to a path on each worker. The problem is that I am able to save it, but when I want to read it back I get these errors:
- Could not read footer for file FileStatus ...
- unable to specify Schema ...
Any suggestions?
05-07-2018 01:30 PM
My custom processor is pretty easy to customize: https://github.com/tspannhw/nifi-extracttext-processor. You can tweak it to extract just some things; Apache Tika is very powerful.
08-21-2017 10:02 PM · 6 Kudos
The MiniFi flow executes two scripts: one calls TensorFlow Python code that captures an image from an OpenCV Raspberry Pi camera and runs Inception on it; that message is formatted as JSON and sent on. The second script reads values from a USB GPS sensor and outputs JSON. GetFile reads the Pi camera image produced by the ClassifyImages process. Cleanup Logs is a standalone timed script that cleans up old logs on the Raspberry Pi.

Using InferAvroSchema, I created a schema for the GPS unit and stored it in the Hortonworks Schema Registry. This is the provenance event for a typical GPS message sent; you can see what shell script we ran and from what host. In Apache NiFi we process the message: routing it to the correct place, setting a schema, and querying it for latitude. Then we convert the Avro record to ORC to save as a Hive table.

MiniFi requires that we convert the NiFi-created template to a configuration file via the command-line MiniFi Toolkit:

minifi-toolkit-0.2.0/bin/config.sh transform gpstensorflowpiminifi2.xml config.yml
scp config.yml pi@192.168.1.167:/opt/demo/minifi/conf/
./gpsrun.sh
{"ipaddress": "192.168.1.167", "utc": "2017-08-21T20:00:06.000Z", "epx": "10.301", "epv": "50.6", "serialno": "000000002a1f1e34", "altitude": "38.393", "cputemp": 58.0, "eps": "37.16",
"longitude": "-74.52923472", "ts": "2017-08-21 20:00:03", "public_ip": "71.168.184.247", "track": "236.6413", "host": "vid5", "mode": "3", "time": "2017-08-21T20:00:06.000Z",
"latitude": "40.268194845", "climb": "-0.054", "speed": "0.513", "ept": "0.005"}
2017-08-21 16:20:33,199 INFO [Timer-Driven Process Thread-6] o.apache.nifi.remote.client.PeerSelector New Weighted Distribution of Nodes:
PeerStatus[hostname=HW13125.local,port=8080,secure=false,flowFileCount=0] will receive 100.0% of data
2017-08-21 16:20:34,261 INFO [Timer-Driven Process Thread-6] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=MiniFi TensorFlowImage,targets=http://hw13125.local:8080/nifi]
Successfully sent [StandardFlowFileRecord[uuid=f84767ec-c627-4b63-9e88-bba1dfb4eb9b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1503346615133-2, container=default, section=2], offset=2198, length=441],offset=0,name=3460526041973,size=441]] (441 bytes) to http://HW13125.local:8080/nifi-api in 117 milliseconds at a rate of 3.65 KB/sec
{"ipaddress": "192.168.1.167", "utc": "2017-08-21T20:17:21.010Z", "epx": "10.301", "epv": "50.6", "serialno": "000000002a1f1e34",
"altitude": "43.009", "cputemp": 52.0, "eps": "1.33", "longitude": "-74.529242206", "ts": "2017-08-21 20:16:55", "public_ip": "71.168.184.247",
"track": "190.894", "host": "vid5", "mode": "3", "time": "2017-08-21T20:17:21.010Z", "latitude": "40.268159632",
"climb": "0.022", "speed": "0.353", "ept": "0.005"} To collect our GPS information, below is my script called by MiniFi. Source: #! /usr/bin/python
import os
import sys, socket
import threading
import subprocess
import datetime
import signal
import json
import colorsys
import time
import urllib2
from time import sleep, gmtime, strftime
from gps import *
# Need sudo apt-get install gpsd gpsd-clients python-gps ntp
# Based on
#Author: Callum Pritchard, Joachim Hummel
#Project Name: Flick 3D Gesture
#Project Description: Sending Flick 3D Gesture sensor data to mqtt
#Version Number: 0.1
#Date: 15/6/17
#Release State: Alpha testing
#Changes: Created
# Based on
# Written by Dan Mandle http://dan.mandle.me September 2012
# License: GPL 2.0
# Based on: https://hortonworks.com/tutorial/analyze-iot-weather-station-data-via-connected-data-architecture/section/3/
#### Initialization
# yyyy-mm-dd hh:mm:ss
currenttime= strftime("%Y-%m-%d %H:%M:%S",gmtime())
external_IP_and_port = ('198.41.0.4', 53) # a.root-servers.net
socket_family = socket.AF_INET
host = os.uname()[1]
def getCPUtemperature():
res = os.popen('vcgencmd measure_temp').readline()
return(res.replace("temp=","").replace("'C\n",""))
def IP_address():
try:
s = socket.socket(socket_family, socket.SOCK_DGRAM)
s.connect(external_IP_and_port)
answer = s.getsockname()
s.close()
return answer[0] if answer else None
except socket.error:
return None
# Get Raspberry Pi Serial Number
def get_serial():
# Extract serial from cpuinfo file
cpuserial = "0000000000000000"
try:
f = open('/proc/cpuinfo','r')
for line in f:
if line[0:6]=='Serial':
cpuserial = line[10:26]
f.close()
except:
cpuserial = "ERROR000000000"
return cpuserial
# Get Raspberry Pi Public IP via IPIFY Rest Call
def get_public_ip():
ip = json.load(urllib2.urlopen('https://api.ipify.org/?format=json'))['ip']
return ip
cpuTemp=int(float(getCPUtemperature()))
ipaddress = IP_address()
# Attempt to get Public IP
public_ip = get_public_ip()
# Attempt to get Raspberry Pi Serial Number
serial = get_serial()
gpsd = None
class GpsPoller(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
global gpsd #bring it in scope
gpsd = gps(mode=WATCH_ENABLE) #starting the stream of info
self.current_value = None
self.running = True #setting the thread running to true
def run(self):
global gpsd
while gpsp.running:
gpsd.next() #this will continue to loop and grab EACH set of gpsd info to clear the buffer
if __name__ == '__main__':
gpsp = GpsPoller() # create the thread
stopthis = False
try:
gpsp.start() # start it up
while not stopthis:
if gpsd.fix.latitude > 0:
row = { 'latitude': str(gpsd.fix.latitude),
'longitude': str(gpsd.fix.longitude),
'utc': str(gpsd.utc),
'time': str(gpsd.fix.time),
'altitude': str(gpsd.fix.altitude),
'eps': str(gpsd.fix.eps),
'epx': str(gpsd.fix.epx),
'epv': str(gpsd.fix.epv),
'ept': str(gpsd.fix.ept),
'speed': str(gpsd.fix.speed),
'climb': str(gpsd.fix.climb),
'track': str(gpsd.fix.track),
'ts': currenttime,
'public_ip': public_ip,
'serialno': serial,
'host': host,
'cputemp': round(cpuTemp,2),
'ipaddress': ipaddress,
'mode': str(gpsd.fix.mode)}
json_string = json.dumps(row)
print json_string
gpsp.running = False
stopthis = True
except (KeyboardInterrupt, SystemExit): #when you press ctrl+c
gpsp.running = False
gpsp.join() # wait for the thread to finish what it's doing

Link
https://github.com/tspannhw/dws2017sydney
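For completeness: gpsrun.sh, invoked earlier, is presumably just a thin wrapper that runs this Python script so MiniFi can capture the JSON it prints to stdout. A minimal sketch (the script path and name are hypothetical assumptions):

#!/bin/bash
# gpsrun.sh -- hypothetical wrapper; the path to the GPS script is assumed
python /opt/demo/gps.py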
08-23-2017 05:27 PM · 1 Kudo
@Sina Talebian It's a little disappointing when the resolution is a reinstall, but it works now 🙂 VMs can be fickle.
08-21-2017 07:53 PM
I uploaded a sample of the data in case you don't want to generate your own with Mockaroo. It's in simplecsv.txt. Drop this file in the GetFile directory.
02-14-2018 02:40 PM · 1 Kudo
@Bryan Bende & @Greg Keys The error seems to persist with HDF 3.0.2. Is there any news on a workaround or a fix?