Part 1: Installation and Setup
Part 2: Classifying Images with ImageNet
Use This Project: https://github.com/dusty-nv/jetson-inference
Building the project produces a C++ executable, imagenet-console, that runs ImageNet image classification with TensorRT.
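For reference, here is a minimal sketch of the build on the Jetson, following the upstream project's README at the time; exact dependencies and steps may differ depending on your JetPack/TensorRT version (newer revisions of the repository may also require cloning submodules with --recursive).

# Build jetson-inference on the Jetson (sketch; see the project README for current steps)
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build && cd build
cmake ../
make
# the compiled binaries, including imagenet-console, end up under build/aarch64/bin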
Shell Call Example
root@tegra-ubuntu:/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin# ./runclassify.sh
imagenet-console
  args (3):
    0 [/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference-master/build/aarch64/bin/imagenet-console]
    1 [backupimages/granny_smith_1.jpg]
    2 [images/output_0.jpg]

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[GIE] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[GIE] platform has FP16 support.
[GIE] networks/bvlc_googlenet.caffemodel loaded
[GIE] CUDA engine context initialized with 2 bindings
[GIE] networks/bvlc_googlenet.caffemodel input binding index: 0
[GIE] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x100ce0000 GPU 0x100ce0000
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[GIE] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x100e20000 GPU 0x100e20000
networks/bvlc_googlenet.caffemodel initialized.
[GIE] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
loaded image backupimages/granny_smith_1.jpg (1000 x 1000) 16000000 bytes
[cuda] cudaAllocMapped 16000000 bytes, CPU 0x100f20000 GPU 0x100f20000
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 input reformatter 0 - 1.207813 ms
[GIE] layer conv1/7x7_s2 + conv1/relu_7x7 - 6.144531 ms
[GIE] layer pool1/3x3_s2 - 1.301354 ms
[GIE] layer pool1/norm1 - 0.412240 ms
[GIE] layer conv2/3x3_reduce + conv2/relu_3x3_reduce - 0.737552 ms
[GIE] layer conv2/3x3 + conv2/relu_3x3 - 11.184843 ms
[GIE] layer conv2/norm2 - 1.052657 ms
[GIE] layer pool2/3x3_s2 - 0.946510 ms
[GIE] layer inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce - 1.299844 ms
[GIE] layer inception_3a/3x3 + inception_3a/relu_3x3 - 3.431562 ms
[GIE] layer inception_3a/5x5 + inception_3a/relu_5x5 - 0.697657 ms
[GIE] layer inception_3a/pool - 0.449479 ms
[GIE] layer inception_3a/pool_proj + inception_3a/relu_pool_proj - 0.542916 ms
[GIE] layer inception_3a/1x1 copy - 0.074375 ms
[GIE] layer inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce - 2.582917 ms
[GIE] layer inception_3b/3x3 + inception_3b/relu_3x3 - 6.324167 ms
[GIE] layer inception_3b/5x5 + inception_3b/relu_5x5 - 3.262968 ms
[GIE] layer inception_3b/pool - 0.586719 ms
[GIE] layer inception_3b/pool_proj + inception_3b/relu_pool_proj - 0.657552 ms
[GIE] layer inception_3b/1x1 copy - 0.111511 ms
[GIE] layer pool3/3x3_s2 - 0.608333 ms
[GIE] layer inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce - 1.589531 ms
[GIE] layer inception_4a/3x3 + inception_4a/relu_3x3 - 1.027396 ms
[GIE] layer inception_4a/5x5 + inception_4a/relu_5x5 - 0.420052 ms
[GIE] layer inception_4a/pool - 0.306563 ms
[GIE] layer inception_4a/pool_proj + inception_4a/relu_pool_proj - 0.464583 ms
[GIE] layer inception_4a/1x1 copy - 0.060417 ms
[GIE] layer inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce - 1.416875 ms
[GIE] layer inception_4b/3x3 + inception_4b/relu_3x3 - 1.157135 ms
[GIE] layer inception_4b/5x5 + inception_4b/relu_5x5 - 0.555886 ms
[GIE] layer inception_4b/pool - 0.331354 ms
[GIE] layer inception_4b/pool_proj + inception_4b/relu_pool_proj - 0.485677 ms
[GIE] layer inception_4b/1x1 copy - 0.056041 ms
[GIE] layer inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce - 1.454011 ms
[GIE] layer inception_4c/3x3 + inception_4c/relu_3x3 - 2.771198 ms
[GIE] layer inception_4c/5x5 + inception_4c/relu_5x5 - 0.554844 ms
[GIE] layer inception_4c/pool - 0.502604 ms
[GIE] layer inception_4c/pool_proj + inception_4c/relu_pool_proj - 0.486198 ms
[GIE] layer inception_4c/1x1 copy - 0.050833 ms
[GIE] layer inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce - 1.419271 ms
[GIE] layer inception_4d/3x3 + inception_4d/relu_3x3 - 1.781406 ms
[GIE] layer inception_4d/5x5 + inception_4d/relu_5x5 - 0.680052 ms
[GIE] layer inception_4d/pool - 0.333542 ms
[GIE] layer inception_4d/pool_proj + inception_4d/relu_pool_proj - 0.483854 ms
[GIE] layer inception_4d/1x1 copy - 0.048229 ms
[GIE] layer inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce - 2.225573 ms
[GIE] layer inception_4e/3x3 + inception_4e/relu_3x3 - 4.142656 ms
[GIE] layer inception_4e/5x5 + inception_4e/relu_5x5 - 0.954427 ms
[GIE] layer inception_4e/pool - 0.332917 ms
[GIE] layer inception_4e/pool_proj + inception_4e/relu_pool_proj - 0.667344 ms
[GIE] layer inception_4e/1x1 copy - 0.071666 ms
[GIE] layer pool4/3x3_s2 - 0.275625 ms
[GIE] layer inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce - 1.685417 ms
[GIE] layer inception_5a/3x3 + inception_5a/relu_3x3 - 2.085990 ms
[GIE] layer inception_5a/5x5 + inception_5a/relu_5x5 - 0.391198 ms
[GIE] layer inception_5a/pool - 0.187552 ms
[GIE] layer inception_5a/pool_proj + inception_5a/relu_pool_proj - 0.964791 ms
[GIE] layer inception_5a/1x1 copy - 0.041094 ms
[GIE] layer inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce - 2.327656 ms
[GIE] layer inception_5b/3x3 + inception_5b/relu_3x3 - 1.884532 ms
[GIE] layer inception_5b/5x5 + inception_5b/relu_5x5 - 1.364895 ms
[GIE] layer inception_5b/pool - 0.189219 ms
[GIE] layer inception_5b/pool_proj + inception_5b/relu_pool_proj - 0.453490 ms
[GIE] layer inception_5b/1x1 copy - 0.045781 ms
[GIE] layer pool5/7x7_s1 - 0.743281 ms
[GIE] layer loss3/classifier input reformatter 0 - 0.042552 ms
[GIE] layer loss3/classifier - 0.848386 ms
[GIE] layer loss3/classifier output reformatter 0 - 0.042969 ms
[GIE] layer prob - 0.092343 ms
[GIE] layer prob output reformatter 0 - 0.042552 ms
[GIE] layer network time - 84.158958 ms
class 0948 - 1.000000 (Granny Smith)
imagenet-console: 'backupimages/granny_smith_1.jpg' -> 100.00000% class #948 (Granny Smith)
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x101fa0000 GPU 0x101fa0000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x100e22000 GPU 0x100e22000
imagenet-console: attempting to save output image to 'images/output_0.jpg'
imagenet-console: completed saving 'images/output_0.jpg'
shutting down...
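The runclassify.sh used above is just a small convenience wrapper; its actual contents are not reproduced here, but assuming it simply forwards an input image and an output path to imagenet-console, it boils down to something like this sketch:

#!/bin/bash
# sketch of a runclassify.sh-style wrapper (assumed contents, not the original script):
# classify one image with GoogLeNet via TensorRT and save an annotated copy
./imagenet-console backupimages/granny_smith_1.jpg images/output_0.jpg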
Input File
Output File
The output image is annotated with the highest-probability class for the picture. This one says "sunscreen," which is a bit odd; I am guessing it's because my original photo is very sunny.
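To try this on your own photos, you can call imagenet-console directly instead of the wrapper; the file names below are just examples. The console prints the top class and its confidence, and the saved copy of the image is annotated with that label.

cd jetson-inference-master/build/aarch64/bin
# example file names; replace with your own input image and desired output path
./imagenet-console myphoto.jpg myphoto_classified.jpg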
Source Code
https://github.com/tspannhw/jetsontx1-TensorRT
Resources