
I've just started the third edition of the fast.ai course with Jeremy Howard. There's been a lot of buzz around fast.ai and how the non-profit is making deep learning accessible to hundreds of thousands of developers. The latest mention was in the Economist: https://www.economist.com/business/2018/10/27/new-schemes-teach-the-masses-to-build-ai.

For me the most exciting part of the course is learning how to get cutting edge results (in the 90%+ accuracy range) with just a few lines of code using fastai library methods that have best practices baked in.

Below, I'll present a few lines of code that let you quickly classify different breeds of cats and dogs. You may recall that merely distinguishing cats from dogs was a serious challenge just a few years ago, but that task has since become too easy. That's why we're using the Oxford-IIIT Pet Dataset, which covers 37 breeds: http://www.robots.ox.ac.uk/~vgg/data/pets/.

The code examples use the latest fastai library, v1.0.x, built on top of PyTorch. See the GitHub repo for more details: https://github.com/fastai.

So let’s get started!

First, import prerequisite libraries

from fastai import *
from fastai.vision import *

Set training batch size to 64

bs = 64

Note: if your GPU is running out of memory, set a smaller batch size, e.g. 32 or 16.
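As a plain-Python illustration of that fallback (this is just the retry idea, not a fastai API): start at 64 and halve the batch size until training fits in GPU memory.

```python
def batch_size_candidates(bs=64, min_bs=8):
    """Candidate batch sizes to try in order, halving after each
    GPU out-of-memory failure."""
    sizes = []
    while bs >= min_bs:
        sizes.append(bs)
        bs //= 2
    return sizes

print(batch_size_candidates(64))  # [64, 32, 16, 8]
```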

Assuming path points to your dataset of pet images, where the image labels (the pet breeds) are taken from the folder names, we can use the handy data-preparation method ImageDataBunch.from_folder.

We hold out 20% of the images as a validation set and resize all the images to 224. (224 = 7 × 32; since ResNet-34, the model used in this example, downsamples its input by a factor of 32, this size yields a clean 7 × 7 final feature map.)

data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2, 
 ds_tfms=get_transforms(), size=224)
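To make the valid_pct=0.2 split concrete, here is a minimal pure-Python sketch of holding out a random 20% of the indices (the helper name split_indices and the count of 1000 are mine, purely for illustration; fastai handles this internally):

```python
import random

def split_indices(n, valid_pct=0.2, seed=42):
    """Shuffle indices 0..n-1 and hold out valid_pct of them for validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * valid_pct)
    return idx[cut:], idx[:cut]  # (train indices, validation indices)

train_idx, valid_idx = split_indices(1000)
print(len(train_idx), len(valid_idx))  # 800 200
```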

Also, let’s normalize our dataset

data.normalize(imagenet_stats)
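Normalizing subtracts the per-channel ImageNet mean and divides by the per-channel standard deviation, so our pixel statistics match what the pre-trained ResNet saw. A minimal sketch of the arithmetic (the mean/std values below are the widely used ImageNet statistics; the helper function is mine, for illustration):

```python
# Per-channel ImageNet statistics (RGB order), as used by imagenet_stats
mean = [0.485, 0.456, 0.406]
std  = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Normalize one RGB pixel (values in [0, 1]) channel by channel."""
    return [(c - m) / s for c, m, s in zip(rgb, mean, std)]

# A pixel exactly at the ImageNet mean maps to zero in every channel
print(normalize_pixel([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```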

And preview our training data

data.show_batch(rows=3, figsize=(7,8))


Now we’re ready to train the model. We’re using a pre-trained convolutional neural net ResNet-34. (To learn more about convolutional neural networks see https://cs231n.github.io/convolutional-networks/.)

learn = ConvLearner(data, models.resnet34, metrics=error_rate)
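The error_rate metric we pass in is simply the fraction of misclassified images, i.e. 1 − accuracy. A pure-Python sketch of the idea (fastai computes this on tensors, but the arithmetic is the same):

```python
def error_rate(preds, targets):
    """Fraction of predictions that do not match the target labels."""
    wrong = sum(p != t for p, t in zip(preds, targets))
    return wrong / len(targets)

# One mistake out of four predictions
print(error_rate([0, 1, 2, 2], [0, 1, 1, 2]))  # 0.25
```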

Now let's train the final layers of the model for four epochs (full passes over the training data)

learn.fit_one_cycle(4)
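fit_one_cycle implements Leslie Smith's one-cycle policy: the learning rate ramps up to a maximum and then anneals back down over the course of training. A rough, simplified sketch of the schedule's shape (a linear triangle; fastai's actual implementation uses smoother annealing, and the numbers here are illustrative):

```python
def one_cycle_lr(step, total_steps, lr_max=1e-2, pct_start=0.3):
    """Linear warm-up to lr_max for the first pct_start of training,
    then linear decay back toward zero."""
    peak = int(total_steps * pct_start)
    if step <= peak:
        return lr_max * step / peak
    return lr_max * (total_steps - step) / (total_steps - peak)

schedule = [one_cycle_lr(s, 100) for s in range(101)]
print(max(schedule))  # peaks at lr_max = 0.01
```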

And here’s the model training output

Total time: 02:14
epoch  train_loss  valid_loss error_rate
1      1.169139    0.316307   0.097804    (00:34)
2      0.510523    0.229121   0.072522    (00:33)
3      0.337948    0.201726   0.065868    (00:33)
4      0.242196    0.189312   0.060546    (00:33)

As you can see, after four epochs and a total time of 2 minutes 14 seconds, we get a model with roughly 94% accuracy (error rate 0.060546, i.e. about 6.05%).

For comparison, the state-of-the-art classification accuracy on this dataset in 2012 was only 59%! Here's the 2012 paper: http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf.

Final comments

Of course, we could fine-tune this model further by unfreezing it and adjusting the weights across all 34 layers. We could also replace ResNet-34 with a larger model, e.g. ResNet-50.

You can check the Stanford Deep Learning benchmark site https://dawn.cs.stanford.edu/benchmark/ for the top performing models. (As of Sep 2018 ResNet-50 was the top one.) If you do decide to use ResNet-50 for your training, make sure to set image size to 320. Also, for ResNet-50 your GPU should have at least 11GB of memory.

If you want to learn more about the fast.ai course, here's the link: https://course.fast.ai/

Last update: 08-17-2019 05:46 AM