
I've just started the third edition of the fastai course with Jeremy Howard. There's been a lot of buzz around how the non-profit is making deep learning accessible to hundreds of thousands of developers, most recently in the Economist.

For me, the most exciting part of the course is learning how to get cutting-edge results (in the 90%+ accuracy range) with just a few lines of code, using fastai library methods that have best practices baked in.

Below, I'll present a few lines of code that allow you to quickly classify different breeds of cats and dogs. You may recall that distinguishing between cats and dogs was a big challenge just a few years ago, but by now it's too easy. Thus, we're using the Oxford-IIIT Pet Dataset.

The code example references the latest fastai library v1.0.x, built on top of PyTorch. See the GitHub repo for more details.

So let’s get started!

First, import the prerequisite libraries

from fastai import *
from fastai.vision import *

Set training batch size to 64

bs = 64

Note: if your GPU is running out of memory, set a smaller batch size, e.g. 32 or 16.
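As a rule of thumb, activation memory scales with the batch size, while halving the batch size doubles the number of iterations per epoch. A quick back-of-the-envelope sketch (5,000 is a hypothetical training-set size, not the real dataset's):

```python
import math

n_train = 5000  # hypothetical number of training images
for bs in (64, 32, 16):
    # Smaller batches use less GPU memory but mean more batches per epoch
    print(bs, math.ceil(n_train / bs))  # batches per epoch
```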

Assuming path points to your dataset of pet images, where each image's label (the pet breed) is the name of its parent folder, we use the handy ImageDataBunch.from_folder method to prepare the data.

We hold out 20% of the images as a validation set and resize all images to 224×224 pixels. (224 is a multiple of 7, which suits the ResNet-34 model used in this example: the network downsamples its input by a factor of 32, so 224-pixel images yield 7×7 feature maps.)

data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2, 
 ds_tfms=get_transforms(), size=224)
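To make the folder-name labeling and the 20% split concrete, here's a small plain-Python sketch (the breed folders and image counts are hypothetical, not the real dataset's):

```python
# Folder-per-class layout expected by ImageDataBunch.from_folder:
# the parent folder's name is the image's label.
#
#   path/
#     abyssinian/img_001.jpg ...
#     beagle/img_001.jpg ...
dataset = {"abyssinian": 200, "beagle": 150}  # hypothetical image counts

total = sum(dataset.values())
valid = round(total * 0.2)  # valid_pct=0.2 holds out 20% for validation
train = total - valid
print(train, valid)  # 280 70

# size=224 pairs well with ResNet-34: the network downsamples by 32x,
# so 224-pixel inputs give 7x7 feature maps.
assert 224 // 32 == 7
```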

Also, let's normalize our dataset with the ImageNet statistics

data.normalize(imagenet_stats)
And preview our training data

data.show_batch(rows=3, figsize=(7,8))


Now we're ready to train the model. We're using ResNet-34, a convolutional neural network pre-trained on ImageNet.

learn = ConvLearner(data, models.resnet34, metrics=error_rate)

Note: in later fastai v1.0 releases this constructor was renamed create_cnn and then cnn_learner, so use the name that matches your installed version.

Now let's train the last layers (the head) of the model for four epochs (or cycles)

learn.fit_one_cycle(4)
And here’s the model training output

Total time: 02:14
epoch  train_loss  valid_loss error_rate
1      1.169139    0.316307   0.097804    (00:34)
2      0.510523    0.229121   0.072522    (00:33)
3      0.337948    0.201726   0.065868    (00:33)
4      0.242196    0.189312   0.060546    (00:33)

As you can see, after four epochs and a total time of 2 min 14 sec, we get a model with roughly 94% accuracy (an error rate of 6.0546%).
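As a quick sanity check on the arithmetic, accuracy is simply one minus the error rate reported for the final epoch:

```python
error_rate = 0.060546  # final-epoch error rate from the table above
accuracy = 1 - error_rate
print(f"{accuracy:.2%}")  # prints "93.95%", i.e. roughly 94% accuracy
```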

For comparison, the state-of-the-art classification accuracy on this dataset in 2012 was only 59%, as reported in the paper that introduced the dataset.

Final comments

Of course, we could fine-tune this model further and adjust the weights across all 34 layers. We could also replace ResNet-34 with a larger model, e.g. ResNet-50.
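In fastai v1, fine-tuning all layers is typically done by unfreezing the model (learn.unfreeze()) and training again with a learning-rate range, e.g. learn.fit_one_cycle(2, max_lr=slice(1e-6, 1e-4)). The library spreads that range across the network's layer groups; the helper below is a plain-Python sketch of that geometric spread (spread_lrs is a hypothetical name for illustration, not the fastai API):

```python
def spread_lrs(lo, hi, n_groups=3):
    """Spread learning rates geometrically across layer groups:
    smallest for the earliest layers, largest for the head."""
    if n_groups == 1:
        return [hi]
    ratio = (hi / lo) ** (1 / (n_groups - 1))
    return [lo * ratio**i for i in range(n_groups)]

# With max_lr=slice(1e-6, 1e-4) and three layer groups, the early
# (already well-trained) layers move gently while the head moves fastest:
print(spread_lrs(1e-6, 1e-4))  # ~[1e-06, 1e-05, 1e-04]
```

The intuition is that early convolutional layers learn generic features (edges, textures) that transfer well, so they need only small adjustments, while the newly attached head must learn the breed classes from scratch.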

You can check the Stanford Deep Learning benchmark site for the top-performing models. (As of Sep 2018, ResNet-50 was the top one.) If you do decide to use ResNet-50 for your training, make sure to set the image size to 320. Also, for ResNet-50 your GPU should have at least 11 GB of memory.

If you want to learn more, check out the fastai course site.

Last update: 08-17-2019 05:46 AM