vision

Application to Computer Vision

The vision module of the fastai library contains all the necessary functions to define a Dataset and train a model for computer vision tasks. It contains four submodules to reach that goal (a quick import sketch follows the list):

  • vision.image contains the basic definition of an Image object and all the functions that are used behind the scenes to apply transformations to such an object.
  • vision.transform contains all the transforms we can use for data augmentation.
  • vision.data contains the definition of ImageDataBunch as well as the utility function to easily build a DataBunch for Computer Vision problems.
  • vision.learner lets you build and fine-tune models with a pretrained CNN backbone or train a randomly initialized model from scratch.
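
All of these names are re-exported by the wildcard import used below, so you rarely need anything else; but if you prefer explicit imports, a minimal sketch of where the names above live in fastai v1:

  # Everything here is re-exported by `from fastai.vision import *`;
  # the explicit imports just show which submodule owns which name.
  from fastai.vision.image import Image, open_image      # Image class and file loader
  from fastai.vision.transform import get_transforms     # data augmentation presets
  from fastai.vision.data import ImageDataBunch          # DataBunch builders
  from fastai.vision.learner import cnn_learner          # pretrained-CNN learners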

Each of the four module links above includes a quick overview and examples of that module’s functionality, as well as complete API documentation. Below, we’ll provide a walk-through of end-to-end computer vision model training with the most commonly used functionality.

Minimal training example

First, import everything you need from the fastai library.

  from fastai.vision import *

Next, create a data folder containing an MNIST subset in data/mnist_sample using this little helper that will download it for you:

  path = untar_data(URLs.MNIST_SAMPLE)
  path

  PosixPath('/home/ubuntu/.fastai/data/mnist_sample')
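
If you want to peek at what was downloaded, fastai patches pathlib's Path with an ls method; the comment below describes what MNIST_SAMPLE typically contains:

  # Inspect the dataset folder: MNIST_SAMPLE ships with `train` and
  # `valid` subfolders (one folder per class) plus a CSV of labels.
  path.ls()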

Since this contains standard train and valid folders, and each contains one folder per class, you can create a DataBunch in a single line:

  data = ImageDataBunch.from_folder(path)
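
Before training, it’s worth a quick sanity check on what the DataBunch picked up; the classes attribute and the show_batch method are handy for that:

  data.classes                             # ['3', '7'] for this two-class subset
  data.show_batch(rows=3, figsize=(5,5))   # grid of labelled training images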

Then you load a pretrained model (from vision.models), ready for fine-tuning:

  learn = cnn_learner(data, models.resnet18, metrics=accuracy)

And now you’re ready to train!

  learn.fit(1)

Total time: 00:09

  epoch  train_loss  valid_loss  accuracy
  1      0.140444    0.097685    0.968597
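
Once you’re happy with the result, you’ll usually want to keep the weights; Learner provides save and load for that (the checkpoint name here is just an illustration):

  learn.save('mnist-stage-1')  # writes models/mnist-stage-1.pth under the data path
  learn.load('mnist-stage-1')  # restore the same weights later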

Let’s look briefly at each of the vision submodules.

Getting the data

The most important piece of vision.data for classification is the ImageDataBunch. If you’ve got labels as subfolders, then you can just say:

  data = ImageDataBunch.from_folder(path)

This builds training and validation sets from the class subfolders inside train and valid. You can then access either set through the corresponding attribute on data:

  ds = data.train_ds

Images

That brings us to vision.image, which defines the Image class. Our dataset will return Image objects when we index it. Images automatically display in notebooks:

  img,label = ds[0]
  img

[Image output: an MNIST digit]

You can change the way they’re displayed:

  img.show(figsize=(2,2), title='MNIST digit')

[Image output: the digit shown at a smaller size with the title 'MNIST digit']

And you can transform them in various ways:

  img.rotate(35)

[Image output: the digit rotated by 35 degrees]

Data augmentation

vision.transform lets us do data augmentation. The simplest approach is to choose from a standard set of transforms, whose defaults are designed for photos:

  help(get_transforms)

  Help on function get_transforms in module fastai.vision.transform:

  get_transforms(do_flip: bool = True, flip_vert: bool = False, max_rotate: float = 10.0, max_zoom: float = 1.1, max_lighting: float = 0.2, max_warp: float = 0.2, p_affine: float = 0.75, p_lighting: float = 0.75, xtra_tfms: Union[Collection[fastai.vision.image.Transform], NoneType] = None) -> Collection[fastai.vision.image.Transform]
      Utility func to easily create a list of flip, rotate, `zoom`, warp, lighting transforms.
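
For digits, horizontal flips make no sense, so you might switch parts of the preset off; the keyword arguments below come straight from the signature above (the variable name is just illustrative):

  # Disable mirroring and tone down warping for digit images; the result
  # is a (train_tfms, valid_tfms) pair that can be passed as ds_tfms below.
  preset_tfms = get_transforms(do_flip=False, max_warp=0.1)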

…or create the exact list you want:

  tfms = [rotate(degrees=(-20,20)), symmetric_warp(magnitude=(-0.3,0.3))]

You can apply these transforms to your images using their apply_tfms method:

  fig,axes = plt.subplots(1,4,figsize=(8,2))
  for ax in axes: ds[0][0].apply_tfms(tfms).show(ax=ax)

[Image output: four randomly transformed versions of the digit]

You can create a DataBunch with your transformed training and validation data loaders in a single step, passing in a tuple of (train_tfms, valid_tfms):

  data = ImageDataBunch.from_folder(path, ds_tfms=(tfms, []))

Training and interpretation

Now you’re ready to train a model: simply pass your DataBunch and a model creation function (such as one provided by vision.models or torchvision.models) to cnn_learner, and call fit:

  learn = cnn_learner(data, models.resnet18, metrics=accuracy)
  learn.fit(1)

Total time: 00:08

  epoch  train_loss  valid_loss  accuracy
  1      0.194779    0.131709    0.950932

Now we can take a look at the most incorrectly classified images, as well as the confusion matrix.

  interp = ClassificationInterpretation.from_learner(learn)
  interp.plot_top_losses(9, figsize=(6,6))

[Image output: grid of the nine images with the highest loss]

  interp.plot_confusion_matrix()

[Image output: the confusion matrix]
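
For a quick text summary of the same information, ClassificationInterpretation also has a most_confused method listing the worst (actual, predicted) pairs:

  # (actual, predicted, count) for the most frequent mistakes;
  # min_val filters out pairs confused fewer than 2 times.
  interp.most_confused(min_val=2)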

To predict the result on a new image (of type Image, so opened with open_image for instance), just use learn.predict. It returns the predicted class, its index, and the probabilities of each class.

  img = learn.data.train_ds[0][0]
  learn.predict(img)

  (Category 3, tensor(0), tensor([0.5551, 0.4449]))
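
To run the same prediction on a file from disk, open it with open_image first (the filename below is hypothetical; any image file works):

  # open_image returns a fastai `Image`; the exact path is made up here.
  img = open_image(path/'valid'/'7'/'some_digit.png')
  pred_class, pred_idx, probs = learn.predict(img)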
