Multi-Label Classification
Multi-label classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. There may be more than one kind of object, or there may be no objects at all in the classes that you are looking for.
For instance, this would have been a great approach for our bear classifier. One problem with the bear classifier that we rolled out in <> was that if a user uploaded something that wasn’t any kind of bear, the model would still say it was either a grizzly, black, or teddy bear—it had no ability to predict “not a bear at all.” In fact, after we have completed this chapter, it would be a great exercise for you to go back to your image classifier application, and try to retrain it using the multi-label technique, then test it by passing in an image that is not of any of your recognized classes.
In practice, we have not seen many examples of people training multi-label classifiers for this purpose—but we very often see both users and developers complaining about this problem. It appears that this simple solution is not at all widely understood or appreciated! Because images with zero matches or more than one match are probably more common in practice than images with exactly one, we should expect multi-label classifiers to be more widely applicable than single-label classifiers.
First, let’s see what a multi-label dataset looks like, then we’ll explain how to get it ready for our model. You’ll see that the architecture of the model does not change from the last chapter; only the loss function does. Let’s start with the data.
The Data
For our example we are going to use the PASCAL dataset, which can have more than one kind of classified object per image.
We begin by downloading and extracting the dataset as per usual:
In [ ]:
from fastai.vision.all import *
path = untar_data(URLs.PASCAL_2007)
This dataset is different from the ones we have seen before, in that it is not structured by filename or folder but instead comes with a CSV (comma-separated values) file telling us what labels to use for each image. We can inspect the CSV file by reading it into a Pandas DataFrame:
In [ ]:
df = pd.read_csv(path/'train.csv')
df.head()
Out[ ]:
 | fname | labels | is_valid
---|---|---|---
0 | 000005.jpg | chair | True
1 | 000007.jpg | car | True
2 | 000009.jpg | horse person | True
3 | 000012.jpg | car | False
4 | 000016.jpg | bicycle | True
As you can see, the list of categories in each image is shown as a space-delimited string.
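If you want a quick sanity check of those labels, one small aside (not part of the original notebook; it uses Pandas, which is introduced in the sidebar below) is to split each space-delimited string and count how many labels each image has:

# Split each space-delimited labels string into a list and count the labels per image
df['labels'].str.split(' ').apply(len).value_counts()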
Sidebar: Pandas and DataFrames
No, it’s not actually a panda! Pandas is a Python library that is used to manipulate and analyze tabular and time series data. The main class is DataFrame
, which represents a table of rows and columns. You can get a DataFrame from a CSV file, a database table, Python dictionaries, and many other sources. In Jupyter, a DataFrame is output as a formatted table, as shown here.
You can access rows and columns of a DataFrame with the iloc
property, as if it were a matrix:
In [ ]:
df.iloc[:,0]
Out[ ]:
0 000005.jpg
1 000007.jpg
2 000009.jpg
3 000012.jpg
4 000016.jpg
...
5006 009954.jpg
5007 009955.jpg
5008 009958.jpg
5009 009959.jpg
5010 009961.jpg
Name: fname, Length: 5011, dtype: object
In [ ]:
df.iloc[0,:]
# Trailing :s are always optional (in numpy, pytorch, pandas, etc.),
# so this is equivalent:
df.iloc[0]
Out[ ]:
fname 000005.jpg
labels chair
is_valid True
Name: 0, dtype: object
You can also grab a column by name by indexing into a DataFrame directly:
In [ ]:
df['fname']
Out[ ]:
0 000005.jpg
1 000007.jpg
2 000009.jpg
3 000012.jpg
4 000016.jpg
...
5006 009954.jpg
5007 009955.jpg
5008 009958.jpg
5009 009959.jpg
5010 009961.jpg
Name: fname, Length: 5011, dtype: object
You can create new columns and do calculations using columns:
In [ ]:
tmp_df = pd.DataFrame({'a':[1,2], 'b':[3,4]})
tmp_df
Out[ ]:
 | a | b
---|---|---
0 | 1 | 3
1 | 2 | 4
In [ ]:
tmp_df['c'] = tmp_df['a']+tmp_df['b']
tmp_df
Out[ ]:
 | a | b | c
---|---|---|---
0 | 1 | 3 | 4
1 | 2 | 4 | 6
Pandas is a fast and flexible library, and an important part of every data scientist’s Python toolbox. Unfortunately, its API can be rather confusing and surprising, so it takes a while to get familiar with it. If you haven’t used Pandas before, we’d suggest going through a tutorial; we are particularly fond of the book Python for Data Analysis by Wes McKinney, the creator of Pandas (O’Reilly). It also covers other important libraries like matplotlib
and numpy
. We will try to briefly describe Pandas functionality we use as we come across it, but will not go into the level of detail of McKinney’s book.
End sidebar
Now that we have seen what the data looks like, let’s make it ready for model training.
Constructing a DataBlock
How do we convert from a DataFrame
object to a DataLoaders
object? We generally suggest using the data block API for creating a DataLoaders
object, where possible, since it provides a good mix of flexibility and simplicity. Here we will show you the steps that we take to use the data blocks API to construct a DataLoaders
object in practice, using this dataset as an example.
As we have seen, PyTorch and fastai have two main classes for representing and accessing a training set or validation set:
- Dataset:: A collection that returns a tuple of your independent and dependent variable for a single item
- DataLoader:: An iterator that provides a stream of mini-batches, where each mini-batch is a tuple of a batch of independent variables and a batch of dependent variables
On top of these, fastai provides two classes for bringing your training and validation sets together:
- Datasets:: An object that contains a training Dataset and a validation Dataset
- DataLoaders:: An object that contains a training DataLoader and a validation DataLoader
Since a DataLoader
builds on top of a Dataset
and adds additional functionality to it (collating multiple items into a mini-batch), it’s often easiest to start by creating and testing Datasets
, and then look at DataLoaders
after that’s working.
When we create a DataBlock
, we build up gradually, step by step, and use the notebook to check our data along the way. This is a great way to make sure that you maintain momentum as you are coding, and that you keep an eye out for any problems. It’s easy to debug, because you know that if a problem arises, it is in the line of code you just typed!
Let’s start with the simplest case, which is a data block created with no parameters:
In [ ]:
dblock = DataBlock()
We can create a Datasets
object from this. The only thing needed is a source—in this case, our DataFrame:
In [ ]:
dsets = dblock.datasets(df)
This contains a train
and a valid
dataset, which we can index into:
In [ ]:
len(dsets.train),len(dsets.valid)
Out[ ]:
(4009, 1002)
In [ ]:
x,y = dsets.train[0]
x,y
Out[ ]:
(fname 008663.jpg
labels car person
is_valid False
Name: 4346, dtype: object,
fname 008663.jpg
labels car person
is_valid False
Name: 4346, dtype: object)
As you can see, this simply returns a row of the DataFrame, twice. This is because by default, the data block assumes we have two things: input and target. We are going to need to grab the appropriate fields from the DataFrame, which we can do by passing get_x
and get_y
functions:
In [ ]:
x['fname']
Out[ ]:
'008663.jpg'
In [ ]:
dblock = DataBlock(get_x = lambda r: r['fname'], get_y = lambda r: r['labels'])
dsets = dblock.datasets(df)
dsets.train[0]
Out[ ]:
('005620.jpg', 'aeroplane')
As you can see, rather than defining a function in the usual way, we are using Python’s lambda
keyword. This is just a shortcut for defining and then referring to a function. The following more verbose approach is identical:
In [ ]:
def get_x(r): return r['fname']
def get_y(r): return r['labels']
dblock = DataBlock(get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
Out[ ]:
('002549.jpg', 'tvmonitor')
Lambda functions are great for quickly iterating, but they are not compatible with serialization, so we advise you to use the more verbose approach if you want to export your Learner
after training (lambdas are fine if you are just experimenting).
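To see concretely why lambdas are a problem for exporting, here is a small sketch (not from the original notebook) using Python's pickle module, which is what serialization relies on under the hood:

import pickle

def get_x(r): return r['fname']         # a named, module-level function pickles fine
pickle.dumps(get_x)

try:
    pickle.dumps(lambda r: r['fname'])  # a lambda cannot be pickled
except Exception as e:
    print(type(e).__name__, ':', e)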
We can see that the independent variable will need to be converted into a complete path, so that we can open it as an image, and the dependent variable will need to be split on the space character so that it becomes a list (Python's split function splits on whitespace by default, which amounts to the same thing here):
In [ ]:
def get_x(r): return path/'train'/r['fname']
def get_y(r): return r['labels'].split(' ')
dblock = DataBlock(get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
Out[ ]:
(Path('/home/jhoward/.fastai/data/pascal_2007/train/002844.jpg'), ['train'])
To actually open the image and do the conversion to tensors, we will need to use a set of transforms; block types will provide us with those. We can use the same block types that we have used previously, with one exception: the ImageBlock
will work fine again, because we have a path that points to a valid image, but the CategoryBlock
is not going to work. The problem is that block returns a single integer, but we need to be able to have multiple labels for each item. To solve this, we use a MultiCategoryBlock
. This type of block expects to receive a list of strings, as we have in this case, so let’s test it out:
In [ ]:
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
Out[ ]:
(PILImage mode=RGB size=500x375,
TensorMultiCategory([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]))
As you can see, our list of categories is not encoded in the same way that it was for the regular CategoryBlock
. In that case, we had a single integer representing which category was present, based on its location in our vocab. In this case, however, we instead have a list of zeros, with a one in any position where that category is present. For example, if there is a one in the second and fourth positions, then that means that vocab items two and four are present in this image. This is known as one-hot encoding. The reason we can’t easily just use a list of category indices is that each list would be a different length, and PyTorch requires tensors, where everything has to be the same length.
jargon: One-hot encoding: Using a vector of zeros, with a one in each location that is represented in the data, to encode a list of integers.
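To make one-hot encoding concrete, here is a tiny sketch (not from the original notebook) using a hypothetical vocab of five classes, where the classes at indices 1 and 3 are present:

vocab_size = 5                # hypothetical five-class vocab
present = tensor([1, 3])      # indices of the classes present in this image
one_hot = torch.zeros(vocab_size)
one_hot[present] = 1.
one_hot                       # tensor([0., 1., 0., 1., 0.])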
Let’s check what the categories represent for this example (we are using the convenient torch.where
function, which tells us all of the indices where our condition is true or false):
In [ ]:
idxs = torch.where(dsets.train[0][1]==1.)[0]
dsets.train.vocab[idxs]
Out[ ]:
(#1) ['dog']
With NumPy arrays, PyTorch tensors, and fastai’s L
class, we can index directly using a list or vector, which makes a lot of code (such as this example) much clearer and more concise.
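For instance, here is a tiny illustration (not from the original notebook) of indexing with a list of indices:

t = tensor([10, 20, 30, 40, 50])
t[[0, 2, 4]]                  # tensor([10, 30, 50])

letters = L('a', 'b', 'c', 'd', 'e')
letters[[0, 2, 4]]            # should show (#3) ['a','c','e']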
We have ignored the column is_valid
up until now, which means that DataBlock
has been using a random split by default. To explicitly choose the elements of our validation set, we need to write a function and pass it to splitter
(or use one of fastai’s predefined functions or classes). It will take the items (here our whole DataFrame) and must return two (or more) lists of integers:
In [ ]:
def splitter(df):
train = df.index[~df['is_valid']].tolist()
valid = df.index[df['is_valid']].tolist()
return train,valid
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
splitter=splitter,
get_x=get_x,
get_y=get_y)
dsets = dblock.datasets(df)
dsets.train[0]
Out[ ]:
(PILImage mode=RGB size=500x333,
TensorMultiCategory([0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]))
As we have discussed, a DataLoader
collates the items from a Dataset
into a mini-batch. This is a tuple of tensors, where each tensor simply stacks the items from that location in the Dataset
item.
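As a rough sketch of what that collation amounts to (ignoring fastai's transform pipeline and using made-up shapes), you can think of it as stacking the per-item tensors along a new batch dimension:

# Four fake items, each an (image tensor, one-hot target) tuple
items = [(torch.randn(3, 128, 128), torch.zeros(20)) for _ in range(4)]
xb = torch.stack([x for x, y in items])   # batch of images: [4, 3, 128, 128]
yb = torch.stack([y for x, y in items])   # batch of targets: [4, 20]
xb.shape, yb.shape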
Now that we have confirmed that the individual items look okay, there’s one more step before we can create our DataLoaders: we need to ensure that every item is of the same size. To do this, we can use RandomResizedCrop:
In [ ]:
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
splitter=splitter,
get_x=get_x,
get_y=get_y,
item_tfms = RandomResizedCrop(128, min_scale=0.35))
dls = dblock.dataloaders(df)
And now we can display a sample of our data:
In [ ]:
dls.show_batch(nrows=1, ncols=3)
Remember that if anything goes wrong when you create your DataLoaders
from your DataBlock
, or if you want to view exactly what happens with your DataBlock
, you can use the summary
method we presented in the last chapter.
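For example, running it on our source DataFrame should print out each step applied to the first item, which makes it easy to see where a pipeline breaks:

dblock.summary(df)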
Our data is now ready for training a model. As we will see, nothing is going to change when we create our Learner
, but behind the scenes, the fastai library will pick a new loss function for us: binary cross-entropy.
Binary Cross-Entropy
Now we’ll create our Learner
. We saw in <> that a Learner
object contains four main things: the model, a DataLoaders
object, an Optimizer
, and the loss function to use. We already have our DataLoaders
, we can leverage fastai’s resnet
models (which we’ll learn how to create from scratch later), and we know how to create an SGD
optimizer. So let’s focus on ensuring we have a suitable loss function. To do this, let’s use cnn_learner
to create a Learner
, so we can look at its activations:
In [ ]:
learn = cnn_learner(dls, resnet18)
We also saw that the model in a Learner
is generally an object of a class inheriting from nn.Module
, and that we can call it using parentheses and it will return the activations of a model. You should pass it your independent variable, as a mini-batch. We can try it out by grabbing a mini batch from our DataLoader
and then passing it to the model:
In [ ]:
x,y = to_cpu(dls.train.one_batch())
activs = learn.model(x)
activs.shape
Out[ ]:
torch.Size([64, 20])
Think about why activs
has this shape—we have a batch size of 64, and we need to calculate the probability of each of 20 categories. Here’s what one of those activations looks like:
In [ ]:
activs[0]
Out[ ]:
TensorImage([ 0.7476, -1.1988, 4.5421, -1.5915, -0.6749, 0.0343, -2.4930, -0.8330, -0.3817, -1.4876, -0.1683, 2.1547, -3.4151, -1.1743, 0.1530, -1.6801, -2.3067, 0.7063, -1.3358, -0.3715],
grad_fn=<AliasBackward>)
note: Getting Model Activations: Knowing how to manually get a mini-batch and pass it into a model, and look at the activations and loss, is really important for debugging your model. It is also very helpful for learning, so that you can see exactly what is going on.
They aren’t yet scaled to between 0 and 1, but we learned how to do that in <>, using the sigmoid
function. We also saw how to calculate a loss based on this—this is our loss function from <>, with the addition of log
as discussed in the last chapter:
In [ ]:
def binary_cross_entropy(inputs, targets):
    inputs = inputs.sigmoid()
    return -torch.where(targets==1, inputs, 1-inputs).log().mean()
Note that because we have a one-hot-encoded dependent variable, we can’t directly use nll_loss
or softmax
(and therefore we can’t use cross_entropy
):
- softmax, as we saw, requires that all predictions sum to 1, and tends to push one activation to be much larger than the others (due to the use of exp); however, we may well have multiple objects that we’re confident appear in an image, so restricting the maximum sum of activations to 1 is not a good idea. By the same reasoning, we may want the sum to be less than 1, if we don’t think any of the categories appear in an image.
- nll_loss, as we saw, returns the value of just one activation: the single activation corresponding with the single label for an item. This doesn’t make sense when we have multiple labels.
On the other hand, the binary_cross_entropy
function, which is just mnist_loss
along with log
, provides just what we need, thanks to the magic of PyTorch’s elementwise operations. Each activation will be compared to each target for each column, so we don’t have to do anything to make this function work for multiple columns.
j: One of the things I really like about working with libraries like PyTorch, with broadcasting and elementwise operations, is that quite frequently I find I can write code that works equally well for a single item or a batch of items, without changes.
binary_cross_entropy
is a great example of this. By using these operations, we don’t have to write loops ourselves, and can rely on PyTorch to do the looping we need as appropriate for the rank of the tensors we’re working with.
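As a small check (not in the original notebook), the same function handles a single item’s activations or a whole mini-batch without any change:

binary_cross_entropy(activs[0], y[0])   # one item: activations and targets of shape [20]
binary_cross_entropy(activs, y)         # whole mini-batch: shape [64, 20]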
PyTorch already provides this function for us. In fact, it provides a number of versions, with rather confusing names!
F.binary_cross_entropy
and its module equivalent nn.BCELoss
calculate cross-entropy on a one-hot-encoded target, but do not include the initial sigmoid
. Normally for one-hot-encoded targets you’ll want F.binary_cross_entropy_with_logits
(or nn.BCEWithLogitsLoss
), which do both sigmoid and binary cross-entropy in a single function, as in the preceding example.
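As a quick sanity check (an illustration, not part of the original notebook), the three formulations should agree up to floating-point error on our mini-batch:

F.binary_cross_entropy_with_logits(activs, y)   # sigmoid + BCE in one call
F.binary_cross_entropy(activs.sigmoid(), y)     # sigmoid applied manually first
binary_cross_entropy(activs, y)                 # our hand-written version above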
The equivalent for single-label datasets (like MNIST or the Pet dataset), where the target is encoded as a single integer, is F.nll_loss
or nn.NLLLoss
for the version without the initial softmax, and F.cross_entropy
or nn.CrossEntropyLoss
for the version with the initial softmax.
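The same relationship holds in the single-label case: cross_entropy is just log_softmax followed by nll_loss. Here is a tiny sketch with made-up activations (a hypothetical example, not from the original notebook):

acts = torch.randn(2, 3)    # hypothetical: 2 items, 3 classes
targs = tensor([0, 2])      # one integer label per item
F.cross_entropy(acts, targs)                      # softmax, log, and NLL in one call
F.nll_loss(F.log_softmax(acts, dim=-1), targs)    # the same result in two steps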
Since we have a one-hot-encoded target, we will use BCEWithLogitsLoss
:
In [ ]:
loss_func = nn.BCEWithLogitsLoss()
loss = loss_func(activs, y)
loss
Out[ ]:
TensorImage(1.0342, grad_fn=<AliasBackward>)
We don’t actually need to tell fastai to use this loss function (although we can if we want) since it will be automatically chosen for us. fastai knows that the DataLoaders
has multiple category labels, so it will use nn.BCEWithLogitsLoss
by default.
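If you want to confirm which loss function fastai picked, you can inspect the Learner directly; it should show a flattened wrapper around BCEWithLogitsLoss:

learn.loss_func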
One change compared to the last chapter is the metric we use: because this is a multi-label problem, we can’t use the accuracy function. Why is that? Well, accuracy was comparing our outputs to our targets like so:
def accuracy(inp, targ, axis=-1):
"Compute accuracy with `targ` when `pred` is bs * n_classes"
pred = inp.argmax(dim=axis)
return (pred == targ).float().mean()
The class predicted was the one with the highest activation (this is what argmax
does). Here it doesn’t work because we could have more than one prediction on a single image. After applying the sigmoid to our activations (to make them between 0 and 1), we need to decide which ones are 0s and which ones are 1s by picking a threshold. Each value above the threshold will be considered as a 1, and each value lower than the threshold will be considered a 0:
def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True):
"Compute accuracy when `inp` and `targ` are the same size."
if sigmoid: inp = inp.sigmoid()
return ((inp>thresh)==targ.bool()).float().mean()
If we pass accuracy_multi
directly as a metric, it will use the default value for threshold
, which is 0.5. We might want to adjust that default and create a new version of accuracy_multi
that has a different default. To help with this, there is a function in Python called partial
. It allows us to bind a function with some arguments or keyword arguments, making a new version of that function that, whenever it is called, always includes those arguments. For instance, here is a simple function taking two arguments:
In [ ]:
def say_hello(name, say_what="Hello"): return f"{say_what} {name}."
say_hello('Jeremy'),say_hello('Jeremy', 'Ahoy!')
Out[ ]:
('Hello Jeremy.', 'Ahoy! Jeremy.')
We can switch to a French version of that function by using partial
:
In [ ]:
f = partial(say_hello, say_what="Bonjour")
f("Jeremy"),f("Sylvain")
Out[ ]:
('Bonjour Jeremy.', 'Bonjour Sylvain.')
We can now train our model. Let’s try setting the accuracy threshold to 0.2 for our metric:
In [ ]:
learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.2))
learn.fine_tune(3, base_lr=3e-3, freeze_epochs=4)
epoch | train_loss | valid_loss | accuracy_multi | time |
---|---|---|---|---|
0 | 0.942663 | 0.703737 | 0.233307 | 00:08 |
1 | 0.821548 | 0.550827 | 0.295319 | 00:08 |
2 | 0.604189 | 0.202585 | 0.816474 | 00:08 |
3 | 0.359258 | 0.123299 | 0.944283 | 00:08 |
epoch | train_loss | valid_loss | accuracy_multi | time |
---|---|---|---|---|
0 | 0.135746 | 0.123404 | 0.944442 | 00:09 |
1 | 0.118443 | 0.107534 | 0.951255 | 00:09 |
2 | 0.098525 | 0.104778 | 0.951554 | 00:10 |
Picking a threshold is important. If you pick a threshold that’s too low, the model will predict many categories that aren’t really present, and accuracy will suffer. We can see this by changing our metric, and then calling validate, which returns the validation loss and metrics:
In [ ]:
learn.metrics = partial(accuracy_multi, thresh=0.1)
learn.validate()
Out[ ]:
(#2) [0.10477833449840546,0.9314740300178528]
If you pick a threshold that’s too high, you’ll only be selecting the objects for which your model is very confident:
In [ ]:
learn.metrics = partial(accuracy_multi, thresh=0.99)
learn.validate()
Out[ ]:
(#2) [0.10477833449840546,0.9429482221603394]
We can find the best threshold by trying a few levels and seeing what works best. This is much faster if we just grab the predictions once:
In [ ]:
preds,targs = learn.get_preds()
Then we can call the metric directly. Note that by default get_preds
applies the output activation function (sigmoid, in this case) for us, so we’ll need to tell accuracy_multi
to not apply it:
In [ ]:
accuracy_multi(preds, targs, thresh=0.9, sigmoid=False)
Out[ ]:
TensorImage(0.9567)
We can now use this approach to find the best threshold level:
In [ ]:
xs = torch.linspace(0.05,0.95,29)
accs = [accuracy_multi(preds, targs, thresh=i, sigmoid=False) for i in xs]
plt.plot(xs,accs);
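If you want the single best value rather than reading it off the plot, you can take the argmax (a small addition to the notebook’s code):

best_thresh = xs[torch.stack(accs).argmax()]
best_thresh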
In this case, we’re using the validation set to pick a hyperparameter (the threshold), which is the purpose of the validation set. Sometimes students have expressed their concern that we might be overfitting to the validation set, since we’re trying lots of values to see which is the best. However, as you see in the plot, changing the threshold in this case results in a smooth curve, so we’re clearly not picking some inappropriate outlier. This is a good example of where you have to be careful of the difference between theory (don’t try lots of hyperparameter values or you might overfit the validation set) versus practice (if the relationship is smooth, then it’s fine to do this).
This concludes the part of this chapter dedicated to multi-label classification. Next, we’ll take a look at a regression problem.