CAM and Hooks

The class activation map (CAM) was introduced by Bolei Zhou et al. in “Learning Deep Features for Discriminative Localization”. It uses the output of the last convolutional layer (just before the average pooling layer) together with the predictions to give us a heatmap visualization of why the model made its decision. This is a useful tool for interpretation.

More precisely, at each position of our final convolutional layer we have as many activations as there are inputs to the final linear layer. We can therefore compute the dot product of those activations with the final weights to get, for each location on our feature map, the score of the feature that was used to make a decision.
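
To make the shapes concrete, here is a minimal sketch with dummy tensors (the 512 channels and 7×7 grid match what a ResNet-34 body produces for a 224×224 input; the real computation with our trained model appears later in this section):

import torch

# Dummy stand-ins: a ResNet-34 body outputs 512 channels on a 7x7 grid,
# and our classifier distinguishes 2 classes (cat vs. dog)
n_classes, n_channels = 2, 512
weights = torch.randn(n_classes, n_channels)  # final linear layer weights
activations = torch.randn(n_channels, 7, 7)   # last conv output, one image

# Dot product over the channel dimension: one 7x7 score map per class
score_maps = torch.einsum('ck,kij->cij', weights, activations)
print(score_maps.shape)  # torch.Size([2, 7, 7])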

We’re going to need a way to get access to the activations inside the model while it’s training. In PyTorch this can be done with a hook. Hooks are PyTorch’s equivalent of fastai’s callbacks. However, rather than allowing you to inject code into the training loop like a fastai Learner callback, hooks allow you to inject code into the forward and backward calculations themselves. We can attach a hook to any layer of the model, and it will be executed when we compute the outputs (forward hook) or during backpropagation (backward hook). A forward hook is a function that takes three things—a module, its input, and its output—and it can perform any behavior you want. (fastai also provides a handy HookCallback that we won’t cover here, but take a look at the fastai docs; it makes working with hooks a little easier.)
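
To see the mechanics in isolation (a standalone sketch, independent of the model we use below), here is a forward hook attached to a single convolutional layer; note the (module, input, output) signature:

import torch
import torch.nn as nn

# A forward hook receives the module, a tuple of its inputs, and its output
def print_shape(module, inputs, output):
    print(type(module).__name__, 'produced output of shape', output.shape)

layer = nn.Conv2d(3, 8, kernel_size=3)
handle = layer.register_forward_hook(print_shape)
_ = layer(torch.randn(1, 3, 32, 32))  # prints: Conv2d produced output of shape ...
handle.remove()  # detach the hook when we're done with it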

To illustrate, we’ll use the same cats and dogs model we trained earlier in the book:

In [ ]:

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=21,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
epoch  train_loss  valid_loss  error_rate  time
0      0.145994    0.019272    0.006089    00:14

epoch  train_loss  valid_loss  error_rate  time
0      0.053405    0.052540    0.010825    00:19

To start, we’ll grab a cat picture and a batch of data:

In [ ]:

img = PILImage.create(image_cat())
x, = first(dls.test_dl([img]))

For CAM we want to store the activations of the last convolutional layer. We put our hook function in a class so it has a state that we can access later, and just store a copy of the output:

In [ ]:

class Hook():
    def hook_func(self, m, i, o): self.stored = o.detach().clone()

We can then instantiate a Hook and attach it to the layer we want, which is the last layer of the CNN body:

In [ ]:

hook_output = Hook()
hook = learn.model[0].register_forward_hook(hook_output.hook_func)

Now we can feed our batch through the model:

In [ ]:

with torch.no_grad(): output = learn.model.eval()(x)

And we can access our stored activations:

In [ ]:

act = hook_output.stored[0]

Let’s also double-check our predictions:

In [ ]:

F.softmax(output, dim=-1)

Out[ ]:

tensor([[0.0010, 0.9990]], device='cuda:0')

We know 0 (for False) is “dog,” because the classes are automatically sorted in fastai, but we can still double-check by looking at dls.vocab:

In [ ]:

dls.vocab

Out[ ]:

(#2) [False,True]

So, our model is very confident this was a picture of a cat.

To take the dot product of our weight matrix (2 by number of activations) with the activations (number of activations by rows by columns, since we already selected the first image of the batch), we use torch.einsum:

In [ ]:

x.shape

Out[ ]:

torch.Size([1, 3, 224, 224])

In [ ]:

cam_map = torch.einsum('ck,kij->cij', learn.model[1][-1].weight, act)
cam_map.shape

Out[ ]:

torch.Size([2, 7, 7])

For each image in our batch, and for each class, we get a 7×7 feature map that tells us where the activations were higher and where they were lower. This will let us see which areas of the pictures influenced the model’s decision.

For instance, we can find out which areas made the model decide this animal was a cat (note that we need to decode the input x since it’s been normalized by the DataLoader, and we need to cast to TensorImage since at the time this book is written PyTorch does not maintain types when indexing—this may be fixed by the time you are reading this):

In [ ]:

x_dec = TensorImage(dls.train.decode((x,))[0][0])
_,ax = plt.subplots()
x_dec.show(ctx=ax)
ax.imshow(cam_map[1].detach().cpu(), alpha=0.6, extent=(0,224,224,0),
          interpolation='bilinear', cmap='magma');

(Figure: the CAM heatmap overlaid on the cat picture)

The areas in bright yellow correspond to high activations and the areas in purple to low activations. In this case, we can see the head and the front paw were the two main areas that made the model decide it was a picture of a cat.

Once you’re done with your hook, you should remove it; otherwise it may leak memory:

In [ ]:

hook.remove()

That’s why it’s usually a good idea to have the Hook class be a context manager, registering the hook when you enter it and removing it when you exit. A context manager is a Python construct that calls __enter__ when entering a with block and __exit__ when leaving it. For instance, this is how Python handles the with open(...) as f: construct that you’ll often see for opening files without requiring an explicit close(f) at the end. If we define Hook as follows:

In [ ]:

class Hook():
    def __init__(self, m):
        self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o): self.stored = o.detach().clone()
    def __enter__(self, *args): return self
    def __exit__(self, *args): self.hook.remove()

we can safely use it this way:

In [ ]:

with Hook(learn.model[0]) as hook:
    with torch.no_grad(): output = learn.model.eval()(x.cuda())
    act = hook.stored

fastai provides this Hook class for you, as well as some other handy classes to make working with hooks easier.
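
For instance, a brief sketch using fastai's hook_output (from fastai.callback.hook, pulled in by the usual fastai import): it attaches a forward hook that stores the layer's output in .stored, and the Hook it returns can be used as a context manager:

# Assumes hook_output is available via the standard fastai import
with hook_output(learn.model[0]) as hook:
    with torch.no_grad(): output = learn.model.eval()(x)
act = hook.stored[0]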

This method is useful, but it only works for the last convolutional layer, since it relies on the classifier weights that multiply that layer's activations. Gradient CAM is a variant that addresses this problem.
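
As a taste of what that involves, here is a minimal standalone sketch of a backward hook (using PyTorch's register_full_backward_hook, the current API for this); the hook fires during backpropagation and receives the gradients flowing through the module:

import torch
import torch.nn as nn

class HookBwd():
    # A backward hook receives the module and tuples of gradients
    # with respect to its inputs and its output
    def hook_func(self, m, grad_input, grad_output):
        self.stored = grad_output[0].detach().clone()

layer = nn.Conv2d(3, 8, kernel_size=3)
hook_bwd = HookBwd()
handle = layer.register_full_backward_hook(hook_bwd.hook_func)

out = layer(torch.randn(1, 3, 32, 32, requires_grad=True))
out.sum().backward()          # triggers the backward hook
print(hook_bwd.stored.shape)  # gradients w.r.t. the layer's output
handle.remove()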