layers
Provides essential functions for building and modifying model architectures.

Model Layers
This module contains many layer classes that we might be interested in using in our models. These layers complement the default PyTorch layers which we can also use as predefined layers.

Custom fastai modules
class AdaptiveConcatPool2d
AdaptiveConcatPool2d(sz:Optional[int]=None) :: PrePostInitMeta :: Module
Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`. The output will be `2*sz`, or just 2 if `sz` is None.
The AdaptiveConcatPool2d object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called adaptive because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
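The idea can be sketched in plain PyTorch. The `ConcatPool2d` below is an illustrative re-implementation, not fastai's exact code:

```python
import torch
import torch.nn as nn

class ConcatPool2d(nn.Module):
    "Illustrative sketch: concatenate adaptive max and adaptive average pooling."
    def __init__(self, sz=1):
        super().__init__()
        self.mp = nn.AdaptiveMaxPool2d(sz)
        self.ap = nn.AdaptiveAvgPool2d(sz)

    def forward(self, x):
        # The channel dimension doubles: [bs, c, h, w] -> [bs, 2*c, sz, sz]
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

x = torch.randn(4, 16, 8, 8)
print(ConcatPool2d(1)(x).shape)  # torch.Size([4, 32, 1, 1])
```

Whatever the input's spatial size, the output spatial size is fixed by `sz`, which is what makes the pooling "adaptive".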
Let's try training with Adaptive Max Pooling first, then with Adaptive Average Pooling and finally with the concatenation of them both, to see how they fare in performance.

We will first define a simple_cnn using Adaptive Max Pooling by changing the source code a bit.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
                   strides:Collection[int]=None) -> nn.Sequential:
    "CNN with `conv_layer` layers defined by `actns`, `kernel_szs` and `strides`"
    nl = len(actns)-1
    kernel_szs = ifnone(kernel_szs, [3]*nl)
    strides    = ifnone(strides,    [2]*nl)
    layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
              for i in range(len(strides))]
    layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
    return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
Total time: 00:02
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.102758 | 0.064676 | 0.984298 |
Now let's try with Adaptive Average Pooling.
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
                   strides:Collection[int]=None) -> nn.Sequential:
    "CNN with `conv_layer` layers defined by `actns`, `kernel_szs` and `strides`"
    nl = len(actns)-1
    kernel_szs = ifnone(kernel_szs, [3]*nl)
    strides    = ifnone(strides,    [2]*nl)
    layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
              for i in range(len(strides))]
    layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
    return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
Total time: 00:02
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.241485 | 0.201116 | 0.973994 |
Finally we will try AdaptiveConcatPool2d, the concatenation of them both. We will see that, in fact, it increases our accuracy and decreases our loss considerably!
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
               strides:Collection[int]=None) -> nn.Sequential:
    "CNN with `conv_layer` layers defined by `actns`, `kernel_szs` and `strides`"
    nl = len(actns)-1
    kernel_szs = ifnone(kernel_szs, [3]*nl)
    strides    = ifnone(strides,    [2]*nl)
    layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
              for i in range(len(strides))]
    layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
    return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
Total time: 00:02
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.203015 | 0.122094 | 0.988224 |
class Lambda
Lambda(func:LambdaFunc) :: PrePostInitMeta :: Module
Create a layer that simply calls `func` with `x`.
This is very useful for using functions as layers in our networks inside a Sequential object. So, for example, say we want to apply a log_softmax loss and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
Lambda(lambda x: x.view(x.size(0),-1))
Let’s see an example of how the shape of our output can change when we add this layer.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
)
model.cuda()

for xb, yb in data.train_dl:
    out = model(xb)
    print(out.size())
    break
torch.Size([64, 10, 1, 1])
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()

for xb, yb in data.train_dl:
    out = model(xb)
    print(out.size())
    break
torch.Size([64, 10])
class Flatten
Flatten(full:bool=False) :: PrePostInitMeta :: Module
Flatten `x` to a single dimension, often used at the end of a model. `full` for rank-1 tensor.
The function we built above is actually implemented in our library as `Flatten`. We can see that it returns the same size when we run it.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    Flatten(),
)
model.cuda()

for xb, yb in data.train_dl:
    out = model(xb)
    print(out.size())
    break
torch.Size([64, 10])
PoolFlatten
PoolFlatten() → Sequential
Apply `nn.AdaptiveAvgPool2d` to `x` and then flatten the result.
We can combine these two final layers (`AdaptiveAvgPool2d` and `Flatten`) by using `PoolFlatten`.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    PoolFlatten()
)
model.cuda()

for xb, yb in data.train_dl:
    out = model(xb)
    print(out.size())
    break
torch.Size([64, 10])
Another use of the Lambda function is to resize batches with `ResizeBatch` when we have a layer that expects a different input than what comes from the previous one.
class ResizeBatch
ResizeBatch(*size:int) :: PrePostInitMeta :: Module

Reshape `x` to `size`, keeping batch dim the same size.
a = torch.tensor([[1., -1.], [1., -1.]])[None]
print(a)
tensor([[[ 1., -1.],
[ 1., -1.]]])
out = ResizeBatch(4)
print(out(a))
tensor([[ 1., -1., 1., -1.]])
class Debugger
Debugger() :: PrePostInitMeta :: Module

A module to debug inside a model.
The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
For instance, if you run the following:
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    Debugger(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
model.cuda()

learner = Learner(data, model, metrics=[accuracy])
learner.fit(5)
… you’ll see something like this:
/home/ubuntu/fastai/fastai/layers.py(74)forward()
72 def forward(self,x:Tensor) -> Tensor:
73 set_trace()
---> 74 return x
75
76 class StdUpsample(nn.Module):
ipdb>
class PixelShuffle_ICNR
PixelShuffle_ICNR(ni:int, nf:int=None, scale:int=2, blur:bool=False, norm_type=<NormType.Weight: 3>, leaky:float=None) :: PrePostInitMeta :: Module

Upsample by `scale` from `ni` filters to `nf` (default `ni`), using `nn.PixelShuffle`, `icnr` init, and `weight_norm`.
class MergeLayer
MergeLayer(dense:bool=False) :: PrePostInitMeta :: Module

Merge a shortcut with the result of the module by adding them or concatenating them if `dense=True`.
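The combine step itself is simple; the hypothetical `merge` helper below sketches the two behaviors with plain tensors (in the library, MergeLayer gets its shortcut from the stored input of a SequentialEx):

```python
import torch

def merge(out, shortcut, dense=False):
    # Illustrative MergeLayer-style combine of a module's output with its shortcut:
    # dense=False adds them (ResNet-style residual connection),
    # dense=True concatenates on the channel dim (DenseNet-style).
    return torch.cat([out, shortcut], dim=1) if dense else out + shortcut

out, shortcut = torch.randn(2, 8, 4, 4), torch.randn(2, 8, 4, 4)
print(merge(out, shortcut).shape)              # torch.Size([2, 8, 4, 4])
print(merge(out, shortcut, dense=True).shape)  # torch.Size([2, 16, 4, 4])
```

Note that adding keeps the number of channels fixed, while concatenating doubles it, so downstream layers must expect different input sizes in the two cases.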
class PartialLayer
PartialLayer(func, **kwargs) :: PrePostInitMeta :: Module

Layer that applies `partial(func, **kwargs)`.
class SigmoidRange
SigmoidRange(low, high) :: PrePostInitMeta :: Module

Sigmoid module with range `(low, high)`.
class SequentialEx
SequentialEx(*layers) :: PrePostInitMeta :: Module

Like `nn.Sequential`, but with ModuleList semantics, and can access module input.
class SelfAttention
SelfAttention(n_channels:int) :: PrePostInitMeta :: Module

Tests found for SelfAttention:
pytest -sv tests/test_torch_core.py::test_keep_parameter

Self attention layer for nd.
class BatchNorm1dFlat
BatchNorm1dFlat(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) :: BatchNorm1d

`nn.BatchNorm1d`, but first flattens leading dimensions.
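"Flattens leading dimensions" means that an input like `(batch, sequence, features)` is reshaped to `(batch*sequence, features)` so `nn.BatchNorm1d` can normalize over the last dimension, then restored. A hedged sketch of the idea (`BN1dFlat` is illustrative, not fastai's exact code):

```python
import torch
import torch.nn as nn

class BN1dFlat(nn.BatchNorm1d):
    "Illustrative sketch: BatchNorm1d over the last dim, flattening leading dims first."
    def forward(self, x):
        if x.dim() == 2:
            return super().forward(x)
        *leading, f = x.shape
        # Flatten (d1, ..., dn, f) -> (d1*...*dn, f), normalize, restore shape
        return super().forward(x.reshape(-1, f)).view(*leading, f)

x = torch.randn(4, 10, 32)    # e.g. (batch, sequence, features)
print(BN1dFlat(32)(x).shape)  # torch.Size([4, 10, 32])
```

This is handy for sequence models where each timestep's feature vector should be batch-normalized with the same statistics.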
Loss functions
class FlattenedLoss
FlattenedLoss(func, *args, axis:int=-1, floatify:bool=False, is_2d:bool=True, **kwargs)

Same as `func`, but flattens input and target.
Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it:

- puts `axis` first in output and target with a transpose
- casts the target to `float` if `floatify=True`
- squeezes the output to two dimensions if `is_2d`, otherwise to one dimension, and squeezes the target to one dimension
- applies the instance of `func`
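The steps above can be sketched for the cross-entropy case. The hypothetical `flat_cross_entropy` below shows why flattening matters for e.g. segmentation, where the output is `(bs, n_classes, h, w)` rather than the `(bs, n_classes)` that the loss expects:

```python
import torch
import torch.nn.functional as F

def flat_cross_entropy(output, target):
    # Illustrative CrossEntropyFlat-style wrapper: move the class axis last,
    # flatten predictions to (N, n_classes) and targets to (N,), then apply the loss.
    n_classes = output.size(1)
    output = output.transpose(1, -1).contiguous().view(-1, n_classes)
    target = target.view(-1)
    return F.cross_entropy(output, target)

out = torch.randn(2, 3, 4, 4)             # (bs, n_classes, h, w)
tgt = torch.randint(0, 3, (2, 4, 4))      # (bs, h, w) of class indices
print(flat_cross_entropy(out, tgt).shape) # torch.Size([]) - a scalar loss
```

Each pixel becomes one classification example, so the per-pixel losses are averaged into a single scalar.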
BCEFlat
BCEFlat(*args, axis:int=-1, floatify:bool=True, **kwargs)

Same as `nn.BCELoss`, but flattens input and target.
BCEWithLogitsFlat
BCEWithLogitsFlat(*args, axis:int=-1, floatify:bool=True, **kwargs)

Same as `nn.BCEWithLogitsLoss`, but flattens input and target.
CrossEntropyFlat
CrossEntropyFlat(*args, axis:int=-1, **kwargs)

Same as `nn.CrossEntropyLoss`, but flattens input and target.
MSELossFlat
MSELossFlat(*args, axis:int=-1, floatify:bool=True, **kwargs)

Same as `nn.MSELoss`, but flattens input and target.
class NoopLoss
NoopLoss() :: PrePostInitMeta :: Module

Just returns the mean of the `output`.
class WassersteinLoss
WassersteinLoss() :: PrePostInitMeta :: Module

For WGAN.
Helper functions to create modules
bn_drop_lin
bn_drop_lin(n_in:int, n_out:int, bn:bool=True, p:float=0.0, actn:Optional[Module]=None)
The bn_drop_lin function returns a sequence of batch normalization, dropout and a linear layer. This custom layer is usually used at the end of a model.

`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout, and `actn` (optional parameter) adds an activation function at the end.
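A minimal sketch of the layer sequence, here wrapped in an `nn.Sequential` for demonstration (the sketch is illustrative, not fastai's exact code):

```python
import torch.nn as nn

def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0.0, actn=None):
    # Illustrative: batchnorm -> dropout -> linear (-> optional activation)
    layers = []
    if bn: layers.append(nn.BatchNorm1d(n_in))
    if p != 0: layers.append(nn.Dropout(p))
    layers.append(nn.Linear(n_in, n_out))
    if actn is not None: layers.append(actn)
    return nn.Sequential(*layers)

# A typical classification head: 512 features -> 10 classes with 50% dropout
head = bn_drop_lin_sketch(512, 10, p=0.5, actn=nn.ReLU())
print(head)
```

Normalizing and regularizing right before the final linear layer is the pattern fastai uses in its model heads.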
conv2d
conv2d(ni:int, nf:int, ks:int=3, stride:int=1, padding:int=None, bias=False, init:LayerFunc='kaiming_normal_') → Conv2d

Create and initialize `nn.Conv2d` layer. `padding` defaults to `ks//2`.
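The `ks//2` default is the standard "same" padding for odd kernel sizes: at stride 1 the spatial dimensions are preserved. A quick check in plain PyTorch:

```python
import torch
import torch.nn as nn

ks = 3
# padding = ks // 2 = 1 keeps height and width unchanged at stride 1
conv = nn.Conv2d(16, 32, kernel_size=ks, stride=1, padding=ks // 2, bias=False)
x = torch.randn(1, 16, 28, 28)
print(conv(x).shape)  # torch.Size([1, 32, 28, 28]) - spatial size preserved
```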
conv2d_trans
conv2d_trans(ni:int, nf:int, ks:int=2, stride:int=2, padding:int=0, bias=False) → ConvTranspose2d

Create `nn.ConvTranspose2d` layer.
conv_layer
conv_layer(ni:int, nf:int, ks:int=3, stride:int=1, padding:int=None, bias:bool=None, is_1d:bool=False, norm_type:Optional[NormType]=<NormType.Batch: 1>, use_activ:bool=True, leaky:float=None, transpose:bool=False, init:Callable='kaiming_normal_', self_attention:bool=False)
The conv_layer function returns a sequence of nn.Conv2d, BatchNorm and a ReLU or leaky ReLU activation function.

`ni` represents the size of the input, `nf` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` decides if the convolutions have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
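The core conv/batchnorm/activation pattern can be sketched as follows (`conv_layer_sketch` is illustrative and omits the normalization, transpose and self-attention options of the real function):

```python
import torch.nn as nn

def conv_layer_sketch(ni, nf, ks=3, stride=1, leaky=None):
    # Illustrative conv -> batchnorm -> (leaky) ReLU block
    return nn.Sequential(
        nn.Conv2d(ni, nf, ks, stride=stride, padding=ks // 2, bias=False),
        nn.BatchNorm2d(nf),
        nn.LeakyReLU(leaky, inplace=True) if leaky is not None
            else nn.ReLU(inplace=True),
    )

# A downsampling block: 3 input channels -> 16 filters, halving spatial size
block = conv_layer_sketch(3, 16, stride=2)
print(block)
```

Bias is dropped from the convolution because the following batchnorm's learned shift makes it redundant.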
embedding
embedding(ni:int, nf:int) → Module

Create an embedding layer with input size `ni` and output size `nf`.
relu
relu(inplace:bool=False, leaky:float=None)

Return a relu activation, maybe `leaky` and `inplace`.
res_block
res_block(nf, dense:bool=False, norm_type:Optional[NormType]=<NormType.Batch: 1>, bottle:bool=False, **conv_kwargs)

Resnet block of `nf` features. `conv_kwargs` are passed to `conv_layer`.
sigmoid_range
sigmoid_range(x, low, high)

Sigmoid function with range `(low, high)`.
simple_cnn
simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None, strides:Collection[int]=None, bn=False) → Sequential

CNN with `conv_layer` defined by `actns`, `kernel_szs` and `strides`, plus batchnorm if `bn`.
Initialization of modules
batchnorm_2d
batchnorm_2d(nf:int, norm_type:NormType=<NormType.Batch: 1>)

A batchnorm2d layer with `nf` features initialized depending on `norm_type`.
icnr
icnr(x, scale=2, init='kaiming_normal_')

ICNR init of `x`, with `scale` and `init` function.
trunc_normal_
trunc_normal_(x:Tensor, mean:float=0.0, std:float=1.0) → Tensor

Truncated normal initialization.
NormType
Enum = [Batch, BatchZero, Weight, Spectral]

An enumeration.
©2021 fast.ai. All rights reserved.
Site last generated: Jan 5, 2021