callbacks.fp16
Implementation of mixed precision training
Mixed precision training
This module allows the forward and backward passes of your neural net to be done in fp16 (also known as half precision). This is particularly important if you have an NVIDIA GPU with tensor cores, since it can speed up your training by 200% or more.
Overview
To train your model in mixed precision you just have to call Learner.to_fp16, which converts the model and modifies the existing Learner to add MixedPrecision.
to_fp16
`to_fp16(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None, flat_master:bool=False, max_scale:float=16777216, loss_fp32:bool=True) → Learner`
Put learn in FP16 precision mode.
For example:

```python
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy]).to_fp16()
learn.fit_one_cycle(1)
```
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
0 | 0.126117 | 0.117945 | 0.956820 | 00:03 |
Warning: Known issue: Learner.to_fp16 doesn’t work when training GANs.
Details about mixed precision training are available in NVIDIA’s documentation. We will just summarize the basics here.
The only parameter you may want to tweak is loss_scale. This is used to scale the loss up, so that it doesn’t underflow fp16, leading to loss of accuracy (this is reversed for the final gradient calculation after converting back to fp32). Generally the default of 512 works well, however. You can also enable or disable the flattening of the master parameter tensor with flat_master=True, however in our testing the difference is negligible.
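If you do want to pin the scale yourself, the arguments from the signature above can be passed straight to to_fp16. A minimal sketch reusing data and simple_cnn from the earlier example (the fixed scale of 512 is an illustration, not a recommendation):

```python
# Fixed loss scaling with a flattened master tensor; dynamic scaling is turned off.
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy]).to_fp16(
    loss_scale=512, dynamic=False, flat_master=True)
learn.fit_one_cycle(1)
```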
Internally, the callback ensures that all model parameters (except batchnorm layers, which require fp32) are converted to fp16, and an fp32 copy is also saved. The fp32 copy (the master parameters) is what is used for actually updating with the optimizer; the fp16 parameters are used for calculating gradients. This helps avoid underflow with small learning rates.
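For intuition, the master-copy bookkeeping looks roughly like the following when written out in plain PyTorch. This is a simplified sketch of the pattern, not the callback's actual code: it assumes a CUDA GPU, uses a made-up layer, batch, and scale, and ignores batchnorm handling, gradient clipping, and dynamic loss scaling.

```python
import torch
import torch.nn.functional as F

loss_scale = 512.
model = torch.nn.Linear(10, 2).cuda().half()        # fp16 weights: used for forward and backward
master = [p.detach().clone().float() for p in model.parameters()]
for p in master: p.requires_grad_(True)             # fp32 master copies: used for the update
opt = torch.optim.SGD(master, lr=1e-2)              # the optimizer only ever sees the masters

x = torch.randn(16, 10).cuda().half()
y = torch.randint(0, 2, (16,)).cuda()

out = model(x).float()                              # on_loss_begin: compute the loss in fp32
loss = F.cross_entropy(out, y)
(loss * loss_scale).backward()                      # on_backward_begin: scale the loss up

for mp, p in zip(master, model.parameters()):       # on_backward_end: fp32 grads, scale undone
    mp.grad = p.grad.detach().float() / loss_scale

opt.step()                                          # the update happens on the fp32 masters
for mp, p in zip(master, model.parameters()):       # on_step_end: copy back and zero the grads
    p.data.copy_(mp.data)
    p.grad.zero_()
```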
All of this is implemented by the following Callback.
class MixedPrecision
`MixedPrecision(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None, flat_master:bool=False, max_scale:float=16777216, loss_fp32:bool=True) :: LearnerCallback`
A LearnerCallback that implements the mixed precision training described above for a Learner.
Callback methods
You don’t have to call the following functions yourself - they’re called by fastai’s Callback system automatically to enable the class’s functionality.
on_backward_begin
`on_backward_begin(last_loss:Rank0Tensor, **kwargs:Any) → Rank0Tensor`
Scale gradients up by self.loss_scale to prevent underflow.
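The point of the scaling is easiest to see on a single number: fp16 cannot represent values below about 6e-8, so sufficiently small gradient values flush to zero. A small illustration (the values are arbitrary):

```python
import torch

g = torch.tensor(1e-8)      # a gradient-sized value, stored in fp32
print(g.half())             # tensor(0., dtype=torch.float16): it underflowed to zero
print((g * 512).half())     # roughly 5.1e-06: scaled up first, it survives in fp16
```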
on_backward_end
`on_backward_end(**kwargs:Any)`
Convert the gradients back to FP32 and divide them by the scale.
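In the sketch above, this is the step that moves each scaled fp16 gradient onto its fp32 master copy and undoes the scaling. In isolation, with made-up values rather than the callback's code:

```python
import torch

loss_scale = 512.
fp16_grad = torch.tensor([0.5, 1.0, 2.0], dtype=torch.float16)  # scaled gradients from backward()
master_grad = fp16_grad.float() / loss_scale                    # back to fp32, scaling undone
print(master_grad.dtype, master_grad)  # torch.float32 tensor([0.0010, 0.0020, 0.0039])
```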
on_loss_begin
`on_loss_begin(last_output:Tensor, **kwargs:Any) → Tensor`
Convert half precision output to FP32 to avoid reduction overflow.
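This is the other side of the coin from underflow: fp16 tops out at 65504, so a reduction over a large half precision output can overflow to inf. A small illustration:

```python
import torch

big = torch.ones(100_000, dtype=torch.float16)
print(big.sum())          # inf: 100000 is not representable in fp16 (max 65504)
print(big.float().sum())  # tensor(100000.): converting to fp32 first avoids the overflow
```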
on_step_end
`on_step_end(**kwargs:Any)`
Update the params from master to model and zero grad.
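In the sketch above, this is the copy-back step: copy_ handles the fp32 → fp16 cast, rounding the freshly updated master weight into the model weight. With made-up values rather than the callback's code:

```python
import torch

master_p = torch.tensor([0.123456789], dtype=torch.float32)  # updated by the optimizer in fp32
model_p = torch.zeros(1, dtype=torch.float16)                # fp16 weight used in the forward pass
model_p.data.copy_(master_p.data)                            # copy_ casts fp32 -> fp16
print(model_p)   # tensor([0.1235], dtype=torch.float16): the weight rounded to half precision
```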
on_train_begin
`on_train_begin(**kwargs:Any)`
Prepare the master model.
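Roughly, the preparation amounts to putting the model in half precision, keeping batchnorm layers in fp32, and building the fp32 master copies that the optimizer will update. A simplified sketch with made-up layers, not the callback's actual code:

```python
import torch

# Put the model in fp16, but keep batchnorm in fp32: its running statistics
# are not stable in half precision.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8)).half()
for m in model.modules():
    if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
        m.float()

# Build fp32 "master" copies of the parameters; the optimizer will update these.
master = [p.detach().clone().float().requires_grad_(True) for p in model.parameters()]
opt = torch.optim.SGD(master, lr=1e-2)
```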