Metrics
Definition of the metrics that can be used in training models
Core metric
This is where the function that converts scikit-learn metrics to fastai metrics is defined. You should skip this section unless you want to know all about the internals of fastai.
flatten_check [source]
flatten_check(inp, targ)
Check that inp and targ have the same number of elements and flatten them.
x1,x2 = torch.randn(5,4),torch.randn(20)
x1,x2 = flatten_check(x1,x2)
test_eq(x1.shape, [20])
test_eq(x2.shape, [20])
x1,x2 = torch.randn(5,4),torch.randn(21)
test_fail(lambda: flatten_check(x1,x2))
class AccumMetric [source]
AccumMetric(func, dim_argmax=None, activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True, **kwargs) :: Metric
Stores predictions and targets on CPU in accumulate to perform final calculations with func.
func is only applied to the accumulated predictions/targets when the value attribute is asked for (so at the end of a validation/training phase, in use with Learner and its Recorder). The signature of func should be inp,targ (where inp are the predictions of the model and targ the corresponding labels).
For single-label classification problems, predictions need to be transformed with a softmax then an argmax before being compared to the targets. Since a softmax doesn't change the order of the numbers, we can just apply the argmax. Pass along dim_argmax to have this done by AccumMetric (usually -1 will work pretty well). If your metric needs the probabilities and not the predictions, use activation=ActivationType.Softmax.
For classification problems with multiple labels, or if your targets are one-hot encoded, predictions may need to pass through a sigmoid (if it wasn't included in your model) then be compared to a given threshold (to decide between 0 and 1). This is done by AccumMetric if you pass activation=ActivationType.Sigmoid and/or a value for thresh.
If you want to use a metric function from sklearn.metrics, you will need to convert predictions and labels to numpy arrays with to_np=True. Also, scikit-learn metrics adopt the convention y_true, y_preds, which is the opposite of ours, so you will need to pass invert_arg=True to make AccumMetric do the inversion for you.
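For example, an sklearn metric can be wrapped by hand with these flags together; a minimal sketch (skm_to_fastai, defined below, packages this up for you):
import sklearn.metrics as skm
# argmax over the last dimension, convert to numpy arrays, and swap the argument order
bal_acc = AccumMetric(skm.balanced_accuracy_score, dim_argmax=-1, to_np=True, invert_arg=True)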
@delegates()
class TstLearner(Learner):
    def __init__(self, dls=None, model=None, **kwargs): self.pred,self.xb,self.yb = None,None,None

def _l2_mean(x, y): return torch.sqrt((x.float()-y.float()).pow(2).mean())

# Go through a fake cycle with various batch sizes and compute the value of met
def compute_val(met, x1, x2):
    met.reset()
    vals = [0,6,15,20]
    learn = TstLearner()
    for i in range(3):
        learn.pred,learn.yb = x1[vals[i]:vals[i+1]],(x2[vals[i]:vals[i+1]],)
        met.accumulate(learn)
    return met.value
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(_l2_mean)
test_close(compute_val(tst, x1, x2), _l2_mean(x1, x2))
test_eq(torch.cat(tst.preds), x1.view(-1))
test_eq(torch.cat(tst.targs), x2.view(-1))
#test argmax
x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,))
tst = AccumMetric(_l2_mean, dim_argmax=-1)
test_close(compute_val(tst, x1, x2), _l2_mean(x1.argmax(dim=-1), x2))
#test thresh
x1,x2 = torch.randn(20,5),torch.randint(0, 2, (20,5)).bool()
tst = AccumMetric(_l2_mean, thresh=0.5)
test_close(compute_val(tst, x1, x2), _l2_mean((x1 >= 0.5), x2))
#test sigmoid
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(_l2_mean, activation=ActivationType.Sigmoid)
test_close(compute_val(tst, x1, x2), _l2_mean(torch.sigmoid(x1), x2))
#test to_np
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(lambda x,y: isinstance(x, np.ndarray) and isinstance(y, np.ndarray), to_np=True)
assert compute_val(tst, x1, x2)
#test invert_arg
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean()))
test_close(compute_val(tst, x1, x2), torch.sqrt(x1.pow(2).mean()))
tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean()), invert_arg=True)
test_close(compute_val(tst, x1, x2), torch.sqrt(x2.pow(2).mean()))
skm_to_fastai [source]
skm_to_fastai(func, is_class=True, thresh=None, axis=-1, activation=None, **kwargs)
Convert func from sklearn.metrics to a fastai metric
This is the quickest way to use a scikit-learn metric in a fastai training loop. is_class indicates if you are in a classification problem or not. In this case:
- leaving thresh to None indicates it's a single-label classification problem and predictions will pass through an argmax over axis before being compared to the targets
- setting a value for thresh indicates it's a multi-label classification problem and predictions will pass through a sigmoid (which can be deactivated with activation=ActivationType.No) and be compared to thresh before being compared to the targets
If is_class=False, it indicates you are in a regression problem, and predictions are compared to the targets without being modified. In all cases, kwargs are extra keyword arguments passed to func.
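A metric built this way can be passed straight to a Learner; a minimal sketch, where dls and the learner constructor stand in for your own pipeline:
import sklearn.metrics as skm
bal_acc = skm_to_fastai(skm.balanced_accuracy_score)
# learn = cnn_learner(dls, resnet18, metrics=[accuracy, bal_acc])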
tst_single = skm_to_fastai(skm.precision_score)
x1,x2 = torch.randn(20,2),torch.randint(0, 2, (20,))
test_close(compute_val(tst_single, x1, x2), skm.precision_score(x2, x1.argmax(dim=-1)))
tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2)
x1,x2 = torch.randn(20),torch.randint(0, 2, (20,))
test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, torch.sigmoid(x1) >= 0.2))
tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2, activation=ActivationType.No)
x1,x2 = torch.randn(20),torch.randint(0, 2, (20,))
test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, x1 >= 0.2))
tst_reg = skm_to_fastai(skm.r2_score, is_class=False)
x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_close(compute_val(tst_reg, x1, x2), skm.r2_score(x2.view(-1), x1.view(-1)))
test_close(tst_reg(x1, x2), skm.r2_score(x2.view(-1), x1.view(-1)))
optim_metric [source]
optim_metric(f, argname, bounds, tol=0.01, do_neg=True, get_x=False)
Replace metric f
with a version that optimizes argument argname
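For instance, to search for the threshold that maximizes accuracy_multi (defined below in the multi-label section); a sketch, assuming f accepts argname as a keyword argument:
x = torch.randn(20,5)
y = (torch.sigmoid(x) >= 0.5).byte()
opt_acc = optim_metric(accuracy_multi, 'thresh', bounds=(0.1, 0.9), get_x=True)
val, best_thresh = opt_acc(x, y)  # best value found, and the thresh that produced it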
Single-label classification
Warning: All functions defined in this section are intended for single-label classification and targets that are not one-hot encoded. For multi-label problems or one-hot encoded targets, use the version suffixed with multi.
Warning: Many metrics in fastai are thin wrappers around sklearn functionality. However, sklearn metrics can handle Python lists of strings, amongst other things, whereas fastai metrics work with PyTorch, and thus require tensors. The arguments that are passed to metrics are after all transformations, such as categories being converted to indices, have occurred. This means that when you pass a label to a metric, for instance, you must pass indices, not strings. These can be converted with vocab.map_obj.
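As an illustration, hedged since the exact vocab attributes depend on your DataLoaders ('dog' and the o2i mapping below are assumptions):
# pos_label must be a class index, not a string
# pos_idx = learn.dls.vocab.o2i['dog']  # assumption: the vocab exposes an o2i mapping
# f1 = F1Score(pos_label=pos_idx)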
accuracy [source]
accuracy(inp, targ, axis=-1)
Compute accuracy with targ when pred is bs * n_classes
def change_targ(targ, n, c):
    idx = torch.randperm(len(targ))[:n]
    res = targ.clone()
    for i in idx: res[i] = (res[i]+random.randint(1,c-1))%c
    return res
x = torch.randn(4,5)
y = x.argmax(dim=1)
test_eq(accuracy(x,y), 1)
y1 = change_targ(y, 2, 5)
test_eq(accuracy(x,y1), 0.5)
test_eq(accuracy(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.75)
error_rate [source]
error_rate(inp, targ, axis=-1)
1 - accuracy
x = torch.randn(4,5)
y = x.argmax(dim=1)
test_eq(error_rate(x,y), 0)
y1 = change_targ(y, 2, 5)
test_eq(error_rate(x,y1), 0.5)
test_eq(error_rate(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.25)
top_k_accuracy [source]
top_k_accuracy(inp, targ, k=5, axis=-1)
Computes the Top-k accuracy (targ is in the top k predictions of inp)
x = torch.randn(6,5)
y = torch.arange(0,6)
test_eq(top_k_accuracy(x[:5],y[:5]), 1)
test_eq(top_k_accuracy(x, y), 5/6)
APScoreBinary [source]
APScoreBinary(axis=-1, average='macro', pos_label=1, sample_weight=None)
Average Precision for single-label binary classification problems
See the scikit-learn documentation for more details.
BalancedAccuracy [source]
BalancedAccuracy(axis=-1, sample_weight=None, adjusted=False)
Balanced Accuracy for single-label binary classification problems
See the scikit-learn documentation for more details.
BrierScore [source]
BrierScore(axis=-1, sample_weight=None, pos_label=None)
Brier score for single-label classification problems
See the scikit-learn documentation for more details.
CohenKappa [source]
CohenKappa(axis=-1, labels=None, weights=None, sample_weight=None)
Cohen kappa for single-label classification problems
See the scikit-learn documentation for more details.
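For ordinal targets, a weighted kappa is common; for example, quadratic weights (a standard option of sklearn's cohen_kappa_score) can be requested through the weights argument:
qwk = CohenKappa(weights='quadratic')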
F1Score [source]
F1Score(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)
F1 score for single-label classification problems
See the scikit-learn documentation for more details.
FBeta [source]
FBeta(beta, axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)
FBeta score with beta for single-label classification problems
See the scikit-learn documentation for more details.
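For example, an F2 score, which weights recall more heavily than precision:
f2 = FBeta(beta=2)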
HammingLoss [source]
HammingLoss(axis=-1, sample_weight=None)
Hamming loss for single-label classification problems
See the scikit-learn documentation for more details.
Jaccard [source]
Jaccard(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)
Jaccard score for single-label classification problems
See the scikit-learn documentation for more details.
Precision [source]
Precision(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)
Precision for single-label classification problems
See the scikit-learn documentation for more details.
Recall [source]
Recall(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)
Recall for single-label classification problems
See the scikit-learn documentation for more details.
RocAuc [source]
RocAuc(axis=-1, average='macro', sample_weight=None, max_fpr=None, multi_class='ovr')
Area Under the Receiver Operating Characteristic Curve for single-label multiclass classification problems
See the scikit-learn documentation for more details.
RocAucBinary [source]
RocAucBinary(axis=-1, average='macro', sample_weight=None, max_fpr=None, multi_class='raise')
Area Under the Receiver Operating Characteristic Curve for single-label binary classification problems
See the scikit-learn documentation for more details.
MatthewsCorrCoef [source]
MatthewsCorrCoef(sample_weight=None, **kwargs)
Matthews correlation coefficient for single-label classification problems
See the scikit-learn documentation for more details.
class Perplexity [source]
Perplexity() :: AvgLoss
Perplexity (exponential of cross-entropy loss) for Language Models
x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,))
tst = perplexity
tst.reset()
vals = [0,6,15,20]
learn = TstLearner()
for i in range(3):
    learn.yb = (x2[vals[i]:vals[i+1]],)
    learn.loss = F.cross_entropy(x1[vals[i]:vals[i+1]], x2[vals[i]:vals[i+1]])
    tst.accumulate(learn)
test_close(tst.value, torch.exp(F.cross_entropy(x1,x2)))
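In training, perplexity is typically tracked alongside accuracy on a language model; a sketch, with dls standing in for your own text DataLoaders:
# learn = language_model_learner(dls, AWD_LSTM, metrics=[accuracy, Perplexity()])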
Multi-label classification
accuracy_multi [source]
accuracy_multi(inp, targ, thresh=0.5, sigmoid=True)
Compute accuracy when inp and targ are the same size.
def change_1h_targ(targ, n):
    idx = torch.randperm(targ.numel())[:n]
    res = targ.clone().view(-1)
    for i in idx: res[i] = 1-res[i]
    return res.view(targ.shape)
x = torch.randn(4,5)
y = (torch.sigmoid(x) >= 0.5).byte()
test_eq(accuracy_multi(x,y), 1)
test_eq(accuracy_multi(x,1-y), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1), 0.75)
#Different thresh
y = (torch.sigmoid(x) >= 0.2).byte()
test_eq(accuracy_multi(x,y, thresh=0.2), 1)
test_eq(accuracy_multi(x,1-y, thresh=0.2), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1, thresh=0.2), 0.75)
#No sigmoid
y = (x >= 0.5).byte()
test_eq(accuracy_multi(x,y, sigmoid=False), 1)
test_eq(accuracy_multi(x,1-y, sigmoid=False), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1, sigmoid=False), 0.75)
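To train with a non-default threshold, bind it with partial; a sketch, where dls stands in for your own multi-label DataLoaders:
# from functools import partial
# learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.9))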
APScoreMulti [source]
APScoreMulti(sigmoid=True, average='macro', pos_label=1, sample_weight=None)
Average Precision for multi-label classification problems
See the scikit-learn documentation for more details.
BrierScoreMulti [source]
BrierScoreMulti(thresh=0.5, sigmoid=True, sample_weight=None, pos_label=None)
Brier score for multi-label classification problems
See the scikit-learn documentation for more details.
F1ScoreMulti [source]
F1ScoreMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)
F1 score for multi-label classification problems
See the scikit-learn documentation for more details.
FBetaMulti [source]
FBetaMulti(beta, thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)
FBeta score with beta for multi-label classification problems
See the scikit-learn documentation for more details.
HammingLossMulti [source]
HammingLossMulti(thresh=0.5, sigmoid=True, labels=None, sample_weight=None)
Hamming loss for multi-label classification problems
See the scikit-learn documentation for more details.
JaccardMulti [source]
JaccardMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)
Jaccard score for multi-label classification problems
See the scikit-learn documentation for more details.
MatthewsCorrCoefMulti [source]
MatthewsCorrCoefMulti(thresh=0.5, sigmoid=True, sample_weight=None)
Matthews correlation coefficient for multi-label classification problems
See the scikit-learn documentation for more details.
PrecisionMulti [source]
PrecisionMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)
Precision for multi-label classification problems
See the scikit-learn documentation for more details.
RecallMulti [source]
RecallMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)
Recall for multi-label classification problems
See the scikit-learn documentation for more details.
RocAucMulti [source]
RocAucMulti(sigmoid=True, average='macro', sample_weight=None, max_fpr=None)
Area Under the Receiver Operating Characteristic Curve for multi-label binary classification problems
roc_auc_metric = RocAucMulti(sigmoid=False)
x,y = torch.tensor([np.arange(start=0, stop=0.2, step=0.04)]*20), torch.tensor([0, 0, 1, 1]).repeat(5)
assert compute_val(roc_auc_metric, x, y) == 0.5
See the scikit-learn documentation for more details.
Regression
mse [source]
mse(inp, targ)
Mean squared error between inp and targ.
x1,x2 = torch.randn(4,5),torch.randn(4,5)
test_close(mse(x1,x2), (x1-x2).pow(2).mean())
rmse [source]
rmse(preds, targs)
Root mean squared error
x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_eq(compute_val(rmse, x1, x2), torch.sqrt(F.mse_loss(x1,x2)))
mae [source]
mae(inp, targ)
Mean absolute error between inp and targ.
x1,x2 = torch.randn(4,5),torch.randn(4,5)
test_eq(mae(x1,x2), torch.abs(x1-x2).mean())
msle [source]
msle(inp, targ)
Mean squared logarithmic error between inp and targ.
x1,x2 = torch.randn(4,5),torch.randn(4,5)
x1,x2 = torch.relu(x1),torch.relu(x2)
test_close(msle(x1,x2), (torch.log(x1+1)-torch.log(x2+1)).pow(2).mean())
exp_rmspe [source]
exp_rmspe(preds, targs)
Root mean square percentage error of the exponential of predictions and targets
x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_eq(compute_val(exp_rmspe, x1, x2), torch.sqrt((((torch.exp(x2) - torch.exp(x1))/torch.exp(x2))**2).mean()))
ExplainedVariance [source]
ExplainedVariance(sample_weight=None)
Explained variance between predictions and targets
See the scikit-learn documentation for more details.
R2Score [source]
R2Score(sample_weight=None)
R2 score between predictions and targets
See the scikit-learn documentation for more details.
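Several of these can be tracked on one Learner; a sketch, with dls standing in for your own regression DataLoaders:
# learn = tabular_learner(dls, metrics=[rmse, mae, R2Score()])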
PearsonCorrCoef [source]
PearsonCorrCoef(dim_argmax=None, activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True)
Pearson correlation coefficient for regression problems
See the scipy documentation for more details.
x = torch.randint(-999, 999,(20,))
y = torch.randint(-999, 999,(20,))
test_eq(compute_val(PearsonCorrCoef(), x, y), scs.pearsonr(x.view(-1), y.view(-1))[0])
SpearmanCorrCoef [source]
SpearmanCorrCoef(dim_argmax=None, axis=0, nan_policy='propagate', activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True)
Spearman correlation coefficient for regression problems
See the scipy documentation for more details.
x = torch.randint(-999, 999,(20,))
y = torch.randint(-999, 999,(20,))
test_eq(compute_val(SpearmanCorrCoef(), x, y), scs.spearmanr(x.view(-1), y.view(-1))[0])
Segmentation
foreground_acc [source]
foreground_acc(inp, targ, bkg_idx=0, axis=1)
Computes non-background accuracy for multiclass segmentation
x = torch.randn(4,5,3,3)
y = x.argmax(dim=1)[:,None]
test_eq(foreground_acc(x,y), 1)
y[0] = 0 #the 0s are ignored so we get the same value
test_eq(foreground_acc(x,y), 1)
class Dice [source]
Dice(axis=1) :: Metric
Dice coefficient metric for binary target in segmentation
x1 = torch.randn(20,2,3,3)
x2 = torch.randint(0, 2, (20, 3, 3))
pred = x1.argmax(1)
inter = (pred*x2).float().sum().item()
union = (pred+x2).float().sum().item()
test_eq(compute_val(Dice(), x1, x2), 2*inter/union)
class DiceMulti [source]
DiceMulti(axis=1) :: Metric
Averaged Dice metric (Macro F1) for multiclass target in segmentation
The DiceMulti method implements the “Averaged F1: arithmetic mean over harmonic means” described in this publication: https://arxiv.org/pdf/1911.03347.pdf
x1a = torch.ones(20,1,1,1)
x1b = torch.clone(x1a)*0.5
x1c = torch.clone(x1a)*0.3
x1 = torch.cat((x1a,x1b,x1c),dim=1) # Prediction: 20xClass0
x2 = torch.zeros(20,1,1) # Target: 20xClass0
test_eq(compute_val(DiceMulti(), x1, x2), 1.)
x2 = torch.ones(20,1,1) # Target: 20xClass1
test_eq(compute_val(DiceMulti(), x1, x2), 0.)
x2a = torch.zeros(10,1,1)
x2b = torch.ones(5,1,1)
x2c = torch.ones(5,1,1) * 2
x2 = torch.cat((x2a,x2b,x2c),dim=0) # Target: 10xClass0, 5xClass1, 5xClass2
dice1 = (2*10)/(2*10+10) # Dice: 2*TP/(2*TP+FP+FN)
dice2 = 0
dice3 = 0
test_eq(compute_val(DiceMulti(), x1, x2), (dice1+dice2+dice3)/3)
class JaccardCoeff [source]
JaccardCoeff(axis=1) :: Dice
Implementation of the Jaccard coefficient that is lighter in RAM
x1 = torch.randn(20,2,3,3)
x2 = torch.randint(0, 2, (20, 3, 3))
pred = x1.argmax(1)
inter = (pred*x2).float().sum().item()
union = (pred+x2).float().sum().item()
test_eq(compute_val(JaccardCoeff(), x1, x2), inter/(union-inter))
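A typical multiclass segmentation Learner might track several of these at once; a sketch, with dls a placeholder for your own segmentation DataLoaders:
# learn = unet_learner(dls, resnet34, metrics=[foreground_acc, DiceMulti()])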
NLP
class CorpusBLEUMetric [source]
CorpusBLEUMetric(vocab_sz=5000, axis=-1) :: Metric
Corpus-level BLEU metric for NLP models
def create_vcb_emb(pred, targ):
    # create vocab "embedding" for predictions
    vcb_sz = max(torch.unique(torch.cat([pred, targ])))+1
    pred_emb = torch.zeros(pred.size()[0], pred.size()[1], vcb_sz)
    for i,v in enumerate(pred):
        pred_emb[i].scatter_(1, v.view(len(v),1), 1)
    return pred_emb

def compute_bleu_val(met, x1, x2):
    met.reset()
    learn = TstLearner()
    learn.training = False
    for i in range(len(x1)):
        learn.pred,learn.yb = x1, (x2,)
        met.accumulate(learn)
    return met.value
targ = torch.tensor([[1,2,3,4,5,6,1,7,8]])
pred = torch.tensor([[1,9,3,4,5,6,1,10,8]])
pred_emb = create_vcb_emb(pred, targ)
test_close(compute_bleu_val(CorpusBLEUMetric(), pred_emb, targ), 0.48549)
targ = torch.tensor([[1,2,3,4,5,6,1,7,8],[1,2,3,4,5,6,1,7,8]])
pred = torch.tensor([[1,9,3,4,5,6,1,10,8],[1,9,3,4,5,6,1,10,8]])
pred_emb = create_vcb_emb(pred, targ)
test_close(compute_bleu_val(CorpusBLEUMetric(), pred_emb, targ), 0.48549)
The BLEU metric was introduced in this article as a way to evaluate the performance of translation models. It is based on the precision of n-grams in your prediction compared to your target. See the fastai NLP course BLEU notebook for a more detailed description of BLEU.
The smoothing used in the precision calculation is the same as in SacreBLEU, which in turn is “method 3” from the Chen & Cherry, 2014 paper.
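To track BLEU during training, size the vocab from your DataLoaders; a sketch, with dls and model as placeholders for your own pipeline:
# learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=CorpusBLEUMetric(vocab_sz=len(dls.vocab)))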
class LossMetric [source]
LossMetric(attr, nm=None) :: AvgMetric
Create a metric from loss_func.attr named nm
LossMetrics [source]
LossMetrics(attrs, nms=None)
List of LossMetric for each of attrs and nms
class CombineL1L2(Module):
    def forward(self, out, targ):
        self.l1 = F.l1_loss(out, targ)
        self.l2 = F.mse_loss(out, targ)
        return self.l1+self.l2

learn = synth_learner(metrics=LossMetrics('l1,l2'))
learn.loss_func = CombineL1L2()
learn.fit(2)
epoch  train_loss  valid_loss  l1        l2         time
0      16.638266   14.523010   3.337674  11.185337  00:00
1      14.520439   10.179483   2.722284  7.457200   00:00