- 3.1. Cross-validation: evaluating estimator performance
- 3.1.1. Computing cross-validated metrics
- 3.1.2. Cross validation iterators
- 3.1.3. A note on shuffling
- 3.1.4. Cross validation and model selection
3.1. Cross-validation: evaluating estimator performance
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Note that the word “experiment” is not intended to denote academic use only, because even in commercial settings machine learning usually starts out experimentally. Here is a flowchart of a typical cross-validation workflow in model training. The best parameters can be determined by grid search techniques.

In scikit-learn a random split into training and test sets can be quickly computed with the train_test_split helper function. Let’s load the iris data set to fit a linear support vector machine on it:
>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
>>> from sklearn import datasets
>>> from sklearn import svm
>>> X, y = datasets.load_iris(return_X_y=True)
>>> X.shape, y.shape
((150, 4), (150,))
We can now quickly sample a training set while holding out 40% of the data for testing (evaluating) our classifier:
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, test_size=0.4, random_state=0)
>>> X_train.shape, y_train.shape
((90, 4), (90,))
>>> X_test.shape, y_test.shape
((60, 4), (60,))
>>> clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.96...
When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set.

However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.
A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:

- A model is trained using k - 1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).

The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples is very small.
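To make the loop concrete, here is a minimal hand-written sketch of the k-fold procedure (with k = 5) on the iris data and linear SVC from above; the cross_val_score helper introduced in the next section automates exactly this kind of loop:

>>> from sklearn.model_selection import KFold
>>> kf = KFold(n_splits=5)
>>> clf = svm.SVC(kernel='linear', C=1)
>>> fold_scores = [clf.fit(X[train], y[train]).score(X[test], y[test])
...                for train, test in kf.split(X)]
>>> len(fold_scores)
5
>>> mean_score = sum(fold_scores) / len(fold_scores)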
3.1.1. Computing cross-validated metrics
The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset.

The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):
>>> from sklearn.model_selection import cross_val_score
>>> clf = svm.SVC(kernel='linear', C=1)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores
array([0.96..., 1. ..., 0.96..., 0.96..., 1. ])
The mean score and the 95% confidence interval of the score estimate are hence given by:
>>> print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.98 (+/- 0.03)
By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter:
>>> from sklearn import metrics
>>> scores = cross_val_score(
...     clf, X, y, cv=5, scoring='f1_macro')
>>> scores
array([0.96..., 1. ..., 0.96..., 0.96..., 1. ])
See The scoring parameter: defining model evaluation rules for details. In the case of the Iris dataset, the samples are balanced across target classes hence the accuracy and the F1-score are almost equal.
When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.
It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance:
>>> from sklearn.model_selection import ShuffleSplit
>>> n_samples = X.shape[0]
>>> cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
>>> cross_val_score(clf, X, y, cv=cv)
array([0.977..., 0.977..., 1. ..., 0.955..., 1. ])
Another option is to use an iterable yielding (train, test) splits as arrays of indices, for example:
>>> def custom_cv_2folds(X):
...     n = X.shape[0]
...     i = 1
...     while i <= 2:
...         idx = np.arange(n * (i - 1) / 2, n * i / 2, dtype=int)
...         yield idx, idx
...         i += 1
...
>>> custom_cv = custom_cv_2folds(X)
>>> cross_val_score(clf, X, y, cv=custom_cv)
array([1. , 0.973...])
Data transformation with held out data

Just as it is important to test a predictor on data held out from training, preprocessing (such as standardization, feature selection, etc.) and similar data transformations similarly should be learnt from a training set and applied to held-out data for prediction:
>>> from sklearn import preprocessing
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, test_size=0.4, random_state=0)
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train_transformed = scaler.transform(X_train)
>>> clf = svm.SVC(C=1).fit(X_train_transformed, y_train)
>>> X_test_transformed = scaler.transform(X_test)
>>> clf.score(X_test_transformed, y_test)
0.9333...
A Pipeline makes it easier to compose estimators, providing this behavior under cross-validation:
>>> from sklearn.pipeline import make_pipeline
>>> clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
>>> cross_val_score(clf, X, y, cv=cv)
array([0.977..., 0.933..., 0.955..., 0.933..., 0.977...])
See Pipelines and composite estimators.
3.1.1.1. The cross_validate function and multiple metric evaluation
The cross_validate function differs from cross_val_score in two ways:
It allows specifying multiple metrics for evaluation.
It returns a dict containing fit-times, score-times (and optionally training scores as well as fitted estimators) in addition to the test score.
For single metric evaluation, where the scoring parameter is a string, callable or None, the keys will be - ['test_score', 'fit_time', 'score_time']

And for multiple metric evaluation, the return value is a dict with the following keys - ['test_<scorer1_name>', 'test_<scorer2_name>', 'test_<scorer...>', 'fit_time', 'score_time']
return_train_score is set to False by default to save computation time. To evaluate the scores on the training set as well you need to set it to True.

You may also retain the estimator fitted on each training set by setting return_estimator=True.
The multiple metrics can be specified either as a list, tuple or set of predefined scorer names:
>>> from sklearn.model_selection import cross_validate
>>> from sklearn.metrics import recall_score
>>> scoring = ['precision_macro', 'recall_macro']
>>> clf = svm.SVC(kernel='linear', C=1, random_state=0)
>>> scores = cross_validate(clf, X, y, scoring=scoring)
>>> sorted(scores.keys())
['fit_time', 'score_time', 'test_precision_macro', 'test_recall_macro']
>>> scores['test_recall_macro']
array([0.96..., 1. ..., 0.96..., 0.96..., 1. ])
Or as a dict mapping scorer name to a predefined or custom scoring function:
>>> from sklearn.metrics import make_scorer
>>> scoring = {'prec_macro': 'precision_macro',
...            'rec_macro': make_scorer(recall_score, average='macro')}
>>> scores = cross_validate(clf, X, y, scoring=scoring,
...                         cv=5, return_train_score=True)
>>> sorted(scores.keys())
['fit_time', 'score_time', 'test_prec_macro', 'test_rec_macro',
 'train_prec_macro', 'train_rec_macro']
>>> scores['train_rec_macro']
array([0.97..., 0.97..., 0.99..., 0.98..., 0.98...])
Here is an example of cross_validate using a single metric:
>>> scores = cross_validate(clf, X, y,
...                         scoring='precision_macro', cv=5,
...                         return_estimator=True)
>>> sorted(scores.keys())
['estimator', 'fit_time', 'score_time', 'test_score']
3.1.1.2. Obtaining predictions by cross-validation
The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).
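For illustration, here is a minimal sketch using the linear SVC clf and the iris data X, y from the examples above; cross_val_predict returns one prediction per sample:

>>> from sklearn.model_selection import cross_val_predict
>>> predicted = cross_val_predict(clf, X, y, cv=5)
>>> predicted.shape
(150,)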
Warning
Note on inappropriate usage of cross_val_predict
The result of cross_val_predict may be different from those obtained using cross_val_score as the elements are grouped in different ways. The function cross_val_score takes an average over cross-validation folds, whereas cross_val_predict simply returns the labels (or probabilities) from several distinct models undistinguished. Thus, cross_val_predict is not an appropriate measure of generalisation error.
The function cross_val_predict is appropriate for:

- Visualization of predictions obtained from different models.
- Model blending: When predictions of one supervised estimator are used to train another estimator in ensemble methods.
The available cross validation iterators are introduced in the following section.
Examples

- Receiver Operating Characteristic (ROC) with cross validation
- Parameter estimation using grid search with cross-validation
3.1.2. Cross validation iterators
The following sections list utilities to generate indices that can be used to generate dataset splits according to different cross validation strategies.
3.1.2.1. Cross-validation iterators for i.i.d. data
Assuming that some data is Independent and Identically Distributed (i.i.d.) is making the assumption that all samples stem from the same generative process and that the generative process is assumed to have no memory of past generated samples.
The following cross-validators can be used in such cases.
NOTE
While i.i.d. data is a common assumption in machine learning theory, it rarely holds in practice. If one knows that the samples have been generated using a time-dependent process, it is safer to use a time-series aware cross-validation scheme. Similarly, if we know that the generative process has a group structure (samples collected from different subjects, experiments, measurement devices), it is safer to use group-wise cross-validation.
3.1.2.1.1. K-fold
KFold divides all the samples in k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal sizes (if possible). The prediction function is learned using k - 1 folds, and the fold left out is used for test.
Example of 2-fold cross-validation on a dataset with 4 samples:
>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = ["a", "b", "c", "d"]
>>> kf = KFold(n_splits=2)
>>> for train, test in kf.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[0 1] [2 3]
Here is a visualization of the cross-validation behavior. Note that KFold is not affected by classes or groups.
Each fold is constituted by two arrays: the first one is related to the training set, and the second one to the test set. Thus, one can create the training/test sets using numpy indexing:
>>> X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
>>> y = np.array([0, 1, 0, 1])
>>> X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
3.1.2.1.2. Repeated K-Fold
RepeatedKFold repeats K-Fold n times. It can be used when one requires to run KFold n times, producing different splits in each repetition.
Example of 2-fold K-Fold repeated 2 times:
>>> import numpy as np
>>> from sklearn.model_selection import RepeatedKFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> random_state = 12883823
>>> rkf = RepeatedKFold(n_splits=2, n_repeats=2, random_state=random_state)
>>> for train, test in rkf.split(X):
...     print("%s %s" % (train, test))
...
[2 3] [0 1]
[0 1] [2 3]
[0 2] [1 3]
[1 3] [0 2]
Similarly, RepeatedStratifiedKFold repeats Stratified K-Fold n times with different randomization in each repetition.
3.1.2.1.3. Leave One Out (LOO)
LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets. This cross-validation procedure does not waste much data as only one sample is removed from the training set:
>>> from sklearn.model_selection import LeaveOneOut
>>> X = [1, 2, 3, 4]
>>> loo = LeaveOneOut()
>>> for train, test in loo.split(X):
...     print("%s %s" % (train, test))
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]
Potential users of LOO for model selection should weigh a few known caveats. When compared with k-fold cross validation, one builds n models from n samples instead of k models, where n > k. Moreover, each is trained on n - 1 samples rather than (k - 1) n / k. In both ways, assuming k is not too large and k < n, LOO is more computationally expensive than k-fold cross validation.

In terms of accuracy, LOO often results in high variance as an estimator for the test error. Intuitively, since n - 1 of the n samples are used to build each model, models constructed from folds are virtually identical to each other and to the model built from the entire training set.
However, if the learning curve is steep for the training size in question, then 5- or 10-fold cross validation can overestimate the generalization error.
As a general rule, most authors, and empirical evidence, suggest that 5- or 10-fold cross validation should be preferred to LOO.
References:

- http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html
- T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, Springer 2009.
- L. Breiman, P. Spector, Submodel selection and evaluation in regression: The X-random case, International Statistical Review 1992.
- R. Kohavi, A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, Intl. Jnt. Conf. AI.
- R. Bharat Rao, G. Fung, R. Rosales, On the Dangers of Cross-Validation. An Experimental Evaluation, SIAM 2008.
- G. James, D. Witten, T. Hastie, R. Tibshirani, An Introduction to Statistical Learning, Springer 2013.
3.1.2.1.4. Leave P Out (LPO)
LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing p samples from the complete set. For n samples, this produces C(n, p) (“n choose p”) train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for p > 1.
Example of Leave-2-Out on a dataset with 4 samples:
>>> from sklearn.model_selection import LeavePOut
>>> X = np.ones(4)
>>> lpo = LeavePOut(p=2)
>>> for train, test in lpo.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[1 3] [0 2]
[1 2] [0 3]
[0 3] [1 2]
[0 2] [1 3]
[0 1] [2 3]
3.1.2.1.5. Random permutations cross-validation a.k.a. Shuffle & Split
The ShuffleSplit iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets.

It is possible to control the randomness for reproducibility of the results by explicitly seeding the random_state pseudo random number generator.
Here is a usage example:
>>> from sklearn.model_selection import ShuffleSplit
>>> X = np.arange(10)
>>> ss = ShuffleSplit(n_splits=5, test_size=0.25, random_state=0)
>>> for train_index, test_index in ss.split(X):
...     print("%s %s" % (train_index, test_index))
[9 1 6 7 3 0 5] [2 8 4]
[2 9 8 0 6 7 4] [3 5 1]
[4 5 1 0 6 9 7] [2 3 8]
[2 7 5 8 0 3 4] [6 1 9]
[4 1 0 6 8 9 3] [5 2 7]
Here is a visualization of the cross-validation behavior. Note that ShuffleSplit is not affected by classes or groups.

ShuffleSplit is thus a good alternative to KFold cross validation that allows a finer control on the number of iterations and the proportion of samples on each side of the train / test split.
3.1.2.2. Cross-validation iterators with stratification based on class labels.
Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure that relative class frequencies are approximately preserved in each train and validation fold.
3.1.2.2.1. Stratified k-fold
StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.

Here is an example of stratified 3-fold cross-validation on a dataset with 50 samples from two unbalanced classes. We show the number of samples in each class and compare with KFold.
>>> from sklearn.model_selection import StratifiedKFold, KFold
>>> import numpy as np
>>> X, y = np.ones((50, 1)), np.hstack(([0] * 45, [1] * 5))
>>> skf = StratifiedKFold(n_splits=3)
>>> for train, test in skf.split(X, y):
...     print('train - {} | test - {}'.format(
...         np.bincount(y[train]), np.bincount(y[test])))
train - [30 3] | test - [15 2]
train - [30 3] | test - [15 2]
train - [30 4] | test - [15 1]
>>> kf = KFold(n_splits=3)
>>> for train, test in kf.split(X, y):
...     print('train - {} | test - {}'.format(
...         np.bincount(y[train]), np.bincount(y[test])))
train - [28 5] | test - [17]
train - [28 5] | test - [17]
train - [34] | test - [11 5]
We can see that StratifiedKFold preserves the class ratios (approximately 1 / 10) in both train and test datasets.
Here is a visualization of the cross-validation behavior.
RepeatedStratifiedKFold can be used to repeat Stratified K-Fold n times with different randomization in each repetition.
3.1.2.2.2. Stratified Shuffle Split
StratifiedShuffleSplit is a variation of ShuffleSplit, which returns stratified splits, i.e., which creates splits by preserving the same percentage for each target class as in the complete set.
Here is a visualization of the cross-validation behavior.
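As a small illustrative sketch, with five samples of each of two classes and test_size=0.4, every stratified test set contains two samples of each class (the selected indices themselves depend on the shuffling):

>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> X, y = np.ones(10), np.hstack(([0] * 5, [1] * 5))
>>> sss = StratifiedShuffleSplit(n_splits=3, test_size=0.4, random_state=0)
>>> for train, test in sss.split(X, y):
...     print(np.bincount(y[test]))
[2 2]
[2 2]
[2 2]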
3.1.2.3. Cross-validation iterators for grouped data.
The i.i.d. assumption is broken if the underlying generative process yields groups of dependent samples.

Such a grouping of data is domain specific. An example would be when there is medical data collected from multiple patients, with multiple samples taken from each patient. And such data is likely to be dependent on the individual group. In our example, the patient id for each sample will be its group identifier.

In this case we would like to know if a model trained on a particular set of groups generalizes well to the unseen groups. To measure this, we need to ensure that all the samples in the validation fold come from groups that are not represented at all in the paired training fold.
The following cross-validation splitters can be used to do that. The grouping identifier for the samples is specified via the groups parameter.
3.1.2.3.1. Group k-fold
GroupKFold is a variation of k-fold which ensures that the same group is not represented in both testing and training sets. For example, if the data is obtained from different subjects with several samples per subject, and if the model is flexible enough to learn from highly person-specific features, it could fail to generalize to new subjects. GroupKFold makes it possible to detect this kind of overfitting situation.
Imagine you have three subjects, each with an associated number from 1 to 3:
>>> from sklearn.model_selection import GroupKFold
>>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]
>>> y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"]
>>> groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
>>> gkf = GroupKFold(n_splits=3)
>>> for train, test in gkf.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[0 1 2 3 4 5] [6 7 8 9]
[0 1 2 6 7 8 9] [3 4 5]
[3 4 5 6 7 8 9] [0 1 2]
Each subject is in a different testing fold, and the same subject is never in both testing and training. Notice that the folds do not have exactly the same size due to the imbalance in the data.
Here is a visualization of the cross-validation behavior.
3.1.2.3.2. Leave One Group Out
LeaveOneGroupOut is a cross-validation scheme which holds out the samples according to a third-party provided array of integer groups. This group information can be used to encode arbitrary domain specific pre-defined cross-validation folds.

Each training set is thus constituted by all the samples except the ones related to a specific group.

For example, in the cases of multiple experiments, LeaveOneGroupOut can be used to create a cross-validation based on the different experiments: we create a training set using the samples of all the experiments except one:
>>> from sklearn.model_selection import LeaveOneGroupOut
>>> X = [1, 5, 10, 50, 60, 70, 80]
>>> y = [0, 1, 1, 2, 2, 2, 2]
>>> groups = [1, 1, 2, 2, 3, 3, 3]
>>> logo = LeaveOneGroupOut()
>>> for train, test in logo.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[2 3 4 5 6] [0 1]
[0 1 4 5 6] [2 3]
[0 1 2 3] [4 5 6]
Another common application is to use time information: for instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits.
3.1.2.3.3. Leave P Groups Out
LeavePGroupsOut is similar to LeaveOneGroupOut, but removes samples related to P groups for each training/test set.
Example of Leave-2-Group Out:
>>> from sklearn.model_selection import LeavePGroupsOut
>>> X = np.arange(6)
>>> y = [1, 1, 1, 2, 2, 2]
>>> groups = [1, 1, 2, 2, 3, 3]
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> for train, test in lpgo.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[4 5] [0 1 2 3]
[2 3] [0 1 4 5]
[0 1] [2 3 4 5]
3.1.2.3.4. Group Shuffle Split
The GroupShuffleSplit iterator behaves as a combination of ShuffleSplit and LeavePGroupsOut, and generates a sequence of randomized partitions in which a subset of groups are held out for each split.
Here is a usage example:
>>> from sklearn.model_selection import GroupShuffleSplit
>>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 0.001]
>>> y = ["a", "b", "b", "b", "c", "c", "c", "a"]
>>> groups = [1, 1, 2, 2, 3, 3, 4, 4]
>>> gss = GroupShuffleSplit(n_splits=4, test_size=0.5, random_state=0)
>>> for train, test in gss.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
...
[0 1 2 3] [4 5 6 7]
[2 3 6 7] [0 1 4 5]
[2 3 4 5] [0 1 6 7]
[4 5 6 7] [0 1 2 3]
Here is a visualization of the cross-validation behavior.
This class is useful when the behavior of LeavePGroupsOut is desired, but the number of groups is large enough that generating all possible partitions with P groups withheld would be prohibitively expensive. In such a scenario, GroupShuffleSplit provides a random sample (with replacement) of the train / test splits generated by LeavePGroupsOut.
3.1.2.4. Predefined Fold-Splits / Validation-Sets
For some datasets, a pre-defined split of the data into training and validation folds or into several cross-validation folds already exists. Using PredefinedSplit it is possible to use these folds e.g. when searching for hyperparameters.

For example, when using a validation set, set the test_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.
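Here is a minimal sketch: fold labels 0 and 1 define two predefined test folds, while samples marked -1 are never used for testing:

>>> from sklearn.model_selection import PredefinedSplit
>>> test_fold = [0, 1, -1, 1]
>>> ps = PredefinedSplit(test_fold)
>>> for train, test in ps.split():
...     print("%s %s" % (train, test))
[1 2 3] [0]
[0 2] [1 3]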
3.1.2.5. Cross validation of time series data
Time series data is characterised by the correlation between observations that are near in time (autocorrelation). However, classical cross-validation techniques such as KFold and ShuffleSplit assume the samples are independent and identically distributed, and would result in unreasonable correlation between training and testing instances (yielding poor estimates of generalisation error) on time series data. Therefore, it is very important to evaluate our model for time series data on the “future” observations least like those that are used to train the model. To achieve this, one solution is provided by TimeSeriesSplit.
3.1.2.5.1. Time Series Split
TimeSeriesSplit is a variation of k-fold which returns the first k folds as train set and the (k+1)-th fold as test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them. Also, it adds all surplus data to the first training partition, which is always used to train the model.
This class can be used to cross-validate time series data samplesthat are observed at fixed time intervals.
Example of 3-split time series cross-validation on a dataset with 6 samples:
>>> from sklearn.model_selection import TimeSeriesSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> tscv = TimeSeriesSplit(n_splits=3)
>>> print(tscv)
TimeSeriesSplit(max_train_size=None, n_splits=3)
>>> for train, test in tscv.split(X):
...     print("%s %s" % (train, test))
[0 1 2] [3]
[0 1 2 3] [4]
[0 1 2 3 4] [5]
Here is a visualization of the cross-validation behavior.
3.1.3. A note on shuffling
If the data ordering is not arbitrary (e.g. samples with the same class label are contiguous), shuffling it first may be essential to get a meaningful cross-validation result. However, the opposite may be true if the samples are not independently and identically distributed. For example, if samples correspond to news articles, and are ordered by their time of publication, then shuffling the data will likely lead to a model that is overfit and an inflated validation score: it will be tested on samples that are artificially similar (close in time) to training samples.

Some cross validation iterators, such as KFold, have an inbuilt option to shuffle the data indices before splitting them. Note that:
- This consumes less memory than shuffling the data directly.
- By default no shuffling occurs, including for the (stratified) K fold cross-validation performed by specifying cv=some_integer to cross_val_score, grid search, etc. Keep in mind that train_test_split still returns a random split.
- The random_state parameter defaults to None, meaning that the shuffling will be different every time KFold(..., shuffle=True) is iterated. However, GridSearchCV will use the same shuffling for each set of parameters validated by a single call to its fit method.
- To get identical results for each split, set random_state to an integer.
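For example, a small sketch showing that a fixed integer random_state makes repeated iterations of a shuffled KFold yield identical splits:

>>> from sklearn.model_selection import KFold
>>> kf = KFold(n_splits=3, shuffle=True, random_state=0)
>>> first = [test for _, test in kf.split(np.arange(6))]
>>> second = [test for _, test in kf.split(np.arange(6))]
>>> all((a == b).all() for a, b in zip(first, second))
True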
3.1.4. Cross validation and model selection
Cross validation iterators can also be used to directly perform model selection using Grid Search for the optimal hyperparameters of the model. This is the topic of the next section: Tuning the hyper-parameters of an estimator.
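For instance, any of the CV iterators above can be passed directly to GridSearchCV through its cv parameter; a minimal sketch (the parameter grid here is purely illustrative):

>>> from sklearn.model_selection import GridSearchCV, ShuffleSplit
>>> X, y = datasets.load_iris(return_X_y=True)
>>> param_grid = {'C': [0.1, 1, 10]}
>>> inner_cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
>>> search = GridSearchCV(svm.SVC(kernel='linear'), param_grid, cv=inner_cv)
>>> search = search.fit(X, y)
>>> search.best_params_['C'] in param_grid['C']
True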