11.5 Spatial CV with mlr
There are dozens of packages for statistical learning, as described for example in the CRAN machine learning task view. Getting acquainted with each of these packages, including how to undertake cross-validation and hyperparameter tuning, can be a time-consuming process. Comparing model results from different packages can be even more laborious. The mlr package was developed to address these issues. It acts as a ‘meta-package’, providing a unified interface to popular supervised and unsupervised statistical learning techniques including classification, regression, survival analysis and clustering (Bischl et al. 2016). The standardized mlr interface is based on eight ‘building blocks’. As illustrated in Figure 11.4, these have a clear order.
Figure 11.4: Basic building blocks of the mlr package. Source: http://bit.ly/2tcb2b7. (Permission to reuse this figure was kindly granted.)
The mlr modeling process consists of three main stages. First, a task specifies the data (including response and predictor variables) and the model type (such as regression or classification). Second, a learner defines the specific learning algorithm that is applied to the created task. Third, the resampling approach assesses the predictive performance of the model, i.e., its ability to generalize to new data (see also Section 11.4).
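To make these three stages concrete before turning to the landslide data, here is a minimal non-spatial sketch using the built-in iris dataset and a decision tree learner (classif.rpart, which assumes the rpart package is installed). It is only an illustration of the building blocks, not part of the case study.

library(mlr)
# 1. task: the data and the model type (here a three-class classification problem)
tsk = makeClassifTask(data = iris, target = "Species")
# 2. learner: the algorithm applied to the task
lrn_rpart = makeLearner("classif.rpart")
# 3. resampling: assess predictive performance with an ordinary 5-fold CV
rdesc = makeResampleDesc("CV", iters = 5)
resample(learner = lrn_rpart, task = tsk, resampling = rdesc, measures = mlr::acc)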
11.5.1 Generalized linear model
To implement a GLM in mlr, we must create a task containing the landslide data. Since the response is binary (a two-category variable), we create a classification task with makeClassifTask() (for regression tasks, use makeRegrTask(); see ?makeRegrTask for other task types). The first essential argument of these make*() functions is data. The target argument expects the name of the response variable and positive determines which of the two factor levels of the response variable indicates the landslide initiation point (in our case this is TRUE). All other variables of the lsl dataset serve as predictors except for the coordinates (see the result of getTaskFormula(task) for the model formula). For spatial CV, the coordinates parameter is used (see Section 11.4 and Figure 11.3), which expects the coordinates as an xy data frame.
library(mlr)
# coordinates needed for the spatial partitioning
coords = lsl[, c("x", "y")]
# select response and predictors to use in the modeling
data = dplyr::select(lsl, -x, -y)
# create task
task = makeClassifTask(data = data, target = "lslpts",
positive = "TRUE", coordinates = coords)
makeLearner() determines the statistical learning method to use. All classification learners start with classif. and all regression learners with regr. (see ?makeLearners for details). listLearners() helps to find out about all available learners and from which package mlr imports them (Table 11.2). For a specific task, we can run:
listLearners(task, warn.missing.packages = FALSE) %>%
dplyr::select(class, name, short.name, package) %>%
head
Class | Name | Short name | Package |
---|---|---|---|
classif.binomial | Binomial Regression | binomial | stats |
classif.featureless | Featureless classifier | featureless | mlr |
classif.fnn | Fast k-Nearest Neighbour | fnn | FNN |
classif.gausspr | Gaussian Processes | gausspr | kernlab |
classif.knn | k-Nearest Neighbor | knn | class |
classif.ksvm | Support Vector Machines | ksvm | kernlab |
This yields all learners able to model two-class problems (landslide yes or no). We opt for the binomial classification method used in Section 11.3 and implemented as classif.binomial in mlr. Additionally, we must specify the link function, logit in this case, which is also the default of the binomial() function. predict.type determines the type of the prediction, with prob resulting in the predicted probability of landslide occurrence between 0 and 1 (this corresponds to type = "response" in predict.glm()).
lrn = makeLearner(cl = "classif.binomial",
link = "logit",
predict.type = "prob",
fix.factors.prediction = TRUE)
To find out from which package the specified learner is taken and how to access the corresponding help pages, we can run:
getLearnerPackages(lrn)
helpLearner(lrn)
The set-up steps for modeling with mlr may seem tedious. But remember, this single interface provides access to the 150+ learners shown by listLearners(); it would be far more tedious to learn the interface for each learner! Further advantages are simple parallelization of resampling techniques and the ability to tune machine learning hyperparameters (see Section 11.5.2). Most importantly, (spatial) resampling in mlr is straightforward, requiring only two more steps: specifying a resampling method and running it. We will use a 100-repeated 5-fold spatial CV: five partitions will be chosen based on the provided coordinates in our task and the partitioning will be repeated 100 times:
perf_level = makeResampleDesc(method = "SpRepCV", folds = 5, reps = 100)
To execute the spatial resampling, we run resample() using the specified learner, task, resampling strategy and, of course, the performance measure, here the AUROC. This takes some time (around 10 seconds on a modern laptop) because it computes the AUROC for 500 models. Setting a seed ensures the reproducibility of the obtained result and will ensure the same spatial partitioning when re-running the code.
set.seed(012348)
sp_cv = mlr::resample(learner = lrn, task = task,
resampling = perf_level,
measures = mlr::auc)
The output of the preceding code chunk is a bias-reduced assessment of the model’s predictive performance, as illustrated in the following code chunk (the required input data is saved in the file spatialcv.Rdata in the book’s GitHub repo):
# summary statistics of the 500 models
summary(sp_cv$measures.test$auc)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 0.686 0.757 0.789 0.780 0.795 0.861
# mean AUROC of the 500 models
mean(sp_cv$measures.test$auc)
#> [1] 0.78
To put these results in perspective, let us compare them with AUROC values from a 100-repeated 5-fold non-spatial cross-validation (Figure 11.5; the code for the non-spatial cross-validation is not shown here but will be explored in the exercise section). As expected, the spatially cross-validated result yields lower AUROC values on average than the conventional cross-validation approach, underlining the over-optimistic predictive performance of the latter due to spatial autocorrelation.
Figure 11.5: Boxplot showing the difference in AUROC values between spatial and conventional 100-repeated 5-fold cross-validation.
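For reference, the conventional CV mentioned above differs only in the resampling description. A possible sketch, mirroring the spatial setup but ignoring the coordinates, could look as follows; this is one way to do it, not the exercise solution itself.

# non-spatial 100-repeated 5-fold CV: random partitioning ignoring coordinates
perf_level_nsp = makeResampleDesc(method = "RepCV", folds = 5, reps = 100)
set.seed(012348)
conv_cv = mlr::resample(learner = lrn, task = task,
                        resampling = perf_level_nsp,
                        measures = mlr::auc)
mean(conv_cv$measures.test$auc)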
11.5.2 Spatial tuning of machine-learning hyperparameters
Section 11.4 introduced machine learning as part of statistical learning. To recap, we adhere to the following definition of machine learning by Jason Brownlee:
Machine learning, more specifically the field of predictive modeling, is primarily concerned with minimizing the error of a model or making the most accurate predictions possible, at the expense of explainability. In applied machine learning we will borrow, reuse and steal algorithms from many different fields, including statistics and use them towards these ends.
In Section 11.5.1 a GLM was used to predict landslide susceptibility. This section introduces support vector machines (SVM) for the same purpose. Random forest models might be more popular than SVMs; however, the positive effect of tuning hyperparameters on model performance is much more pronounced in the case of SVMs (Probst, Wright, and Boulesteix 2018). Since (spatial) hyperparameter tuning is the major aim of this section, we will use an SVM. For those wishing to apply a random forest model, we recommend reading this chapter and then proceeding to Chapter 14, in which we will apply the currently covered concepts and techniques to make spatial predictions based on a random forest model.
SVMs search for the best possible ‘hyperplanes’ to separate classes (in a classification case) and estimate ‘kernels’ with specific hyperparameters to allow for non-linear boundaries between classes (James et al. 2013). Hyperparameters should not be confused with coefficients of parametric models, which are sometimes also referred to as parameters. Coefficients can be estimated from the data, while hyperparameters are set before the learning begins. Optimal hyperparameters are usually determined within a defined range with the help of cross-validation methods. This is called hyperparameter tuning.
Some SVM implementations such as that provided by kernlab allow hyperparameters to be tuned automatically, usually based on random sampling (see upper row of Figure 11.3). This works for non-spatial data but is of less use for spatial data, where ‘spatial tuning’ should be undertaken.
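As an illustration of such automatic, non-spatial tuning, kernlab’s ksvm() can estimate the kernel width itself when kpar = "automatic" (its default for the radial basis kernel). The sketch below assumes the data object created in Section 11.5.1 and is shown only to contrast with the spatial tuning that follows; it is not part of the spatial workflow.

library(kernlab)
# non-spatial, automatic estimation of the kernel width sigma (via sigest);
# this estimate does not account for spatial autocorrelation
fit = ksvm(lslpts ~ ., data = data, kernel = "rbfdot",
           kpar = "automatic", prob.model = TRUE)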
Before defining spatial tuning, we will set up the mlr building blocks, introduced in Section 11.5.1, for the SVM. The classification task remains the same, hence we can simply reuse the task object created in Section 11.5.1. Learners implementing SVM can be found using listLearners() as follows:
lrns = listLearners(task, warn.missing.packages = FALSE)
dplyr::filter(lrns, grepl("svm", class)) %>%
dplyr::select(class, name, short.name, package)
#> class name short.name package
#> 6 classif.ksvm Support Vector Machines ksvm kernlab
#> 9 classif.lssvm Least Squares Support Vector Machine lssvm kernlab
#> 17 classif.svm Support Vector Machines (libsvm) svm e1071
Of the options illustrated above, we will use ksvm() from the kernlab package (Karatzoglou et al. 2004). To allow for non-linear relationships, we use the popular radial basis function (or Gaussian) kernel, which is also the default of ksvm().
lrn_ksvm = makeLearner("classif.ksvm",
predict.type = "prob",
kernel = "rbfdot")
The next stage is to specify a resampling strategy. Again we will use a 100-repeated 5-fold spatial CV.
# performance estimation level
perf_level = makeResampleDesc(method = "SpRepCV", folds = 5, reps = 100)
Note that this is the exact same code as used for the GLM in Section 11.5.1; we have simply repeated it here as a reminder.
So far, the process has been identical to that described in Section 11.5.1. The next step is new, however: to tune the hyperparameters. Using the same data for the performance assessment and the tuning would potentially lead to overoptimistic results (Cawley and Talbot 2010). This can be avoided using nested spatial CV.
Figure 11.6: Schematic of hyperparameter tuning and performance estimation levels in CV. (Figure was taken from Schratz et al. (2018). Permission to reuse it was kindly granted.)
This means that we split each fold again into five spatially disjoint subfolds which are used to determine the optimal hyperparameters (tune_level object in the code chunk below; see Figure 11.6 for a visual representation). To find the optimal hyperparameter combination, we fit 50 models (ctrl object in the code chunk below) in each of these subfolds with randomly selected values for the hyperparameters C and sigma. The random selection of C and sigma values is additionally restricted to a predefined tuning space (ps object). The range of the tuning space was chosen with values recommended in the literature (Schratz et al. 2018).
# five spatially disjoint partitions
tune_level = makeResampleDesc("SpCV", iters = 5)
# use 50 randomly selected hyperparameters
ctrl = makeTuneControlRandom(maxit = 50)
# define the outer limits of the randomly selected hyperparameters
ps = makeParamSet(
makeNumericParam("C", lower = -12, upper = 15, trafo = function(x) 2^x),
makeNumericParam("sigma", lower = -15, upper = 6, trafo = function(x) 2^x)
)
The next stage is to modify the learner lrn_ksvm in accordance with all the characteristics defining the hyperparameter tuning with makeTuneWrapper().
wrapped_lrn_ksvm = makeTuneWrapper(learner = lrn_ksvm,
resampling = tune_level,
par.set = ps,
control = ctrl,
show.info = TRUE,
measures = mlr::auc)
mlr is now set up to fit 250 models to determine the optimal hyperparameters for one fold. Repeating this for each fold, we end up with 1,250 (250 * 5) models for each repetition. Repeated 100 times means fitting a total of 125,000 models to identify optimal hyperparameters (Figure 11.3). These are used in the performance estimation, which requires the fitting of another 500 models (5 folds * 100 repetitions; see Figure 11.3); a short calculation confirming these numbers follows the list below. To make the performance estimation processing chain even clearer, let us write down the commands we have given to the computer:
- Performance level (upper left part of Figure 11.6): split the dataset into five spatially disjoint (outer) subfolds.
- Tuning level (lower left part of Figure 11.6): use the first fold of the performance level and split it again spatially into five (inner) subfolds for the hyperparameter tuning. Use the 50 randomly selected hyperparameters in each of these inner subfolds, i.e., fit 250 models.
- Performance estimation: Use the best hyperparameter combination from the previous step (tuning level) and apply it to the first outer fold in the performance level to estimate the performance (AUROC).
- Repeat steps 2 and 3 for the remaining four outer folds.
- Repeat steps 2 to 4, 100 times.
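The model counts stated above can be checked with a quick back-of-the-envelope calculation (plain R arithmetic; the variable names are ours):

# models fitted for the hyperparameter tuning
inner_folds = 5; random_iters = 50; outer_folds = 5; repetitions = 100
tuning_models = inner_folds * random_iters * outer_folds * repetitions  # 125,000
# models fitted for the performance estimation itself
performance_models = outer_folds * repetitions                          # 500
tuning_models + performance_models                                      # 125,500 in total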
The process of hyperparameter tuning and performance estimation is computationally intensive. Model runtime can be reduced with parallelization, which can be done in a number of ways, depending on the operating system.
Before starting the parallelization, we ensure that the processing continues even if one of the models throws an error by setting on.learner.error to warn. This avoids the process stopping just because of one failed model, which is desirable on large model runs. To inspect the failed models once the processing is completed, we dump them:
configureMlr(on.learner.error = "warn", on.error.dump = TRUE)
To start the parallelization, we set the mode to multicore, which will use mclapply() in the background on a single machine in the case of a Unix-based operating system. Equivalently, parallelStartSocket() enables parallelization under Windows. level defines the level at which to enable parallelization, with mlr.tuneParams determining that the hyperparameter tuning level should be parallelized (see the lower left part of Figure 11.6, ?parallelGetRegisteredLevels, and the mlr parallelization tutorial for details). We will use half of the available cores (set with the cpus parameter), a setting that allows other possible users to work on the same high performance computing cluster in case one is used (which was the case when we ran the code). Setting mc.set.seed to TRUE ensures that the randomly chosen hyperparameters during the tuning can be reproduced when running the code again. Unfortunately, mc.set.seed is only available under Unix-based systems.
library(parallelMap)
if (Sys.info()["sysname"] %in% c("Linux", "Darwin")) {
parallelStart(mode = "multicore",
# parallelize the hyperparameter tuning level
level = "mlr.tuneParams",
# just use half of the available cores
cpus = round(parallel::detectCores() / 2),
mc.set.seed = TRUE)
}
if (Sys.info()["sysname"] == "Windows") {
parallelStartSocket(level = "mlr.tuneParams",
cpus = round(parallel::detectCores() / 2))
}
Now we are set up for computing the nested spatial CV. Using a seed allows us to recreate the exact same spatial partitions when re-running the code. Specifying the resample() parameters follows the exact same procedure as presented when using a GLM, the only difference being the extract argument. This allows the extraction of the hyperparameter tuning results, which is important if we plan follow-up analyses on the tuning. After the processing, it is good practice to explicitly stop the parallelization with parallelStop(). Finally, we save the output object (result) to disk in case we would like to use it in another R session. Before running the subsequent code, be aware that it is time-consuming: the 125,500 models took about half an hour on a server using 24 cores (see below).
set.seed(12345)
result = mlr::resample(learner = wrapped_lrn_ksvm,
task = task,
resampling = perf_level,
extract = getTuneResult,
measures = mlr::auc)
# stop parallelization
parallelStop()
# save your result, e.g.:
# saveRDS(result, "svm_sp_sp_rbf_50it.rds")
In case you do not want to run the code locally, we have saved a subset of the results in the book’s GitHub repo. They can be loaded as follows:
result = readRDS("extdata/spatial_cv_result.rds")
Note that runtime depends on many aspects: CPU speed, the selected algorithm, the selected number of cores and the dataset.
# Exploring the results
# runtime in minutes
round(result$runtime / 60, 2)
#> [1] 37.4
Even more important than the runtime is the final aggregated AUROC: the model’s ability to discriminate the two classes.
# final aggregated AUROC
result$aggr
#> auc.test.mean
#> 0.758
# same as
mean(result$measures.test$auc)
#> [1] 0.758
It appears that the GLM (aggregated AUROC was 0.78) is slightly better than the SVM in this specific case. However, using more than 50 iterations in the random search would probably yield hyperparameters that result in models with a better AUROC (Schratz et al. 2018). On the other hand, increasing the number of random search iterations would also increase the total number of models and thus runtime.
The estimated optimal hyperparameters for each fold at the performance estimation level can also be viewed. The following command shows the best hyperparameter combination of the first fold of the first iteration (recall this results from the first 5 * 50 model runs):
# winning hyperparameters of tuning step,
# i.e. the best combination out of 50 * 5 models
result$extract[[1]]$x
#> $C
#> [1] 0.458
#>
#> $sigma
#> [1] 0.023
The estimated hyperparameters have been used for the first fold in the first iteration of the performance estimation level which resulted in the following AUROC value:
result$measures.test[1, ]
#> iter auc
#> 1 1 0.799
So far spatial CV has been used to assess the ability of learning algorithms to generalize to unseen data. For spatial predictions, one would tune the hyperparameters on the complete dataset (see the sketch below). This will be covered in Chapter 14.
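As a rough sketch of what such a final tuning step could look like, the following code uses mlr’s tuneParams() with the objects defined above (task, lrn_ksvm, tune_level, ps, ctrl). It is only an illustration under these assumptions, not the workflow presented in Chapter 14.

# tune C and sigma once on the complete dataset using spatial CV,
# then set the winning values on the learner and train the final model
tune_res = tuneParams(learner = lrn_ksvm, task = task,
                      resampling = tune_level,
                      par.set = ps, control = ctrl,
                      measures = mlr::auc)
lrn_tuned = setHyperPars(lrn_ksvm, par.vals = tune_res$x)
final_model = train(lrn_tuned, task)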