3. Model selection and evaluation

3.1. Cross-validation: evaluating estimator performance
    3.1.1. Computing cross-validated metrics
    3.1.2. Cross-validation iterators
    3.1.3. A note on shuffling
    3.1.4. Cross-validation and model selection

3.2. Tuning the hyper-parameters of an estimator
    3.2.1. Exhaustive Grid Search
    3.2.2. Randomized Parameter Optimization
    3.2.3. Tips for parameter search
    3.2.4. Alternatives to brute force parameter search

3.3. Metrics and scoring: quantifying the quality of predictions
    3.3.1. The scoring parameter: defining model evaluation rules
    3.3.2. Classification metrics
    3.3.3. Multilabel ranking metrics
    3.3.4. Regression metrics
    3.3.5. Clustering metrics
    3.3.6. Dummy estimators

3.4. Model persistence
    3.4.1. Persistence example
    3.4.2. Security & maintainability limitations

3.5. Validation curves: plotting scores to evaluate models
    3.5.1. Validation curve
    3.5.2. Learning curve