We cannot know which algorithm will be best for a given problem.
Therefore, we need to design a test harness that we can use to evaluate different machine learning algorithms.
In this tutorial, you will discover how to develop a machine learning algorithm test harness from scratch in Python.
After completing this tutorial, you will know:
- How to implement a train-test algorithm test harness.
- How to implement a k-fold cross-validation algorithm test harness.
Description
A test harness provides a consistent way to evaluate machine learning algorithms on a dataset.
It involves 3 elements:
- The resampling method used to split up the dataset.
- The machine learning algorithm to evaluate.
- The performance measure by which to evaluate predictions.
Loading and preparing the dataset is a prerequisite step that must be completed before using the test harness.
The test harness must allow for different machine learning algorithms to be evaluated, whilst the dataset, resampling method and performance measures are kept constant.
In this tutorial, we are going to demonstrate the test harnesses with a real dataset.
The dataset used is the Pima Indians diabetes dataset. It contains 768 rows and 9 columns. All of the values in the file are numeric, specifically floating point values.
The Zero Rule algorithm will be evaluated as part of the tutorial. The Zero Rule algorithm always predicts the class that has the most observations in the training dataset.
Tutorial
This tutorial is broken down into two main sections:
- Train-Test Algorithm Test Harness.
- Cross-Validation Algorithm Test Harness.
These test harnesses will give you the foundation that you need to evaluate a suite of machine learning algorithms on a given predictive modeling problem.
1. Train-Test Algorithm Test Harness
The train-test split is a simple resampling method that can be used to evaluate a machine learning algorithm.
As such, it is a good starting point for developing a test harness.
We can assume the prior development of a function to split a dataset into train and test sets and a function to evaluate the accuracy of a set of predictions.
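As a reminder, minimal sketches of these two helpers might look like the following; the exact implementations developed in earlier tutorials may differ slightly.

```python
from random import randrange

# Split a dataset into a train and test set (a sketch; rows are chosen at random)
def train_test_split(dataset, split=0.60):
    train = list()
    train_size = split * len(dataset)
    dataset_copy = list(dataset)
    while len(train) < train_size:
        index = randrange(len(dataset_copy))
        train.append(dataset_copy.pop(index))
    return train, dataset_copy

# Calculate classification accuracy as a percentage of correct predictions
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0
```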
We need a function that can take a dataset and an algorithm and return a performance score.
Below is a function named evaluate_algorithm() that achieves this. It takes 3 fixed arguments: the dataset, the algorithm function, and the split percentage for the train-test split.
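A minimal sketch of the function, assuming the train_test_split() and accuracy_metric() helpers sketched above are available:

```python
# Evaluate an algorithm using a train-test split (a sketch)
def evaluate_algorithm(dataset, algorithm, split, *args):
    train, test = train_test_split(dataset, split)
    test_set = list()
    for row in test:
        row_copy = list(row)
        row_copy[-1] = None  # clear the output value to prevent accidental cheating
        test_set.append(row_copy)
    predicted = algorithm(train, test_set, *args)
    actual = [row[-1] for row in test]
    accuracy = accuracy_metric(actual, predicted)
    return accuracy
```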
First, the dataset is split into train and test elements. Next, a copy of the test set is made and each output value is cleared by setting it to the None value, to prevent the algorithm from cheating accidentally.
The algorithm provided as a parameter is a function that expects the train and test datasets on which to prepare and then make predictions. The algorithm may require additional configuration parameters. This is handled by using the variable arguments *args in the evaluate_algorithm() function and passing them on to the algorithm function.
The algorithm function is expected to return a list of predictions, one for each row in the test dataset. These are compared to the actual output values from the unmodified test dataset by the accuracy_metric() function.
Finally, the accuracy is returned.
The evaluation function does make some strong assumptions, but they can easily be changed if needed.
Specifically, it assumes that the last column in the dataset always holds the output value. A different column could be used. The use of accuracy_metric() assumes that the problem is a classification problem, but this could be changed to mean squared error for regression problems.
Let’s piece this together with a worked example.
We will use the Pima Indians diabetes dataset and evaluate the Zero Rule algorithm.
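A complete example along these lines is sketched below; it reuses the functions sketched above, and the load_csv() and str_column_to_float() helpers and the local file name pima-indians-diabetes.csv are assumptions about how the prerequisite data-loading step was carried out.

```python
from csv import reader
from random import seed

# Load a CSV file into a list of rows (assumes no header row)
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        csv_reader = reader(file)
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset

# Convert the string values in a column to floats
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Zero Rule algorithm: always predict the most common class in the training set
def zero_rule_algorithm_classification(train, test):
    output_values = [row[-1] for row in train]
    prediction = max(set(output_values), key=output_values.count)
    return [prediction for _ in range(len(test))]

# Test the train-test harness on the diabetes dataset
seed(1)
filename = 'pima-indians-diabetes.csv'  # assumed local file name
dataset = load_csv(filename)
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
split = 0.60
accuracy = evaluate_algorithm(dataset, zero_rule_algorithm_classification, split)
print('Accuracy: %.3f%%' % accuracy)
```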
The dataset was split into 60% for training the model and 40% for evaluating it.
Notice how the name of the Zero Rule algorithm, zero_rule_algorithm_classification, was passed as an argument to the evaluate_algorithm() function. You can see how this test harness may be used again and again with different algorithms.
Running the example above prints out the accuracy of the model.
2. Cross-Validation Algorithm Test Harness
Cross-validation is a resampling technique that provides more reliable estimates of algorithm performance on unseen data.
It requires the creation and evaluation of k models on different subsets of your data, and as such is more computationally expensive. Nevertheless, it is the gold standard for evaluating machine learning algorithms.
As in the previous section, we need to create a function that ties together the resampling method, the evaluation of the algorithm on the dataset and the performance calculation method.
Unlike above, the algorithm must be evaluated on different subsets of the dataset many times. This means we need additional loops within our evaluate_algorithm() function.
Below is a function that implements algorithm evaluation with cross-validation.
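A minimal sketch is given here; it assumes the accuracy_metric() function from earlier and includes a simple cross_validation_split() helper, which may differ from the exact implementation developed previously.

```python
from random import randrange

# Split a dataset into k folds (a sketch)
def cross_validation_split(dataset, n_folds):
    dataset_split = list()
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)
    for _ in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split

# Evaluate an algorithm using k-fold cross-validation (a sketch)
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        train_set = sum(train_set, [])  # flatten the remaining folds into one list of rows
        test_set = list()
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None  # clear the output value to prevent accidental cheating
            test_set.append(row_copy)
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores
```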
First, the dataset is split into n_folds groups called folds.
Next, we loop, giving each fold an opportunity to be held out of training and used to evaluate the algorithm. A copy of the list of folds is created and the held-out fold is removed from this list. The list of folds is then flattened into one long list of rows to match the algorithm's expectation of a training dataset. This is done using the sum() function.
Once the training dataset is prepared, the rest of the function within this loop proceeds as above. A copy of the test dataset (the fold) is made and the output values are cleared to avoid accidental cheating by algorithms. The algorithm is prepared on the train dataset and makes predictions on the test dataset. The predictions are evaluated and stored in a list.
Unlike the train-test algorithm test harness, a list of scores is returned, one for each cross-validation fold.
Although slightly more complex in code and slower to run, this function provides a more robust estimate of algorithm performance.
We can tie all of this together with a complete example on the diabetes dataset with the Zero Rule algorithm.
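A sketch of the driver portion of such an example follows, reusing the data-loading helpers and the Zero Rule function from the train-test example together with the cross-validation version of evaluate_algorithm() above; the file name is again an assumption.

```python
# Test the cross-validation harness on the diabetes dataset
seed(1)
filename = 'pima-indians-diabetes.csv'  # assumed local file name
dataset = load_csv(filename)
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
n_folds = 5
scores = evaluate_algorithm(dataset, zero_rule_algorithm_classification, n_folds)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))
```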
A total of 5 cross-validation folds were used to evaluate the Zero Rule algorithm. As such, 5 scores were returned from the evaluate_algorithm() function.
Running this example prints both the list of calculated scores and the mean score.
You now have two different test harnesses that you can use to evaluate your own machine learning algorithms.
Extensions
This section lists extensions to this tutorial that you may wish to consider.
- Parameterized Evaluation. Pass in the function used to evaluate predictions, allowing you to seamlessly work with regression problems (see the sketch after this list).
- Parameterized Resampling. Pass in the function used to calculate resampling splits, allowing you to easily switch between the train-test and cross-validation methods.
- Standard Deviation Scores. Calculate the standard deviation to get an idea of the spread of scores when evaluating algorithms using cross-validation.
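As an illustration of the first extension, a version of the train-test harness that accepts the evaluation function as a parameter might look like the following; it reuses the train_test_split() sketch from earlier, and rmse_metric() is a hypothetical regression metric used only for illustration.

```python
from math import sqrt

# Hypothetical regression metric, shown only for illustration
def rmse_metric(actual, predicted):
    sum_error = 0.0
    for i in range(len(actual)):
        sum_error += (predicted[i] - actual[i]) ** 2
    return sqrt(sum_error / float(len(actual)))

# Train-test harness with the evaluation metric passed in as a parameter
def evaluate_algorithm(dataset, algorithm, split, metric, *args):
    train, test = train_test_split(dataset, split)
    test_set = list()
    for row in test:
        row_copy = list(row)
        row_copy[-1] = None  # clear the output value to prevent accidental cheating
        test_set.append(row_copy)
    predicted = algorithm(train, test_set, *args)
    actual = [row[-1] for row in test]
    return metric(actual, predicted)

# Example usage: score = evaluate_algorithm(dataset, some_algorithm, 0.60, rmse_metric)
```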
Did you try any of these extensions?
Share your experiences in the comments below.
Review
In this tutorial, you discovered how to create a test harness from scratch to evaluate your machine learning algorithms.
Specifically, you now know:
- How to implement and use a train-test algorithm test harness.
- How to implement and use a cross-validation algorithm test harness.