
Monday 15 July 2024

How to Implement Bagging From Scratch With Python

Decision trees are a simple and powerful predictive modeling technique, but they suffer from high variance.

This means that trees can get very different results given different training data.

A technique to make decision trees more robust and to achieve better performance is called bootstrap aggregation or bagging for short.

In this tutorial, you will discover how to implement the bagging procedure with decision trees from scratch with Python.

After completing this tutorial, you will know:

  • How to create a bootstrap sample of your dataset.
  • How to make predictions with bootstrapped models.
  • How to apply bagging to your own predictive modeling problems.

    Description

    This section provides a brief description of Bootstrap Aggregation and the Sonar dataset that will be used in this tutorial.

    Bootstrap Aggregation Algorithm

    A bootstrap is a sample of a dataset with replacement.

    This means that a new dataset is created from a random sample of an existing dataset where a given row may be selected and added more than once to the sample.

    It is a useful approach when estimating values such as the mean for a broader population and you only have a limited dataset available. By creating samples of your dataset and estimating the mean from those samples, you can take the average of those estimates and get a better idea of the true mean of the underlying problem.

    This same approach can be used with machine learning algorithms that have a high variance, such as decision trees. A separate model is trained on each bootstrap sample of data, and the average output of those models is used to make predictions. This technique is called bootstrap aggregation, or bagging for short.

    Variance means that an algorithm’s performance is sensitive to the training data, with high variance suggesting that the more the training data is changed, the more the performance of the algorithm will vary.

    The performance of high variance machine learning algorithms like unpruned decision trees can be improved by training many trees and taking the average of their predictions. Results are often better than a single decision tree.

    Another benefit of bagging, in addition to improved performance, is that adding more bagged trees does not cause the ensemble to overfit the problem. Trees can continue to be added until a maximum in performance is achieved.

    Sonar Dataset

    The dataset we will use in this tutorial is the Sonar dataset.

    This is a dataset that describes sonar chirp returns bouncing off different surfaces. The 60 input variables are the strength of the returns at different angles. It is a binary classification problem that requires a model to differentiate rocks from metal cylinders. There are 208 observations.

    It is a well-understood dataset. All of the variables are continuous and generally in the range of 0 to 1. The output variable is a string “M” for mine and “R” for rock, which will need to be converted to integers 1 and 0.

    By predicting the class with the most observations in the dataset (M, for mines), the Zero Rule Algorithm can achieve an accuracy of 53%.

    You can learn more about this dataset at the UCI Machine Learning repository.

    Download the dataset for free and place it in your working directory with the filename sonar.all-data.csv.

    Tutorial

    This tutorial is broken down into 2 parts:

    1. Bootstrap Resample.
    2. Sonar Dataset Case Study.

    These steps provide the foundation that you need to implement and apply bootstrap aggregation with decision trees to your own predictive modeling problems.

    1. Bootstrap Resample

    Let’s start off by getting a solid understanding of how the bootstrap method works.

    We can create a new sample of a dataset by randomly selecting rows from the dataset and adding them to a new list. We can repeat this for a fixed number of rows or until the size of the new dataset matches a ratio of the size of the original dataset.

    We can allow sampling with replacement by not removing the row that was selected so that it is available for future selections.

    Below is a function named subsample() that implements this procedure. The randrange() function from the random module is used to select a random row index to add to the sample on each iteration of the loop. The default size of the sample is the size of the original dataset.
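
    A minimal sketch of this function, assuming the dataset is represented as a list of rows, might look like the following:

    from random import randrange

    # Create a random subsample from the dataset with replacement
    def subsample(dataset, ratio=1.0):
        sample = list()
        n_sample = round(len(dataset) * ratio)
        while len(sample) < n_sample:
            # the selected row is not removed, so it may be chosen again
            index = randrange(len(dataset))
            sample.append(dataset[index])
        return sample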

    We can use this function to estimate the mean of a contrived dataset.

    First, we can create a dataset with 20 rows and a single column of random numbers between 0 and 9 and calculate the mean value.

    We can then make bootstrap samples of the original dataset, calculate the mean, and repeat this process until we have a list of means. Taking the average of these sample means should give us a robust estimate of the mean of the entire dataset.

    The complete example is listed below.
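
    One way the complete example might be put together is sketched below; the mean() helper and the fixed random seed are illustrative choices, and subsample() is repeated so the listing stands on its own:

    from random import seed, randrange

    # Create a random subsample from the dataset with replacement
    def subsample(dataset, ratio=1.0):
        sample = list()
        n_sample = round(len(dataset) * ratio)
        while len(sample) < n_sample:
            index = randrange(len(dataset))
            sample.append(dataset[index])
        return sample

    # Calculate the mean of a list of numbers
    def mean(numbers):
        return sum(numbers) / float(len(numbers))

    seed(1)
    # contrived dataset: 20 rows, a single column of random numbers between 0 and 9
    dataset = [[randrange(10)] for i in range(20)]
    print('True Mean: %.3f' % mean([row[0] for row in dataset]))

    # estimate the mean from increasing numbers of small bootstrap samples
    ratio = 0.10
    for size in [1, 10, 100]:
        sample_means = list()
        for i in range(size):
            sample = subsample(dataset, ratio)
            sample_means.append(mean([row[0] for row in sample]))
        print('Samples=%d, Estimated Mean: %.3f' % (size, mean(sample_means)))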

    Each bootstrap sample is created as a 10% sample of the original 20-observation dataset (or 2 observations). We then experiment by creating 1, 10, and 100 bootstrap samples of the original dataset, calculating the mean of each, and then averaging all of those estimated mean values.

    Running the example prints the original mean value we aim to estimate.

    We can then see the estimated mean from the various different numbers of bootstrap samples. We can see that with 100 samples we achieve a good estimate of the mean.

    Instead of calculating the mean value, we can create a model from each subsample.

    Next, let’s see how we can combine the predictions from multiple bootstrap models.

    2. Sonar Dataset Case Study

    In this section, we will apply the bagging algorithm with decision trees to the Sonar dataset.

    The example assumes that a CSV copy of the dataset is in the current working directory with the file name sonar.all-data.csv.

    The dataset is first loaded, the string values are converted to numeric, and the output column is converted from strings to the integer values 0 and 1. This is achieved with the helper functions load_csv(), str_column_to_float() and str_column_to_int() to load and prepare the dataset.
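
    These helpers are not reproduced above, but a plausible sketch, assuming the dataset is held as a list of lists, is:

    from csv import reader

    # Load a CSV file into a list of rows
    def load_csv(filename):
        dataset = list()
        with open(filename, 'r') as file:
            for row in reader(file):
                if not row:
                    continue
                dataset.append(row)
        return dataset

    # Convert a string column to floating point values
    def str_column_to_float(dataset, column):
        for row in dataset:
            row[column] = float(row[column].strip())

    # Convert a string class column to integer values
    # (the specific integer assigned to each class does not affect accuracy)
    def str_column_to_int(dataset, column):
        class_values = [row[column] for row in dataset]
        lookup = {value: i for i, value in enumerate(sorted(set(class_values)))}
        for row in dataset:
            row[column] = lookup[row[column]]
        return lookup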

    We will use k-fold cross validation to estimate the performance of the learned model on unseen data. This means that we will construct and evaluate k models and estimate the performance as the mean model error. Classification accuracy will be used to evaluate each model. These behaviors are provided in the cross_validation_split(), accuracy_metric() and evaluate_algorithm() helper functions.
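
    A sketch of what these three helpers might look like follows; the exact signatures are assumptions chosen to be consistent with how the functions are used later:

    from random import randrange

    # Split a dataset into k folds
    def cross_validation_split(dataset, n_folds):
        dataset_split = list()
        dataset_copy = list(dataset)
        fold_size = int(len(dataset) / n_folds)
        for _ in range(n_folds):
            fold = list()
            while len(fold) < fold_size:
                index = randrange(len(dataset_copy))
                fold.append(dataset_copy.pop(index))
            dataset_split.append(fold)
        return dataset_split

    # Calculate classification accuracy as a percentage
    def accuracy_metric(actual, predicted):
        correct = sum(1 for a, p in zip(actual, predicted) if a == p)
        return correct / float(len(actual)) * 100.0

    # Evaluate an algorithm using k-fold cross-validation
    def evaluate_algorithm(dataset, algorithm, n_folds, *args):
        folds = cross_validation_split(dataset, n_folds)
        scores = list()
        for fold in folds:
            train_set = list(folds)
            train_set.remove(fold)
            train_set = sum(train_set, [])
            # copy test rows and hide their class values from the algorithm
            test_set = [list(row) for row in fold]
            for row in test_set:
                row[-1] = None
            predicted = algorithm(train_set, test_set, *args)
            actual = [row[-1] for row in fold]
            scores.append(accuracy_metric(actual, predicted))
        return scores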

    We will also use an implementation of the Classification and Regression Trees (CART) algorithm adapted for bagging. This includes the helper functions test_split() to split a dataset into groups, gini_index() to evaluate a split point, get_split() to find an optimal split point, to_terminal(), split() and build_tree() to create a single decision tree, predict() to make a prediction with a decision tree, and the subsample() function described in the previous step to make a subsample of the training dataset.
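
    The tree-building code is the largest part of the implementation. The sketch below outlines one plausible shape for these helpers, assuming each tree node is stored as a dictionary with 'index', 'value', 'left' and 'right' keys; because this is plain bagging rather than Random Forest, get_split() considers every input feature:

    # Split a dataset based on an attribute and an attribute value
    def test_split(index, value, dataset):
        left, right = list(), list()
        for row in dataset:
            if row[index] < value:
                left.append(row)
            else:
                right.append(row)
        return left, right

    # Calculate the Gini index for a split dataset
    def gini_index(groups, classes):
        n_instances = float(sum(len(group) for group in groups))
        gini = 0.0
        for group in groups:
            size = float(len(group))
            if size == 0:
                continue
            score = 0.0
            for class_val in classes:
                p = [row[-1] for row in group].count(class_val) / size
                score += p * p
            gini += (1.0 - score) * (size / n_instances)
        return gini

    # Select the best split point for a dataset (all features are considered)
    def get_split(dataset):
        class_values = list(set(row[-1] for row in dataset))
        b_index, b_value, b_score, b_groups = 999, 999, 999, None
        for index in range(len(dataset[0]) - 1):
            for row in dataset:
                groups = test_split(index, row[index], dataset)
                gini = gini_index(groups, class_values)
                if gini < b_score:
                    b_index, b_value, b_score, b_groups = index, row[index], gini, groups
        return {'index': b_index, 'value': b_value, 'groups': b_groups}

    # Create a terminal node value (the most common class in the group)
    def to_terminal(group):
        outcomes = [row[-1] for row in group]
        return max(set(outcomes), key=outcomes.count)

    # Create child splits for a node or make terminals
    def split(node, max_depth, min_size, depth):
        left, right = node['groups']
        del(node['groups'])
        if not left or not right:
            node['left'] = node['right'] = to_terminal(left + right)
            return
        if depth >= max_depth:
            node['left'], node['right'] = to_terminal(left), to_terminal(right)
            return
        if len(left) <= min_size:
            node['left'] = to_terminal(left)
        else:
            node['left'] = get_split(left)
            split(node['left'], max_depth, min_size, depth + 1)
        if len(right) <= min_size:
            node['right'] = to_terminal(right)
        else:
            node['right'] = get_split(right)
            split(node['right'], max_depth, min_size, depth + 1)

    # Build a decision tree from a training dataset
    def build_tree(train, max_depth, min_size):
        root = get_split(train)
        split(root, max_depth, min_size, 1)
        return root

    # Make a prediction with a single decision tree
    def predict(node, row):
        if row[node['index']] < node['value']:
            if isinstance(node['left'], dict):
                return predict(node['left'], row)
            return node['left']
        if isinstance(node['right'], dict):
            return predict(node['right'], row)
        return node['right']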

    A new function named bagging_predict() is developed that is responsible for making a prediction with each decision tree and combining the predictions into a single return value. This is achieved by selecting the most common prediction from the list of predictions made by the bagged trees.
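
    A compact version of bagging_predict(), assuming the predict() helper sketched above, could be:

    # Make a prediction with a list of bagged trees by majority vote
    def bagging_predict(trees, row):
        predictions = [predict(tree, row) for tree in trees]
        return max(set(predictions), key=predictions.count)

    The max(set(...), key=predictions.count) idiom simply returns the most frequent value in the list of predictions, which is the majority vote across the bagged trees.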

    Finally, a new function named bagging() is developed that is responsible for creating the samples of the training dataset, training a decision tree on each, then making predictions on the test dataset using the list of bagged trees.
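
    Putting the pieces together, bagging() might be sketched as follows, reusing subsample(), build_tree() and bagging_predict() from above:

    # Bootstrap Aggregation: train one tree per bootstrap sample, predict by vote
    def bagging(train, test, max_depth, min_size, sample_size, n_trees):
        trees = list()
        for _ in range(n_trees):
            sample = subsample(train, sample_size)
            tree = build_tree(sample, max_depth, min_size)
            trees.append(tree)
        return [bagging_predict(trees, row) for row in test]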

    The complete example is listed below.
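
    Rather than repeating every helper function in full, the sketch below shows how the pieces might be wired together into a test harness, assuming the functions defined above; the four tree counts used here are an illustrative choice:

    from random import seed

    # Test bagging on the Sonar dataset, assuming the helper functions above
    seed(1)
    dataset = load_csv('sonar.all-data.csv')
    # convert the 60 input columns to floats and the class column to integers
    for i in range(len(dataset[0]) - 1):
        str_column_to_float(dataset, i)
    str_column_to_int(dataset, len(dataset[0]) - 1)

    # evaluation configuration described in the tutorial
    n_folds = 5
    max_depth = 6
    min_size = 2
    sample_size = 0.50
    for n_trees in [1, 5, 10, 50]:  # illustrative series of tree counts
        scores = evaluate_algorithm(dataset, bagging, n_folds,
                                    max_depth, min_size, sample_size, n_trees)
        print('Trees: %d' % n_trees)
        print('Scores: %s' % scores)
        print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))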

    A k value of 5 was used for cross-validation, giving each fold 208/5 = 41.6 or just over 40 records to be evaluated upon each iteration.

    Deep trees were constructed with a maximum depth of 6 and a minimum number of training rows at each node of 2. Samples of the training dataset were created at 50% of the size of the original dataset. This was to force some variety into the dataset subsamples used to train each tree. The default for bagging is to have the size of the sample datasets match the size of the original training dataset.

    A series of 4 different numbers of trees was evaluated to show the behavior of the algorithm.

    The accuracy from each fold and the mean accuracy for each configuration are printed. We can see a trend of some minor lift in performance as the number of trees is increased.

    A difficulty of this method is that even though deep trees are constructed, the bagged trees that are created are very similar. In turn, the predictions made by these trees are also similar, and the diversity we desire among trees trained on different samples of the training dataset is diminished.

    This is because of the greedy algorithm used in the construction of the trees selecting the same or similar split points.

    The tutorial tried to re-inject this variance by constraining the sample size used to train each tree. A more robust technique is to constrain the features that may be evaluated when creating each split point. This is the method used in the Random Forest algorithm.

    Extensions

    • Tune the Example. Explore different configurations for the number of trees and even individual tree configurations to see if you can further improve results.
    • Bag Another Algorithm. Other algorithms can be used with bagging. For example, a k-nearest neighbor algorithm with a low value of k will have a high variance and is a good candidate for bagging.
    • Regression Problems. Bagging can be used with regression trees. Instead of predicting the most common class value from the set of predictions, you can return the average of the predictions from the bagged trees. Experiment on regression problems.

    Did you try any of these extensions?
    Share your experiences in the comments below.
