Friday, 10 May 2024

Data Preparation for Gradient Boosting with XGBoost in Python

 XGBoost is a popular implementation of Gradient Boosting because of its speed and performance.

Internally, XGBoost represents every problem as a regression predictive modeling problem that only takes numerical values as input. If your data is in a different form, it must be prepared into this expected format.

In this post, you will discover how to prepare your data for use with gradient boosting and the XGBoost library in Python.

After reading this post you will know:

  • How to encode string output variables for classification.
  • How to prepare categorical input variables using one hot encoding.
  • How to automatically handle missing data with XGBoost.

    Label Encode String Class Values

    The iris flowers classification problem is an example of a problem that has a string class value.

    This is a prediction problem where given measurements of iris flowers in centimeters, the task is to predict to which species a given flower belongs.

    Download the dataset and place it in your current working directory with the filename “iris.csv”.

    Below is a sample of the raw dataset.
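
    The exact rows depend on the copy you download, but the first few lines of the standard UCI file look like this:

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa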

    XGBoost cannot model this problem as-is because it requires that the output variables be numeric.

    We can easily convert the string values to integer values using the LabelEncoder. The three class values (Iris-setosa, Iris-versicolor, Iris-virginica) are mapped to the integer values (0, 1, 2).

    We save the label encoder as a separate object so that we can transform both the training and later the test and validation datasets using the same encoding scheme.
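
    A minimal sketch of this step, assuming the string class values have been loaded into an array Y:

from sklearn.preprocessing import LabelEncoder

# fit the encoder once on the training labels and keep it to reuse on later data
label_encoder = LabelEncoder()
label_encoder = label_encoder.fit(Y)
label_encoded_y = label_encoder.transform(Y)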

    Below is a complete example demonstrating how to load the iris dataset. Notice that Pandas is used to load the data in order to handle the string class values.
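
    The listing below is a minimal sketch of such an example. It assumes iris.csv has no header row, the four measurements in the first four columns and the class string in the last column.

# multiclass classification with the iris dataset and XGBoost
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score

# load the data with Pandas so the string class values are preserved
data = read_csv('iris.csv', header=None)
dataset = data.values
# split into input (X) and output (Y) columns and ensure the inputs are numeric
X = dataset[:, 0:4].astype('float32')
Y = dataset[:, 4]
# encode the string class values as integers
label_encoder = LabelEncoder()
label_encoder = label_encoder.fit(Y)
label_encoded_y = label_encoder.transform(Y)
# split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, label_encoded_y, test_size=0.33, random_state=7)
# fit the model on the training data
model = XGBClassifier()
model.fit(X_train, y_train)
# depending on the library version, the printed configuration may show the objective
# selected automatically for the multiclass problem
print(model)
# make predictions for the test data and evaluate them
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))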

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running the example prints the model configuration followed by the classification accuracy on the test set.

    Notice how the XGBoost model is configured to automatically model the multiclass classification problem using the multi:softprob objective, a variation on the softmax loss function that models class probabilities. This suggests that, internally, the output class is converted into a one hot type encoding automatically.

    One Hot Encode Categorical Data

    Some datasets only contain categorical data, for example the breast cancer dataset.

    This dataset describes the technical details of breast cancer biopsies, and the task is to predict whether or not the patient has a recurrence of cancer.

    Download the dataset and place it in your current working directory with the filename “breast-cancer.csv”.

    Below is a sample of the raw dataset.

    We can see that all 9 input variables are categorical and described in string format. The problem is a binary classification prediction problem and the output class values are also described in string format.

    We can reuse the same approach from the previous section and convert the string class values to integer values to model the prediction using the LabelEncoder. For example:
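
    A minimal sketch, assuming the class strings have been split out into an array Y:

from sklearn.preprocessing import LabelEncoder

# encode 'no-recurrence-events' / 'recurrence-events' as the integers 0 / 1
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)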

    We can use this same approach on each input feature in X, but this is only a starting point.

    XGBoost may assume that the encoded integer values for each input variable have an ordinal relationship. For example, ‘left-up’ encoded as 0 and ‘left-low’ encoded as 1 for the breast-quad variable would be treated as though they have a meaningful ordering as integers. In this case, this assumption is untrue.

    Instead, we must map these integer values onto new binary variables, one new variable for each categorical value.

    For example, the breast-quad variable has the values:
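
left-up
left-low
right-up
right-low
central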

    We can model this as 5 binary variables as follows:
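
    One possible mapping (the column order is arbitrary) is:

left-up    -> 1, 0, 0, 0, 0
left-low   -> 0, 1, 0, 0, 0
right-up   -> 0, 0, 1, 0, 0
right-low  -> 0, 0, 0, 1, 0
central    -> 0, 0, 0, 0, 1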

    This is called one hot encoding. We can one hot encode all of the categorical input variables using the OneHotEncoder class in scikit-learn.

    We can one hot encode each feature after we have label encoded it. First, we must transform the feature array into a 2-dimensional NumPy array where each integer value becomes a feature vector of length 1.

    We can then create the OneHotEncoder and encode the feature array.
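
    A minimal sketch for a single column, assuming X holds the string input columns as a NumPy array:

from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# label encode the first string column of X into integers
label_encoder = LabelEncoder()
feature = label_encoder.fit_transform(X[:, 0])
# reshape into a column vector: one row per sample, one integer feature
feature = feature.reshape(X.shape[0], 1)
# one hot encode the integer column; the default output is sparse, so convert to a dense array
onehot_encoder = OneHotEncoder()
feature = onehot_encoder.fit_transform(feature).toarray()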

    Finally, we can build up the input dataset by concatenating the one hot encoded features, one by one, adding them on as new columns (axis=1). We end up with an input vector comprised of 43 binary input variables.

    Ideally, we might experiment with not one hot encoding some of the input attributes, as we could encode them with an explicit ordinal relationship instead, for example the first column age with values like ’40-49′ and ’50-59′. This is left as an exercise, if you are interested in extending this example.

    Below is the complete example with label and one hot encoded input variables and label encoded output variable.
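
    The sketch below follows that recipe. It assumes breast-cancer.csv has no header row, the nine categorical inputs in the first nine columns, and the class string in the last column.

# binary classification with the breast cancer dataset, label and one hot encoded inputs
import numpy
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.metrics import accuracy_score

# load the data with Pandas so the string values are preserved
data = read_csv('breast-cancer.csv', header=None)
dataset = data.values
# split into input (X) and output (Y) columns, treating every input value as a string
X = dataset[:, 0:9].astype(str)
Y = dataset[:, 9]
# label encode then one hot encode each input feature, building up the encoded input matrix
encoded_x = None
for i in range(X.shape[1]):
    label_encoder = LabelEncoder()
    feature = label_encoder.fit_transform(X[:, i])
    feature = feature.reshape(X.shape[0], 1)
    onehot_encoder = OneHotEncoder()
    feature = onehot_encoder.fit_transform(feature).toarray()
    if encoded_x is None:
        encoded_x = feature
    else:
        encoded_x = numpy.concatenate((encoded_x, feature), axis=1)
print("X shape: ", encoded_x.shape)
# label encode the output class values
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(encoded_x, label_encoded_y, test_size=0.33, random_state=7)
# fit the model and evaluate predictions on the test set
model = XGBClassifier()
model.fit(X_train, y_train)
print(model)
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))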

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example prints the shape of the encoded input data, the model configuration, and the classification accuracy on the test set.

    Again we can see that the XGBoost framework chose the ‘binary:logistic’ objective automatically, the right objective for this binary classification problem.

    Support for Missing Data

    XGBoost can automatically learn how to best handle missing data.

    In fact, XGBoost was designed to work with sparse data, like the one hot encoded data from the previous section, and missing data is handled the same way that sparse or zero values are handled, by minimizing the loss function.

    For more information on the technical details for how missing values are handled in XGBoost, see Section 3.4 “Sparsity-aware Split Finding” in the paper XGBoost: A Scalable Tree Boosting System.

    The Horse Colic dataset is a good example to demonstrate this capability as it contains a large percentage of missing data, approximately 30%.

    Download the dataset and place it in your current working directory with the filename “horse-colic.csv”.

    Below is a sample of the raw dataset.

    The values are separated by whitespace and we can easily load it using the Pandas function read_csv.

    Once loaded, we can see that the missing data is marked with a question mark character (‘?’). We can change these missing values to the sparse value expected by XGBoost which is the value zero (0).

    Because the missing data was marked as strings, those columns with missing data were all loaded as string data types. We can now convert the entire set of input data to numerical values.
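
    A sketch of those steps is shown below. The column indices are assumptions about the file layout: the first 27 columns are taken as inputs and the last column as the 1/2 class label described next; adjust them if your copy of the file differs.

from pandas import read_csv

# load the whitespace-separated file; keep the values as loaded for now
dataframe = read_csv('horse-colic.csv', header=None, sep=r'\s+')
dataset = dataframe.values
# assumed layout: first 27 columns are inputs, last column is the class label
X = dataset[:, 0:27]
Y = dataset[:, 27]
# replace the '?' missing-value marker with the sparse value 0 expected by XGBoost
X[X == '?'] = 0
# the '?' markers forced those columns to load as strings, so convert everything to numeric
X = X.astype('float32')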

    Finally, this is a binary classification problem although the class values are marked with the integers 1 and 2. We model binary classification problems in XGBoost as logistic 0 and 1 values. We can easily convert the Y dataset to 0 and 1 integers using the LabelEncoder, as we did in the iris flowers example.

    The full code listing is provided below for completeness.
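
    This is a minimal sketch of that listing, under the same assumptions about the column layout as above.

# binary classification with the horse colic dataset, missing values marked with 0
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score

# load the whitespace-separated data
dataframe = read_csv('horse-colic.csv', header=None, sep=r'\s+')
dataset = dataframe.values
# split into input and output columns (assumed layout as above)
X = dataset[:, 0:27]
Y = dataset[:, 27]
# mark missing values with the sparse value 0 and convert the inputs to numeric
X[X == '?'] = 0
X = X.astype('float32')
# encode the 1/2 class values as 0/1
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, label_encoded_y, test_size=0.33, random_state=7)
# fit the model and evaluate predictions on the test set
model = XGBClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))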

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example trains the model and prints the classification accuracy on the test set.

    We can tease out the effect of XGBoost’s automatic handling of missing values, by marking the missing values with a non-zero value, such as 1.
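
    In the listing above, that only requires changing the line that marks the missing values, for example:

# mark missing values with a non-zero value instead of the sparse value 0
X[X == '?'] = 1
X = X.astype('float32')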

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Re-running the example demonstrates a drop in accuracy for the model.

    We can also impute the missing data with a specific value.

    It is common to use a mean or a median for the column. We can easily impute the missing data using the scikit-learn SimpleImputer class.

    Below is the full example with missing data imputed with the mean value from each column.
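
    This is a sketch of the imputation variant, under the same layout assumptions. The missing values are first marked with NaN so that SimpleImputer can find and replace them with the column mean.

# binary classification with the horse colic dataset, missing values imputed with the column mean
import numpy
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score

# load the whitespace-separated data
dataframe = read_csv('horse-colic.csv', header=None, sep=r'\s+')
dataset = dataframe.values
X = dataset[:, 0:27]
Y = dataset[:, 27]
# mark missing values with NaN so the imputer can find them, then convert to numeric
X[X == '?'] = numpy.nan
X = X.astype('float32')
# impute each missing value with the mean of its column
imputer = SimpleImputer(missing_values=numpy.nan, strategy='mean')
imputed_x = imputer.fit_transform(X)
# encode the 1/2 class values as 0/1
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split, fit and evaluate as before
X_train, X_test, y_train, y_test = train_test_split(imputed_x, label_encoded_y, test_size=0.33, random_state=7)
model = XGBClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))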

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example, we see results equivalent to fixing the value to one (1). This suggests that, at least in this case, we are better off marking the missing values with the distinct value of zero (0) rather than a valid value (1) or an imputed value.

    It is a good idea to try both approaches (automatic handling and imputing) on your own data when you have missing values.

    Summary

    In this post you discovered how you can prepare your machine learning data for gradient boosting with XGBoost in Python.

    Specifically, you learned:

    • How to prepare string class values for classification using label encoding.
    • How to prepare categorical input variables using a one hot encoding to model them as binary variables.
    • How XGBoost automatically handles missing data and how you can mark and impute missing values.

    Do you have any questions?
    Ask your questions in the comments and I will do my best to answer.
