Saturday 13 April 2024

Spot-Check Regression Machine Learning Algorithms in Python with scikit-learn

Spot-checking is a way of discovering which algorithms perform well on your machine learning problem.

You cannot know which algorithms are best suited to your problem beforehand. You must trial a number of methods and focus attention on those that prove themselves the most promising.

In this post you will discover 7 machine learning algorithms that you can use when spot checking your regression problem in Python with scikit-learn.

Algorithms Overview

We are going to take a look at 7 regression algorithms that you can spot check on your dataset.

4 Linear Machine Learning Algorithms:

  1. Linear Regression
  2. Ridge Regression
  3. LASSO Linear Regression
  4. Elastic Net Regression

3 Nonlinear Machine Learning Algorithms:

  1. K-Nearest Neighbors
  2. Classification and Regression Trees
  3. Support Vector Machines

Each recipe is demonstrated on the Boston House Price dataset. This is a regression problem where all attributes are numeric (update: download the data from here).

Each recipe is complete and standalone. This means that you can copy and paste it into your own project and start using it immediately.

A test harness with 10-fold cross-validation is used to demonstrate how to spot check each machine learning algorithm, and mean squared error is used to indicate algorithm performance. Note that the mean squared error values are inverted (negative). This is a quirk of the cross_val_score() function, which requires that all metrics be maximized (a larger value is better), so error metrics are reported as negatives.

The recipes assume that you know about each machine learning algorithm and how to use them. We will not go into the API or parameterization of each algorithm.

Linear Machine Learning Algorithms

This section provides examples of how to use 4 different linear machine learning algorithms for regression in Python with scikit-learn.

1. Linear Regression

Linear regression assumes that the input variables have a Gaussian distribution. It is also assumed that input variables are relevant to the output variable and that they are not highly correlated with each other (a problem called collinearity).

You can construct a linear regression model using the LinearRegression class.
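
Here is a minimal sketch of what a complete recipe looks like under the test harness described above. The dataset URL is an assumption (a commonly used public mirror of housing.csv); point it at your own downloaded copy if needed.

# Cross-validate Linear Regression on the Boston House Price dataset.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Assumed dataset location; substitute a local path to housing.csv.
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
         'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = pd.read_csv(url, sep=r"\s+", names=names)
array = dataframe.values
X = array[:, 0:13]  # the 13 numeric input attributes
Y = array[:, 13]    # MEDV, the median house value to predict

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
model = LinearRegression()
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())  # negative MSE: values closer to zero are better

Only the import and the model line change in the recipes that follow; the data loading and the test harness stay the same.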

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

2. Ridge Regression

Ridge regression is an extension of linear regression where the loss function is modified to minimize the complexity of the model, measured as the sum of the squared coefficient values (also called the l2-norm).

You can construct a ridge regression model by using the Ridge class.
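
A minimal sketch, assuming the same dataset location as in the Linear Regression recipe:

# Cross-validate Ridge Regression (linear regression with an l2 penalty).
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
model = Ridge()  # alpha=1.0 by default sets the strength of the l2 penalty
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())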

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

3. LASSO Regression

The Least Absolute Shrinkage and Selection Operator (or LASSO for short) is a modification of linear regression where, like ridge regression, the loss function is modified to minimize the complexity of the model, here measured as the sum of the absolute coefficient values (also called the l1-norm).

You can construct a LASSO model by using the Lasso class.
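
A minimal sketch, again assuming the same dataset location:

# Cross-validate LASSO regression (linear regression with an l1 penalty).
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_score

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
model = Lasso()  # alpha=1.0 by default sets the strength of the l1 penalty
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())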

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

4. ElasticNet Regression

ElasticNet is a form of regularized regression that combines the properties of both ridge regression and LASSO regression. It seeks to minimize the complexity of the regression model (the magnitude and number of regression coefficients) by penalizing the model using both the l2-norm (sum of squared coefficient values) and the l1-norm (sum of absolute coefficient values).

You can construct an ElasticNet model using the ElasticNet class.
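
A minimal sketch under the same assumed dataset location:

# Cross-validate ElasticNet regression (combined l1 and l2 penalties).
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold, cross_val_score

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
model = ElasticNet()  # l1_ratio=0.5 by default mixes the two penalties equally
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())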

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

Nonlinear Machine Learning Algorithms

This section provides examples of how to use 3 different nonlinear machine learning algorithms for regression in Python with scikit-learn.

1. K-Nearest Neighbors

K-Nearest Neighbors (or KNN) locates the K most similar instances in the training dataset for a new data instance. From the K neighbors, a mean or median output variable is taken as the prediction. Of note is the distance metric used (the metric argument). The Minkowski distance is used by default, which is a generalization of both the Euclidean distance (used when all inputs have the same scale) and Manhattan distance (for when the scales of the input variables differ).

You can construct a KNN model for regression using the KNeighborsRegressor class.
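
A minimal sketch under the same assumed dataset location:

# Cross-validate K-Nearest Neighbors regression.
import pandas as pd
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
# n_neighbors=5 and metric='minkowski' by default; the prediction is the
# mean of the K neighbors' output values.
model = KNeighborsRegressor()
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())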

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

2. Classification and Regression Trees

Decision trees, or Classification and Regression Trees (CART) as they are known, use the training data to select the best points at which to split the data in order to minimize a cost metric. The default cost metric for regression decision trees is the mean squared error, specified via the criterion parameter.

You can create a CART model for regression using the DecisionTreeRegressor class.
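
A minimal sketch under the same assumed dataset location (the random_state is a choice added here for repeatable tree building):

# Cross-validate a CART decision tree for regression.
import pandas as pd
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
model = DecisionTreeRegressor(random_state=7)  # squared error is the default criterion
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())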

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

3. Support Vector Machines

Support Vector Machines (SVM) were developed for binary classification. The technique has been extended to the prediction of real-valued quantities, called Support Vector Regression (SVR). Like the classification example, SVR is built upon the LIBSVM library.

You can create an SVM model for regression using the SVR class.
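
A minimal sketch under the same assumed dataset location:

# Cross-validate Support Vector Regression.
import pandas as pd
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"  # assumed mirror
data = pd.read_csv(url, sep=r"\s+", header=None).values
X, Y = data[:, 0:13], data[:, 13]

kfold = KFold(n_splits=10, shuffle=True, random_state=7)
# kernel='rbf' by default; SVR is sensitive to input scale, so
# standardizing X often improves results.
model = SVR()
results = cross_val_score(model, X, Y, cv=kfold,
                          scoring='neg_mean_squared_error')
print(results.mean())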

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example provides an estimate of the mean squared error.

Summary

In this post you discovered machine learning recipes for regression in Python using scikit-learn.

Specifically, you learned about:

4 Linear Machine Learning Algorithms:

  • Linear Regression
  • Ridge Regression
  • LASSO Linear Regression
  • Elastic Net Regression

3 Nonlinear Machine Learning Algorithms:

  • K-Nearest Neighbors
  • Classification and Regression Trees
  • Support Vector Machines

Do you have any questions about regression machine learning algorithms or this post? Ask your questions in the comments and I will do my best to answer them.
