
Tuesday, 24 December 2024

How to Solve Linear Regression Using Linear Algebra

 Linear regression is a method for modeling the relationship between one or more independent variables and a dependent variable.

It is a staple of statistics and is often considered a good introductory machine learning method. It is also a method that can be reformulated using matrix notation and solved using matrix operations.

In this tutorial, you will discover the matrix formulation of linear regression and how to solve it using direct and matrix factorization methods.

After completing this tutorial, you will know:

  • Linear regression and the matrix reformulation with the normal equations.
  • How to solve linear regression using a QR matrix decomposition.
  • How to solve linear regression using SVD and the pseudoinverse.

    Tutorial Overview

    This tutorial is divided into 6 parts; they are:

    1. Linear Regression
    2. Matrix Formulation of Linear Regression
    3. Linear Regression Dataset
    4. Solve Directly
    5. Solve via QR Decomposition
    6. Solve via Singular-Value Decomposition

      Linear Regression

      Linear regression is a method for modeling the relationship between two scalar values: the input variable x and the output variable y.

      The model assumes that y is a linear function or a weighted sum of the input variable.
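
In the simplest form:

y = f(x)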

      Or, stated with the coefficients.
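
y = b_0 + b_1 \cdot x_1

where x_1 is the single input and b_0, b_1 are the coefficients.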

The model can also be used to model an output variable given multiple input variables; this is called multivariate linear regression (below, brackets were added for readability).
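
y = b_0 + (b_1 \cdot x_1) + (b_2 \cdot x_2) + \cdots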

The objective of creating a linear regression model is to find the values of the coefficients (b) that minimize the error in the prediction of the output variable y.

      Matrix Formulation of Linear Regression

Linear regression can be stated using matrix notation; for example:
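
y = X \cdot b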

      Or, without the dot notation.
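
y = Xb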

      Where X is the input data and each column is a data feature, b is a vector of coefficients and y is a vector of output variables for each row in X.

      Reformulated, the problem becomes a system of linear equations where the b vector values are unknown. This type of system is referred to as overdetermined because there are more equations than there are unknowns, i.e. each coefficient is used on each row of data.

It is a challenging problem to solve analytically because there are multiple inconsistent solutions, e.g. multiple possible values for the coefficients. Further, all solutions will have some error because there is no line that will pass exactly through all points; therefore, the approach to solving the equations must be able to handle that.

      The way this is typically achieved is by finding a solution where the values for b in the model minimize the squared error. This is called linear least squares.
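
In equation form, the goal is to find the coefficient vector b that minimizes the sum of squared errors:

\min_b \| X \cdot b - y \|^2 = \min_b \sum_{i=1}^{m} \left( \sum_{j=1}^{n} X_{ij} \cdot b_j - y_i \right)^2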

This formulation has a unique solution as long as the columns of the input matrix are linearly independent (e.g. uncorrelated).

      We cannot always get the error e = b – Ax down to zero. When e is zero, x is an exact solution to Ax = b. When the length of e is as small as possible, xhat is a least squares solution.

      — Page 219, Introduction to Linear Algebra, Fifth Edition, 2016.

      In matrix notation, this problem is formulated using the so-named normal equation:
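
X^T \cdot X \cdot b = X^T \cdot y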

      This can be re-arranged in order to specify the solution for b as:
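
b = (X^T \cdot X)^{-1} \cdot X^T \cdot y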

This can be solved directly, although the presence of the matrix inverse can make it numerically challenging or unstable.

      Linear Regression Dataset

      In order to explore the matrix formulation of linear regression, let’s first define a dataset as a context.

      We will use a simple 2D dataset where the data is easy to visualize as a scatter plot and models are easy to visualize as a line that attempts to fit the data points.

      The example below defines a 5×2 matrix dataset, splits it into X and y components, and plots the dataset as a scatter plot.
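
A minimal sketch of such a script is shown below; the specific data values are illustrative, chosen only so that a straight line cannot fit them exactly.

# linear regression dataset
from numpy import array
from matplotlib import pyplot

# illustrative 5x2 dataset: first column is the input x, second column is the output y
data = array([
    [0.05, 0.12],
    [0.18, 0.22],
    [0.31, 0.35],
    [0.42, 0.38],
    [0.50, 0.49]])
print(data)

# split into input (X) and output (y) components
X, y = data[:, 0], data[:, 1]
X = X.reshape((len(X), 1))

# scatter plot of the dataset
pyplot.scatter(X, y)
pyplot.show()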

      Running the example first prints the defined dataset.

      A scatter plot of the dataset is then created showing that a straight line cannot fit this data exactly.

Scatter Plot of Linear Regression Dataset

      Solve Directly

      The first approach is to attempt to solve the regression problem directly.

That is, given X, what is the set of coefficients b that, when multiplied by X, gives y? As we saw in a previous section, the normal equations define how to calculate b directly.

      This can be calculated directly in NumPy using the inv() function for calculating the matrix inverse.
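
For example, assuming X and y as defined in the previous section:

# normal-equation solution using inv() from numpy.linalg
from numpy.linalg import inv
b = inv(X.T.dot(X)).dot(X.T).dot(y)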

      Once the coefficients are calculated, we can use them to predict outcomes given X.
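
# predict the outputs for the training inputs
yhat = X.dot(b)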

      Putting this together with the dataset defined in the previous section, the complete example is listed below.
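
A minimal sketch of such a script (using the same illustrative data values as above):

# direct solution to linear regression via the normal equations
from numpy import array
from numpy.linalg import inv
from matplotlib import pyplot

# illustrative dataset: first column is x, second column is y
data = array([
    [0.05, 0.12],
    [0.18, 0.22],
    [0.31, 0.35],
    [0.42, 0.38],
    [0.50, 0.49]])
X, y = data[:, 0], data[:, 1]
X = X.reshape((len(X), 1))

# solve b = (X^T X)^-1 X^T y
b = inv(X.T.dot(X)).dot(X.T).dot(y)
print(b)

# predict and plot the fitted line over the data
yhat = X.dot(b)
pyplot.scatter(X, y)
pyplot.plot(X, yhat, color='red')
pyplot.show()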

      Running the example performs the calculation and prints the coefficient vector b.

      A scatter plot of the dataset is then created with a line plot for the model, showing a reasonable fit to the data.

Scatter Plot of Direct Solution to the Linear Regression Problem

A problem with this approach is the matrix inverse, which is both computationally expensive and numerically unstable. An alternative approach is to use a matrix decomposition to avoid this operation. We will look at two examples in the following sections.

      Solve via QR Decomposition

      The QR decomposition is an approach of breaking a matrix down into its constituent elements.
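
For a matrix A, the decomposition takes the form:

A = Q \cdot R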

Where A is the matrix that we wish to decompose, Q is an m x m matrix, and R is an upper triangular matrix of size m x n.

      The QR decomposition is a popular approach for solving the linear least squares equation.

      Stepping over all of the derivation, the coefficients can be found using the Q and R elements as follows:
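
b = R^{-1} \cdot Q^T \cdot y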

      The approach still involves a matrix inversion, but in this case only on the simpler R matrix.

      The QR decomposition can be found using the qr() function in NumPy. The calculation of the coefficients in NumPy looks as follows:
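
# assuming X and y as defined earlier
from numpy.linalg import inv, qr

# qr() returns the reduced factorization by default, so R is square and invertible
Q, R = qr(X)
b = inv(R).dot(Q.T).dot(y)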

      Tying this together with the dataset, the complete example is listed below.
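
A minimal sketch of such a script (using the same illustrative data values as before):

# least squares via QR decomposition
from numpy import array
from numpy.linalg import inv, qr
from matplotlib import pyplot

# illustrative dataset: first column is x, second column is y
data = array([
    [0.05, 0.12],
    [0.18, 0.22],
    [0.31, 0.35],
    [0.42, 0.38],
    [0.50, 0.49]])
X, y = data[:, 0], data[:, 1]
X = X.reshape((len(X), 1))

# factor X = QR (reduced form), then compute b = R^-1 Q^T y
Q, R = qr(X)
b = inv(R).dot(Q.T).dot(y)
print(b)

# plot the data and the model predictions
yhat = X.dot(b)
pyplot.scatter(X, y)
pyplot.plot(X, yhat, color='red')
pyplot.show()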

      Running the example first prints the coefficient solution and plots the data with the model.

      The QR decomposition approach is more computationally efficient and more numerically stable than calculating the normal equation directly, but does not work for all data matrices.

Scatter Plot of QR Decomposition Solution to the Linear Regression Problem

      Solve via Singular-Value Decomposition

      The Singular-Value Decomposition, or SVD for short, is a matrix decomposition method like the QR decomposition.
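
For a matrix A, the decomposition takes the form:

A = U \cdot \Sigma \cdot V^*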

Where A is the real m x n matrix that we wish to decompose, U is an m x m matrix, Sigma is an m x n diagonal matrix, and V^* is the conjugate transpose of an n x n matrix V (the * is a superscript).

Unlike the QR decomposition, all matrices have a singular-value decomposition. As a basis for solving the system of linear equations for linear regression, SVD is more stable and is the preferred approach.

      Once decomposed, the coefficients can be found by calculating the pseudoinverse of the input matrix X and multiplying that by the output vector y.
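
b = X^+ \cdot y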

Where the pseudoinverse is calculated as follows:
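
X^+ = V \cdot D^+ \cdot U^T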

Where X^+ is the pseudoinverse of X and the + is a superscript, D^+ is the pseudoinverse of the diagonal matrix Sigma, U^T is the transpose of U, and V is the transpose of V^*.

      Matrix inversion is not defined for matrices that are not square. […] When A has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions.

      — Page 46, Deep Learning, 2016.

      We can get U and V from the SVD operation. D^+ can be calculated by creating a diagonal matrix from Sigma and calculating the reciprocal of each non-zero element in Sigma.
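
That is, each diagonal entry of D^+ is:

D^+_{ii} = 1 / \Sigma_{ii} \quad \text{if } \Sigma_{ii} \neq 0, \text{ and } 0 \text{ otherwise}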

We could calculate the SVD and then the pseudoinverse manually. Instead, NumPy provides the pinv() function that we can use directly.

      The complete example is listed below.
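
A minimal sketch of such a script (again with the illustrative data values):

# least squares via the pseudoinverse
from numpy import array
from numpy.linalg import pinv
from matplotlib import pyplot

# illustrative dataset: first column is x, second column is y
data = array([
    [0.05, 0.12],
    [0.18, 0.22],
    [0.31, 0.35],
    [0.42, 0.38],
    [0.50, 0.49]])
X, y = data[:, 0], data[:, 1]
X = X.reshape((len(X), 1))

# b = X^+ y, where pinv() computes the pseudoinverse via the SVD
b = pinv(X).dot(y)
print(b)

# plot the data and the model predictions
yhat = X.dot(b)
pyplot.scatter(X, y)
pyplot.plot(X, yhat, color='red')
pyplot.show()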

      Running the example prints the coefficient and plots the data with a red line showing the predictions from the model.

      In fact, NumPy provides a function to replace these two steps in the lstsq() function that you can use directly.
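
For example, a sketch of that call (assuming X and y as defined above):

from numpy.linalg import lstsq

# lstsq() solves the least squares problem directly; rcond=None selects the current
# default cutoff for small singular values
b, residuals, rank, singular_values = lstsq(X, y, rcond=None)
print(b)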

Scatter Plot of SVD Solution to the Linear Regression Problem

      Extensions

      This section lists some ideas for extending the tutorial that you may wish to explore.

• Implement linear regression using the built-in lstsq() NumPy function.
      • Test each linear regression on your own small contrived dataset.
      • Load a tabular dataset and test each linear regression method and compare the results.

      If you explore any of these extensions, I’d love to know.

      Further Reading

      This section provides more resources on the topic if you are looking to go deeper.

Books

• Introduction to Linear Algebra, Fifth Edition, Gilbert Strang, 2016.
• Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016.

      Summary

      In this tutorial, you discovered the matrix formulation of linear regression and how to solve it using direct and matrix factorization methods.

      Specifically, you learned:

      • Linear regression and the matrix reformulation with the normal equations.
      • How to solve linear regression using a QR matrix decomposition.
      • How to solve linear regression using SVD and the pseudoinverse.

      Do you have any questions?
      Ask your questions in the comments below and I will do my best to answer.
