
Tuesday 9 July 2024

How To Implement Logistic Regression From Scratch in Python

Logistic regression is the go-to linear classification algorithm for two-class problems.

It is easy to implement, easy to understand and gets great results on a wide variety of problems, even when the assumptions the method makes about your data are violated.

In this tutorial, you will discover how to implement logistic regression with stochastic gradient descent from scratch with Python.

After completing this tutorial, you will know:

  • How to make predictions with a logistic regression model.
  • How to estimate coefficients using stochastic gradient descent.
  • How to apply logistic regression to a real prediction problem.

    Description

    This section will give a brief description of the logistic regression technique, stochastic gradient descent and the Pima Indians diabetes dataset we will use in this tutorial.

    Logistic Regression

    Logistic regression is named for the function used at the core of the method, the logistic function.

    Logistic regression uses an equation as the representation, very much like linear regression. Input values (X) are combined linearly using weights or coefficient values to predict an output value (y).

    A key difference from linear regression is that the output value being modeled is a binary value (0 or 1) rather than a numeric value.

    For a single input value (x1), the logistic regression equation is:

    yhat = e^(b0 + b1 * x1) / (1 + e^(b0 + b1 * x1))

    This can be simplified as:

    yhat = 1.0 / (1.0 + e^(-(b0 + b1 * x1)))

    Where e is the base of the natural logarithms (Euler’s number), yhat is the predicted output, b0 is the bias or intercept term and b1 is the coefficient for the single input value (x1).

    The yhat prediction is a real value between 0 and 1 that needs to be rounded to an integer value and mapped to a predicted class value.

    Each column in your input data has an associated b coefficient (a constant real value) that must be learned from your training data. The actual representation of the model that you would store in memory or in a file is the set of coefficients in the equation (the beta values or b's).

    The coefficients of the logistic regression algorithm must be estimated from your training data.

    Stochastic Gradient Descent

    Gradient Descent is the process of minimizing a function by following the gradients of the cost function.

    This involves knowing the form of the cost as well as the derivative so that from a given point you know the gradient and can move in that direction, e.g. downhill towards the minimum value.

    In machine learning, we can use a technique called stochastic gradient descent, which evaluates and updates the coefficients on every iteration, to minimize the error of a model on our training data.

    The way this optimization algorithm works is that each training instance is shown to the model one at a time. The model makes a prediction for a training instance, the error is calculated and the model is updated in order to reduce the error for the next prediction.

    This procedure can be used to find the set of coefficients in a model that result in the smallest error for the model on the training data. Each iteration, the coefficients (b, in machine learning parlance) are updated using the equation:

    b = b + learning_rate * (y - yhat) * yhat * (1 - yhat) * x

    Where b is the coefficient or weight being optimized, learning_rate is a learning rate that you must configure (e.g. 0.01), (y - yhat) is the prediction error for the model on the training data attributed to the weight, yhat is the prediction made by the coefficients and x is the input value. The yhat * (1 - yhat) term is the derivative of the logistic function; it scales the update by how uncertain the current prediction is.

    Pima Indians Diabetes Dataset

    The Pima Indians dataset involves predicting the onset of diabetes within 5 years in Pima Indians given basic medical details.

    It is a binary classification problem, where the prediction is either 0 (no diabetes) or 1 (diabetes).

    It contains 768 rows and 9 columns. All of the values in the file are numeric and can be treated as floating point values. Below is a small sample of the first few rows of the problem.

    6,148,72,35,0,33.6,0.627,50,1
    1,85,66,29,0,26.6,0.351,31,0
    8,183,64,0,0,23.3,0.672,32,1
    1,89,66,23,94,28.1,0.167,45,0
    0,137,40,35,168,43.1,2.288,21,1
    ...

    Predicting the majority class (Zero Rule Algorithm), the baseline performance on this problem is 65.098% classification accuracy.

    Download the dataset and save it to your current working directory with the filename pima-indians-diabetes.csv.

    Tutorial

    This tutorial is broken down into 3 parts.

    1. Making Predictions.
    2. Estimating Coefficients.
    3. Diabetes Prediction.

    This will provide the foundation you need to implement and apply logistic regression with stochastic gradient descent on your own predictive modeling problems.

    1. Making Predictions

    The first step is to develop a function that can make predictions.

    This will be needed both in the evaluation of candidate coefficient values in stochastic gradient descent and after the model is finalized and we wish to start making predictions on test data or new data.

    Below is a function named predict() that predicts an output value for a row given a set of coefficients.

    The first coefficient in the list is always the intercept, also called the bias or b0, as it is standalone and not responsible for a specific input value.
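
    A minimal version of predict() might look like this:

    from math import exp

    # Make a prediction for a row using a list of coefficients.
    # The row's last column is assumed to hold the class label.
    def predict(row, coefficients):
        # Start with the intercept (bias) term b0
        yhat = coefficients[0]
        # Add the weighted contribution of each input value
        for i in range(len(row) - 1):
            yhat += coefficients[i + 1] * row[i]
        # Squash the linear combination through the logistic function
        return 1.0 / (1.0 + exp(-yhat))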

    We can contrive a small dataset to test our predict() function.

    Below is a plot of the dataset using different colors to show the different classes for each point.

    Small Contrived Classification Dataset

    We can also use previously prepared coefficients to make predictions for this dataset.

    Putting this all together we can test our predict() function below.
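
    Here is one way that test might look; the dataset and the hand-picked coefficients are illustrative values:

    # Contrived dataset: each row is [X1, X2, Y] with class label Y
    dataset = [[2.7810836, 2.550537003, 0],
               [1.465489372, 2.362125076, 0],
               [3.396561688, 4.400293529, 0],
               [1.38807019, 1.850220317, 0],
               [3.06407232, 3.005305973, 0],
               [7.627531214, 2.759262235, 1],
               [5.332441248, 2.088626775, 1],
               [6.922596716, 1.77106367, 1],
               [8.675418651, -0.242068655, 1],
               [7.673756466, 3.508563011, 1]]

    # Illustrative hand-picked coefficients [b0, b1, b2]
    coef = [-0.406605464, 0.852573316, -1.104746259]

    for row in dataset:
        yhat = predict(row, coef)
        print("Expected=%.3f, Predicted=%.3f [%d]" % (row[-1], yhat, round(yhat)))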

    There are two input values (X1 and X2) and three coefficient values (b0, b1 and b2). The prediction equation we have modeled for this problem is:

    yhat = 1.0 / (1.0 + e^(-(b0 + b1 * X1 + b2 * X2)))

    Substituting the hand-picked coefficient values gives the concrete equation evaluated for each row.

    Running this function, we get predictions that are reasonably close to the expected output (y) values and, when rounded, make correct predictions of the class.

    Now we are ready to implement stochastic gradient descent to optimize our coefficient values.

    2. Estimating Coefficients

    We can estimate the coefficient values for our training data using stochastic gradient descent.

    Stochastic gradient descent requires two parameters:

    • Learning Rate: Used to limit the amount each coefficient is corrected each time it is updated.
    • Epochs: The number of times to run through the training data while updating the coefficients.

    These, along with the training data, will be the arguments to the function.

    There are 3 loops we need to perform in the function:

    1. Loop over each epoch.
    2. Loop over each row in the training data for an epoch.
    3. Loop over each coefficient and update it for a row in an epoch.

    As you can see, we update each coefficient for each row in the training data, each epoch.

    Coefficients are updated based on the error the model made. The error is calculated as the difference between the expected output value and the prediction made with the candidate coefficients.

    There is one coefficient to weight each input attribute, and these are updated in a consistent way, for example:

    b1 = b1 + learning_rate * (y - yhat) * yhat * (1 - yhat) * x1

    The special coefficient at the beginning of the list, also called the intercept, is updated in a similar way, except without an input as it is not associated with a specific input value:

    b0 = b0 + learning_rate * (y - yhat) * yhat * (1 - yhat)

    Now we can put all of this together. Below is a function named coefficients_sgd() that calculates coefficient values for a training dataset using stochastic gradient descent.

    You can also see that we keep track of the sum of the squared error (a positive value) each epoch so that we can print out a helpful message in each outer loop.
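
    A sketch of this function, building on the predict() function above:

    # Estimate logistic regression coefficients using stochastic gradient descent
    def coefficients_sgd(train, l_rate, n_epoch):
        # One coefficient per input column, plus the intercept at index 0
        coef = [0.0 for _ in range(len(train[0]))]
        for epoch in range(n_epoch):
            sum_error = 0.0
            for row in train:
                yhat = predict(row, coef)
                error = row[-1] - yhat
                sum_error += error ** 2
                # Update the intercept, which has no associated input value
                coef[0] = coef[0] + l_rate * error * yhat * (1.0 - yhat)
                # Update one coefficient per input value
                for i in range(len(row) - 1):
                    coef[i + 1] = coef[i + 1] + l_rate * error * yhat * (1.0 - yhat) * row[i]
            print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))
        return coef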

    We can test this function on the same small contrived dataset from above.

    We use a larger learning rate of 0.3 and train the model for 100 epochs, or 100 exposures of the coefficients to the entire training dataset.
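
    The test might be sketched like this, reusing the contrived dataset defined above:

    l_rate = 0.3
    n_epoch = 100
    coef = coefficients_sgd(dataset, l_rate, n_epoch)
    print(coef)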

    Running the example prints a message each epoch with the sum squared error for that epoch and the final set of coefficients.

    You can see how error continues to drop even in the final epoch. We could probably train for a lot longer (more epochs) or increase the amount we update the coefficients each epoch (higher learning rate).

    Experiment and see what you come up with.

    Now, let’s apply this algorithm on a real dataset.

    3. Diabetes Prediction

    In this section, we will train a logistic regression model using stochastic gradient descent on the diabetes dataset.

    The example assumes that a CSV copy of the dataset is in the current working directory with the filename pima-indians-diabetes.csv.

    The dataset is first loaded, the string values are converted to numeric, and each column is normalized to values in the range of 0 to 1. This is achieved with the helper functions load_csv() and str_column_to_float() to load and prepare the dataset, and dataset_minmax() and normalize_dataset() to normalize it.
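
    Minimal sketches of these helpers, matching the function names used above (error handling is omitted):

    from csv import reader

    # Load a CSV file into a list of rows
    def load_csv(filename):
        dataset = list()
        with open(filename, 'r') as file:
            for row in reader(file):
                if not row:
                    continue
                dataset.append(row)
        return dataset

    # Convert the values in a string column to floats
    def str_column_to_float(dataset, column):
        for row in dataset:
            row[column] = float(row[column])

    # Find the min and max value of each column
    def dataset_minmax(dataset):
        minmax = list()
        for i in range(len(dataset[0])):
            col_values = [row[i] for row in dataset]
            minmax.append([min(col_values), max(col_values)])
        return minmax

    # Rescale the columns of the dataset to the range 0-1
    def normalize_dataset(dataset, minmax):
        for row in dataset:
            for i in range(len(row)):
                row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])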

    We will use k-fold cross validation to estimate the performance of the learned model on unseen data. This means that we will construct and evaluate k models and estimate the performance as the mean model performance. Classification accuracy will be used to evaluate each model. These behaviors are provided in the cross_validation_split(), accuracy_metric() and evaluate_algorithm() helper functions.
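
    These helpers might be sketched as follows; note that cross_validation_split() assigns rows to folds at random, so results will vary from run to run:

    from random import randrange

    # Split a dataset into k folds
    def cross_validation_split(dataset, n_folds):
        dataset_split = list()
        dataset_copy = list(dataset)
        fold_size = int(len(dataset) / n_folds)
        for _ in range(n_folds):
            fold = list()
            while len(fold) < fold_size:
                index = randrange(len(dataset_copy))
                fold.append(dataset_copy.pop(index))
            dataset_split.append(fold)
        return dataset_split

    # Calculate accuracy as a percentage
    def accuracy_metric(actual, predicted):
        correct = 0
        for i in range(len(actual)):
            if actual[i] == predicted[i]:
                correct += 1
        return correct / float(len(actual)) * 100.0

    # Evaluate an algorithm using a cross validation split
    def evaluate_algorithm(dataset, algorithm, n_folds, *args):
        folds = cross_validation_split(dataset, n_folds)
        scores = list()
        for fold in folds:
            # Train on all folds except the held-out one
            train_set = list(folds)
            train_set.remove(fold)
            train_set = sum(train_set, [])
            # Hide the class labels from the model at prediction time
            test_set = list()
            for row in fold:
                row_copy = list(row)
                row_copy[-1] = None
                test_set.append(row_copy)
            predicted = algorithm(train_set, test_set, *args)
            actual = [row[-1] for row in fold]
            scores.append(accuracy_metric(actual, predicted))
        return scores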

    We will use the predict() and coefficients_sgd() functions created above, along with a new logistic_regression() function, to train the model.

    Below is the complete example.
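
    A sketch of the remaining pieces, assuming all of the helper functions above are defined in the same file:

    from random import seed

    # Logistic regression algorithm with stochastic gradient descent
    def logistic_regression(train, test, l_rate, n_epoch):
        predictions = list()
        coef = coefficients_sgd(train, l_rate, n_epoch)
        for row in test:
            yhat = predict(row, coef)
            predictions.append(round(yhat))
        return predictions

    # Test the algorithm on the diabetes dataset
    seed(1)
    dataset = load_csv('pima-indians-diabetes.csv')
    for i in range(len(dataset[0])):
        str_column_to_float(dataset, i)
    # Normalize all columns to the range 0-1
    normalize_dataset(dataset, dataset_minmax(dataset))
    # Evaluate with 5-fold cross validation
    n_folds = 5
    l_rate = 0.1
    n_epoch = 100
    scores = evaluate_algorithm(dataset, logistic_regression, n_folds, l_rate, n_epoch)
    print('Scores: %s' % scores)
    print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))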

    A k value of 5 was used for cross-validation, giving each fold 768/5 = 153.6 or just over 150 records to be evaluated upon each iteration. A learning rate of 0.1 and 100 training epochs were chosen with a little experimentation.

    You can try your own configurations and see if you can beat my score.

    Running this example prints the scores for each of the 5 cross-validation folds, then prints the mean classification accuracy.

    We can see that the accuracy is about 77%, higher than the baseline value of 65% if we just predicted the majority class using the Zero Rule Algorithm.

    Extensions

    This section lists a number of extensions to this tutorial that you may wish to consider exploring.

    • Tune The Example. Tune the learning rate, number of epochs and even data preparation method to get an improved score on the dataset.
    • Batch Stochastic Gradient Descent. Change the stochastic gradient descent algorithm to accumulate updates across each epoch and only update the coefficients in a batch at the end of the epoch.
    • Additional Classification Problems. Apply the technique to other binary (2 class) classification problems on the UCI machine learning repository.

    Did you explore any of these extensions?
    Let me know about it in the comments below.

    Review

    In this tutorial, you discovered how to implement logistic regression using stochastic gradient descent from scratch with Python.

    You learned:

    • How to make predictions for a multivariate classification problem.
    • How to optimize a set of coefficients using stochastic gradient descent.
    • How to apply the technique to a real classification predictive modeling problem.

    Do you have any questions?
    Ask your question in the comments below and I will do my best to answer.
