

How to Get Reproducible Results with Keras

Neural network algorithms are stochastic.

This means they make use of randomness, such as initializing to random weights, and in turn the same network trained on the same data can produce different results.

This can be confusing to beginners, as the algorithm appears unstable; in fact, it is unstable by design. The random initialization allows the network to learn a good approximation of the function being learned.

Nevertheless, there are times when you need the exact same result every time the same network is trained on the same data, such as for a tutorial or, perhaps, in an operational setting.

In this tutorial, you will discover how you can seed the random number generator so that you can get the same results from the same network on the same data, every time.

Let’s get started.

Tutorial Overview

This tutorial is broken down into 6 parts. They are:

  1. Why do I Get Different Results Every Time?
  2. Demonstration of Different Results
  3. The Solutions
  4. Seed Random Numbers with the Theano Backend
  5. Seed Random Numbers with the TensorFlow Backend
  6. What if I Am Still Getting Different Results?

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras (v2.0.3+) installed with either the TensorFlow (v1.1.0+) or Theano (v0.9+) backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.
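If you are unsure which versions you have, you can print them with a short script like the one below (this assumes the TensorFlow backend; import theano and print theano.__version__ instead if you use Theano):

    # print library versions to confirm the environment
    import keras
    import tensorflow
    print('keras: %s' % keras.__version__)
    print('tensorflow: %s' % tensorflow.__version__)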

If you need help setting up your Python environment, see this post:

Why do I Get Different Results Every Time?

This is a common question I see from beginners to the field of neural networks and deep learning.

This misunderstanding may also come in the form of questions like:

  • How do I get stable results?
  • How do I get repeatable results?
  • What seed should I use?

Neural networks use randomness by design to ensure they effectively learn the function being approximated for the problem. Randomness is used because this class of machine learning algorithm performs better with it than without.

The most common form of randomness used in neural networks is the random initialization of the network weights, although randomness is used in other areas as well. Here is a short list:

  • Randomness in Initialization, such as weights.
  • Randomness in Regularization, such as dropout.
  • Randomness in Layers, such as word embedding.
  • Randomness in Optimization, such as stochastic optimization.

These sources of randomness, and more, mean that when you run the exact same neural network algorithm on the exact same data, you are all but guaranteed to get different results.

For more on the why behind stochastic algorithms, see the post:

Demonstration of Different Results

We can demonstrate the stochastic nature of neural networks with a small example.

In this section, we will develop a Multilayer Perceptron model to learn a short sequence of numbers increasing by 0.1 from 0.0 to 0.9. Given 0.0, the model must predict 0.1; given 0.1, the model must output 0.2; and so on.

The code to prepare the data is listed below.
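A minimal sketch of this preparation, assuming pandas is used to pair each value with its successor (the exact idiom is one of several that would work):

    # build the sequence 0.0, 0.1, ..., 0.9
    from pandas import DataFrame, concat

    length = 10
    sequence = [i / float(length) for i in range(length)]

    # pair each value with the next value in the sequence,
    # dropping the first row which has no predecessor
    df = DataFrame(sequence)
    df = concat([df.shift(1), df], axis=1)
    df.dropna(inplace=True)
    values = df.values
    X, y = values[:, 0], values[:, 1]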

We will use a network with 1 input, 10 neurons in the hidden layer, and 1 output. The network will use a mean squared error loss function and will be trained using the efficient Adam optimization algorithm.

The network needs about 1,000 epochs to solve this problem effectively, but we will only train it for 100 epochs. This is to ensure we get a model that makes errors when making predictions.

After the network is trained, we will make predictions on the dataset and print the mean squared error.

The code for the network is listed below.
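A sketch of that network, wrapped in a function so it can be trained and scored repeatedly (the layer activations and batch size here are assumptions):

    from keras.models import Sequential
    from keras.layers import Dense

    def fit_model(X, y):
        # 1 input, 10 neurons in the hidden layer, 1 output
        model = Sequential()
        model.add(Dense(10, input_dim=1))
        model.add(Dense(1))
        model.compile(loss='mean_squared_error', optimizer='adam')
        # train for only 100 epochs so the model still makes errors
        model.fit(X, y, epochs=100, batch_size=len(X), verbose=0)
        # return the mean squared error on the training data
        return model.evaluate(X, y, verbose=0)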

In the example, we will create the network 10 times and print 10 different network scores.

The complete code listing is provided below.
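Combining the two sketches above, the repetition loop looks something like this:

    # create and evaluate the same network 10 times
    for _ in range(10):
        train_mse = fit_model(X, y)
        print(train_mse)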

Running the example will print a different mean squared error value on each line.

Your specific results will differ, but each of the 10 lines should show a different error value.

The Solutions

There are two main solutions.

Solution #1: Repeat Your Experiment

The traditional and practical way to address this problem is to run your network many times (30+) and use statistics to summarize the performance of your model, and then compare it to other models.

I strongly recommend this approach, but it is not always possible due to the very long training times of some models.
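As an illustration, the repeated scores could be summarized with a mean and standard deviation, reusing the fit_model() sketch from above:

    # summarize 30 repeated runs with summary statistics
    from numpy import mean, std

    scores = [fit_model(X, y) for _ in range(30)]
    print('MSE: %f (+/- %f)' % (mean(scores), std(scores)))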

For more on this approach, see:

Solution #2: Seed the Random Number Generator

The other solution is to use a fixed seed for the random number generator.

Random numbers are generated using a pseudo-random number generator: a mathematical function that generates a long sequence of numbers that are random enough for general-purpose use, such as in machine learning algorithms.

Random number generators require a seed to kick off the process, and it is common to use the current time in milliseconds as the default in most implementations. This is to ensure different sequences of random numbers are generated each time the code is run, by default.

This seed can also be specified with a specific number, such as “1”, to ensure that the same sequence of random numbers is generated each time the code is run.

The specific seed value does not matter as long as it stays the same for each run of your code.

The specific way to set the random number generator differs depending on the backend, and we will look at how to do this in Theano and TensorFlow.

Seed Random Numbers with the Theano Backend

Generally, Keras gets its source of randomness from the NumPy random number generator.

For the most part, so does the Theano backend.

We can seed the NumPy random number generator by calling the seed() function from the random module, as follows:
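    # seed the NumPy random number generator
    from numpy.random import seed
    seed(1)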

The importing and calling of the seed function is best done at the top of your code file.

This is a best practice because it is possible that some randomness is used when various Keras or Theano (or other) libraries are imported as part of their initialization, even before they are directly used.

We can add the two lines to the top of our example above and run it two times.

You should now see the same list of mean squared error values each time you run the code (perhaps with some minor variation due to precision on different machines).

Seed Random Numbers with the TensorFlow Backend

Keras does get its source of randomness from the NumPy random number generator, so this must be seeded regardless of whether you are using a Theano or TensorFlow backend.

It must be seeded by calling the seed() function at the top of the file before any other imports or other code.

In addition, TensorFlow has its own random number generator that must also be seeded by calling the set_random_seed() function immediately after the NumPy random number generator, as follows:
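    # seed the TensorFlow random number generator (TensorFlow 1.x API)
    from tensorflow import set_random_seed
    set_random_seed(2)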

To be crystal clear, the top of your code file must have the following 4 lines before any others:
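    from numpy.random import seed
    seed(1)
    from tensorflow import set_random_seed
    set_random_seed(2)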

You can use the same seed for both, or different seeds. I don’t think it makes much difference as the sources of randomness feed into different processes.

Adding these 4 lines to the example above will allow the code to produce the same results every time it is run. You should see an identical list of mean squared error values on each run (perhaps with some minor variation due to precision on different machines).

What if I Am Still Getting Different Results?

To re-iterate, the most robust way to report results and compare models is to repeat your experiment many times (30+) and use summary statistics.

If this is not possible, you can get 100% repeatable results by seeding the random number generators used by your code. The solutions above should cover most situations, but not all.

What if you have followed the above instructions and still get different results from the same algorithm on the same data?

It is possible that there are other sources of randomness that you have not accounted for.

Randomness from a Third-Party Library

Perhaps your code uses an additional library with a different random number generator that must also be seeded.

Try cutting your code back to the minimum required (e.g., one data sample, one training epoch, etc.) and carefully read the API documentation in an effort to narrow down which third-party library is introducing the randomness.
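One common example is Python's own random module, which maintains a generator separate from NumPy's and must be seeded on its own if anything in your stack uses it:

    # Python's built-in generator is separate from NumPy's
    import random
    random.seed(3)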

Randomness from Using the GPU

All of the above examples assume the code was run on a CPU.

It is possible that when using the GPU to train your models, the backend may be configured to use a sophisticated stack of GPU libraries, some of which may introduce their own sources of randomness that you may or may not be able to account for.

For example, there is some evidence that if you are using Nvidia cuDNN in your stack, this may introduce additional sources of randomness and prevent the exact reproducibility of your results.

Randomness from a Sophisticated Model

It is possible that, because of the sophistication of your model and the parallel nature of training, you are getting unreproducible results.

This is very likely caused by efficiencies made by the backend library and perhaps by the inability to use the same sequence of random numbers across cores.

I have not seen this myself, but I have seen signs of it in GitHub issues and StackOverflow questions.

You can try to reduce the complexity of your model to see if this affects the reproducibility of results, if only to narrow down the cause.

I would suggest reading up on how your backend uses randomness and see if there are any options open to you.

In Theano, see:

In TensorFlow, see:

Also, consider searching for other people with the same issue for further insight. Some great places to search include GitHub issues for your backend library and StackOverflow questions.

Summary

In this tutorial, you discovered how to get reproducible results for neural network models in Keras.

Specifically, you learned:

  • That neural networks are stochastic by design and that the sources of randomness can be fixed to make results reproducible.
  • That you can seed the random number generators in NumPy and TensorFlow to make most Keras code 100% reproducible.
  • That there are some cases with additional sources of randomness, along with ideas for how to seek them out and perhaps fix them too.

Did this tutorial help?
Share your experience in the comments.

Are you still getting unreproducible results with Keras?
Share your experience; perhaps someone else here can help.
