Comparing model performance: Including Max Pooling and Dropout Layers

Jessica Yung · Self-Driving Car ND

In this post I compare the performance of models that use max pooling and dropout in the convolutional layer with those that don’t. This experiment will be on a traffic sign classifier used in Udacity’s Self-Driving Car Nanodegree. The full code is on GitHub.

Recap: Max Pooling and Dropout

Max Pooling: A way of reducing the spatial dimensionality of the input (it assumes that the exact location of a feature matters less than its presence). Max pooling takes the maximum of each non-overlapping region of the input:

Max Pooling. Source: Stanford’s CS231 course (GitHub)
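As a toy illustration (NumPy only; the numbers are arbitrary), 2x2 max pooling over a 4x4 input works like this:

```python
import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]])

# Take the maximum over each non-overlapping 2x2 region.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 8]
#  [3 4]]
```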

Dropout: Nodes (along with their weights and biases) are dropped out at random with probability 1-p, so only the reduced network is trained on the data at that stage. This is expected to decrease overfitting and improve training time.
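A minimal sketch of applying dropout to a vector of activations, using the common "inverted dropout" formulation where kept activations are rescaled by 1/p at training time (the activation values here are arbitrary):

```python
import numpy as np

p = 0.5                                     # keep probability (drop with probability 1-p)
a = np.array([0.2, 0.9, 0.4, 0.7, 0.1])     # activations of one layer
mask = (np.random.rand(*a.shape) < p) / p   # 0 for dropped units, 1/p for kept ones
a_train = a * mask                          # applied during training only, not at test time
```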

(Links to further reading at the end of this post.)

Experiment Specifications

In this example, we train a three-layer convolutional neural network to classify traffic signs. This network consists of one convolutional layer followed by two fully connected layers and an output layer. The code for the network with pooling and dropout is given below. The remaining three networks can be obtained by removing the pooling or dropout code.
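A minimal sketch of such a network using the Keras API; the filter count, kernel size, dense-layer widths, dropout rate, and 32x32x3 input shape below are illustrative assumptions, not necessarily the exact values used in this experiment:

```python
# Sketch of the network with max pooling and dropout (hyperparameters are assumptions).
from tensorflow.keras import layers, models

n_classes = 43  # 43 traffic sign classes

model = models.Sequential([
    # Convolutional layer
    layers.Conv2D(32, (5, 5), activation='relu', input_shape=(32, 32, 3)),
    # Max pooling: max over each non-overlapping 2x2 region
    layers.MaxPooling2D(pool_size=(2, 2)),
    # Dropout: drop units at random during training
    layers.Dropout(0.5),
    layers.Flatten(),
    # Two fully connected layers
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    # Output layer: one unit per class
    layers.Dense(n_classes, activation='softmax'),
])

# Assumes integer class labels (hence the sparse loss).
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Removing the MaxPooling2D and/or Dropout lines gives the other three variants.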

We train four networks: one with neither pooling nor dropout, one with only pooling, one with only dropout, and one with both pooling and dropout.

Data: There are 43 classes (types of traffic signs) in total. We have 39209 training samples and 12630 test samples.
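A sketch of loading the data; the file names and dictionary keys are assumptions based on the standard project setup (pickled dicts with 'features' and 'labels'):

```python
import pickle

with open('train.p', 'rb') as f:
    train = pickle.load(f)
with open('test.p', 'rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
```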


Here are some examples of traffic sign images we want to classify. We will use normalised data.


Original image in the top left, normalised image in the top right.

Carrying out the experiment 

To make our results reproducible, I shuffled the training and test data with random_state=42.
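A sketch of the shuffling step using scikit-learn (the variable names are assumptions):

```python
from sklearn.utils import shuffle

# Shuffle features and labels together with a fixed seed for reproducibility.
X_train, y_train = shuffle(X_train, y_train, random_state=42)
X_test, y_test = shuffle(X_test, y_test, random_state=42)
```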

I created copies of the data that were normalised and standardised. I then trained the model for 100 epochs on each version of the data with a batch size of 100.
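A sketch of the preprocessing and training steps, assuming the Keras model above; the normalisation and standardisation formulas are common choices rather than necessarily the exact transforms used, and the 20% validation split is an assumption consistent with the per-epoch sample counts reported later:

```python
import numpy as np

X_train = X_train.astype(np.float32)

# Normalised copy: scale pixel values from [0, 255] to [0, 1].
X_train_norm = X_train / 255.0

# Standardised copy: zero mean, unit variance.
X_train_std = (X_train - X_train.mean()) / X_train.std()

# Train for 100 epochs with batch size 100, holding out 20% for validation.
history = model.fit(X_train_norm, y_train,
                    epochs=100, batch_size=100,
                    validation_split=0.2)
```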

Results

Differences in convergent validation accuracies

Putting the networks in descending order of validation accuracy, we have

  1. Pooling and dropout (0.9942)
  2. Pooling with no dropout (0.9902)
  3. Dropout with no pooling (0.9896)
  4. No pooling or dropout (0.9870)

(Numbers in parentheses are means of validation accuracies for each network in epochs 80-100.)
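For concreteness, a "convergent" accuracy here is just a mean over the tail of training; val_acc below is a hypothetical list of 100 per-epoch validation accuracies:

```python
import numpy as np

convergent_val_acc = np.mean(val_acc[79:])  # mean over epochs 80-100 (1-indexed)
```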

val-acc-combined-epoch10plus.png

This is unsurprising. Adding pooling and dropout makes the network more robust (compare with training accuracy orderings below). Notably, the networks with only pooling or only dropout perform similarly: their validation accuracies differ by only 0.06%.

Differences in convergent training accuracies

The differences in convergent training accuracies are much smaller than the differences in validation accuracies. All the convergent training accuracies are above 0.995 and within 0.2% of each other. Putting the networks in descending order of training accuracy, we have

  1. Pooling with no dropout (0.9971)
  2. No pooling or dropout (0.9965)
  3. Dropout with no pooling (0.9958)
  4. Pooling and dropout (0.9951)

(Numbers in parentheses are means of training accuracies for each network in epochs 80-100.)

acc-epoch-ten-plus.png

Pooling with dropout has the highest convergent validation accuracy but also the lowest convergent training accuracy. The difference between the two is only 0.09%. This suggests the remaining three networks may overfit more. We will examine overfitting in each case in more depth in the next section.

Differences in early training (accuracies in the first 10 epochs)

For completeness, here are the training accuracies for the first 10 epochs. The accuracies increase quickly in the first 3 epochs. The orderings do not deviate wildly from convergent accuracy orderings.

acc-epoch-one-to-ten.png

val-acc-epoch-one-to-ten.png

Differences between training and validation accuracy per network (overfitting)

The gap between training and validation accuracy is much smaller (tighter) for the network with both pooling and dropout than for the other three networks; for it, there is no consistent gap at all.

Roughly in joint second are the networks with only dropout or only pooling, each with a consistent gap of about 0.006.

The network with neither pooling nor dropout has the largest consistent gap, about 0.01.

Mean of (training accuracy − validation accuracy) over epochs 80-100, smallest gap first:

  1. Pooling and dropout (0.0009)
  2. Dropout but no pooling (0.0061)
  3. Pooling but no dropout (0.0069)
  4. No pooling or dropout (0.0094)

pooling-and-dropout-epoch-5+

pooling-no-dropout-epoch-5+

dropout-no-pooling-epoch-5+

no-pool-no-dropout-epoch-5+

(Note: Be wary of the differences in y-axis scale across the graphs.)

Differences in training speed

Each network was trained on 31367 samples and validated on 7842 samples in each epoch. The training times per epoch were as follows:

  • No pooling or dropout: 10s
  • Pooling with no dropout: 5s
  • Dropout with no pooling: 11s
  • Pooling and dropout: 5s

Pooling seems to reduce training time by about 50%. This makes sense: 2x2 max pooling halves each spatial dimension of the convolutional layer's output, so the flattened input to the fully connected layers, where most of this network's weights and computation live, shrinks by roughly a factor of four.

 

Stay tuned for a post explaining code for a Convolutional Neural Network in TensorFlow.

Further reading:
