In a previous post, we went through the TensorFlow code for a multilayer perceptron. Now we will discuss how we train the model with TensorFlow, specifically within a TensorFlow Session. We will again use Aymeric Damien’s implementation. I recommend you skim through the code first and keep it open in a separate window. I have included the key portions … Read More
Comparing Model Performance: Including Max Pooling and Dropout Layers
In this post, I compare the performance of models that use max pooling and dropout in the convolutional layer with those that don’t. This experiment will be on a traffic sign classifier used in Udacity’s Self-Driving Car Nanodegree. The full code is on GitHub.
Recap: Max Pooling and Dropout
Max Pooling: A way of reducing the dimensionality of input (by … Read More
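As a quick illustration of the recap above, here is a minimal NumPy sketch of 2×2 max pooling (the post itself uses TensorFlow; this is just to show the dimensionality reduction):

```python
import numpy as np

# 2x2 max pooling halves each spatial dimension by keeping only
# the maximum value in each non-overlapping 2x2 patch.
x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 8, 3, 2],
              [7, 6, 1, 4]], dtype=float)

# Reshape into (row-blocks, rows-in-block, col-blocks, cols-in-block)
# and take the max within each block.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[4. 8.] [9. 4.]]
```

A 4×4 input becomes a 2×2 output — a 75% reduction in the number of values, which is exactly the dimensionality reduction the recap describes.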
Explaining TensorFlow Code for a Multilayer Perceptron
In this post we go through the code for a multilayer perceptron in TensorFlow. We will use Aymeric Damien’s implementation. I recommend you have a skim before you read this post. I have included the key portions of the code below.
1. Code
Here are the relevant network parameters and graph input for context (skim this):
# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
Here is the model … Read More
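As a shape sanity check on the network parameters above, here is a NumPy sketch of the forward pass such a two-hidden-layer perceptron would compute (this is an illustration, not the post’s TensorFlow code — the weight initialisation and ReLU choice here are assumptions):

```python
import numpy as np

# Shapes taken from the network parameters: 784 inputs,
# two hidden layers of 256 units, 10 output classes.
n_input, n_hidden_1, n_hidden_2, n_classes = 784, 256, 256, 10

rng = np.random.default_rng(0)
x = rng.standard_normal((32, n_input))  # a batch of 32 flattened MNIST images

W1 = rng.standard_normal((n_input, n_hidden_1)) * 0.01
b1 = np.zeros(n_hidden_1)
W2 = rng.standard_normal((n_hidden_1, n_hidden_2)) * 0.01
b2 = np.zeros(n_hidden_2)
W3 = rng.standard_normal((n_hidden_2, n_classes)) * 0.01
b3 = np.zeros(n_classes)

h1 = np.maximum(x @ W1 + b1, 0)   # hidden layer 1 (ReLU)
h2 = np.maximum(h1 @ W2 + b2, 0)  # hidden layer 2 (ReLU)
logits = h2 @ W3 + b3             # one score per class

print(logits.shape)  # (32, 10)
```

The `[None, n_input]` placeholder shape in the original code corresponds to the batch dimension here: any batch size works as long as each example has 784 features.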
Comparing Model Performance with Normalised vs Standardised Input (Traffic Sign Classifier)
In the previous post, we explained (1) what normalisation and standardisation of data were, (2) why you might want to do it and (3) how you can do it. In this post, we’ll compare the performance of one model on unprocessed, normalised and standardised data. We’d expect using normalised or standardised input to give us higher accuracy, but how much better … Read More
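For reference, here is a minimal NumPy sketch of the standardisation half of that comparison (an illustration, not the classifier’s code): subtract the mean and divide by the standard deviation, so the result has mean 0 and standard deviation 1.

```python
import numpy as np

# Standardise (z-score) a toy array of values.
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
standardised = (data - data.mean()) / data.std()

print(standardised.mean(), standardised.std())  # close to 0.0 and 1.0
```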
Traffic Sign Classifier: Normalising Data
In this post, we’ll talk about (1) what normalising data is, (2) why you might want to do it, and (3) how you can do it (with examples).
Background: The Mystery of the Horrifically Inaccurate Model
Let me tell you a story. Once upon a time, I trained a few models to classify traffic signs for Udacity’s Self-Driving Car Nanodegree. I first … Read More
18 Game Theory Ideas
Image creds: SMBC
Here are 18 game theory-related ideas I came up with in the last Game Theory lecture of term. These are things that I think would be interesting to explore, and are suited to (but do not require) people with elementary knowledge of game theory.
Look into quantum game theory.
Create a game theory problems tree. E.g. … Read More
How to use AWS EC2 GPU instances with BitFusion
If you want to train neural networks seriously, you need more computational power than the typical laptop has. There are two solutions: get (buy or borrow) more computational power (GPUs or servers), or rent servers online. GPUs cost over a hundred dollars each, and top models like the NVIDIA Tesla cost thousands, so it’s usually easier and cheaper to rent … Read More