## Review: Udacity’s Self-Driving Car Engineer Nanodegree

Many people have asked me what I think about Udacity’s Self-Driving Car Engineer Nanodegree (SDCND). This review aims to help you decide whether or not to enrol in the first term of Udacity’s SDCND. In short, I think that if you are considering it because you want to work in the self-driving car industry and you … Read More

## Explaining Tensorflow Code for a Convolutional Neural Network

In this post, we will go through the code for a convolutional neural network. We will use Aymeric Damien’s implementation. I recommend you have a skim before you read this post. I have included the key portions of the code below. If you’re not familiar with TensorFlow or neural networks, you may find it useful to read my post on multilayer … Read More

## What do the Kalman Filter Equations Mean? (Part 2: Update)

In my previous post, I explained the Kalman Filter prediction equations in a big-picture way. In this post I will explain the update equations. The equations to focus on are the last two. (We use the results from the first three equations in the last two equations.) Eqn 1: Updating the object state x: x’ is the predicted object state … Read More
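The update equations the post walks through can be sketched in NumPy using the standard textbook notation (x: state, P: state covariance, H: measurement matrix, R: measurement noise, z: measurement). This is a minimal illustration of the maths, not the course’s actual code; the function name and matrix shapes are my own assumptions:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update (textbook form; illustrative only)."""
    y = z - H @ x                                  # innovation: measurement residual
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y                              # updated state estimate
    P_new = (np.eye(P.shape[0]) - K @ H) @ P       # updated state covariance
    return x_new, P_new
```

For example, with a 2D state (position, velocity) where only position is measured, a measurement pulls the position estimate toward the observation by an amount weighted by the Kalman gain.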

## What do the Kalman Filter Equations mean? (Part 1: Prediction)

In the second term of Udacity’s Self-Driving Car Engineer Nanodegree, you start out learning about Kalman Filters. You are given a bunch of equations. What do they mean? In this post I explain the prediction equations (left) in a big-picture way. I explain the update equations in my next post. Predicting the object state x: Equation: x is the object state. … Read More
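The prediction step the post explains can be sketched in NumPy under the usual notation (x: state, P: state covariance, F: state transition matrix, Q: process noise). A toy illustration under those assumptions, not the course’s code:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """One Kalman filter prediction step (textbook form; illustrative only)."""
    x_new = F @ x              # project the state forward through the motion model
    P_new = F @ P @ F.T + Q    # project the covariance forward and add process noise
    return x_new, P_new
```

With a constant-velocity model, F simply adds velocity times the timestep to position, and the covariance grows to reflect the added uncertainty of the motion.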

## Behavioural Cloning: Tips for Tackling Project 3

In this post I list tips that may be helpful for tackling Project 3 of Udacity’s Self-Driving Car Nanodegree, in which you train a neural network to drive a car in a simulator. The neural network learns from data of humans driving the car through the simulator, hence the project name ‘Behavioural Cloning’ – it’s trying to imitate the way … Read More

## Bugger! 4.2: Semicircle lane lines

Recap from the previous post: we’re trying to trace lane lines from a video. We just dealt with our pipeline spitting out zebra stripes, but now we’ve got a semicircle trace to deal with (see feature image above). What to do? Step 1: Plot intermediate steps to locate the bug Recall that our model is supposed to: Un-distort the test image … Read More

## Bugger! 4.1: Zebra Stripes for Lane Lines

In Project 4: Advanced Lane Lines of Udacity’s Self-Driving Car Nanodegree, we use computer vision techniques to trace out lane lines from a video taken from a camera mounted at the front of a car, like so: Result I’m supposed to get In this series, I share some bugs I came across and how I tackled them. The code can … Read More

## Code, Explained: Training a model in TensorFlow

In a previous post, we went through the TensorFlow code for a multilayer perceptron. Now we will discuss how we train the model with TensorFlow, specifically in a TensorFlow Session. We will use Aymeric Damien’s implementation in this post. I recommend you skim through the code first and have the code open in a separate window. I have included the key portions … Read More

## Comparing model performance: Including Max Pooling and Dropout Layers

In this post I compare the performance of models that use max pooling and dropout in the convolutional layer with those that don’t. This experiment will be on a traffic sign classifier used in Udacity’s Self-Driving Car Nanodegree. The full code is on GitHub. Recap: Max Pooling and Dropout Max Pooling: A way of reducing the dimensionality of input (by … Read More
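As a quick illustration of the two operations the recap describes, here is a minimal NumPy sketch (my own toy functions, not the classifier’s actual TensorFlow code; the names `max_pool_2x2` and `dropout` are assumptions):

```python
import numpy as np

def max_pool_2x2(x):
    """Reduce each non-overlapping 2x2 block of a 2D array to its maximum,
    halving both spatial dimensions (assumes even height and width)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def dropout(x, keep_prob, rng):
    """Inverted dropout: zero each unit with probability (1 - keep_prob) and
    scale survivors by 1/keep_prob so the expected activation is unchanged."""
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)
```

Max pooling shrinks a 4x4 input to 2x2, which is the dimensionality reduction the recap refers to; dropout leaves the input’s shape intact and only randomly zeroes units during training.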

## Explaining TensorFlow code for a Multilayer Perceptron

In this post we go through the code for a multilayer perceptron in TensorFlow. We will use Aymeric Damien’s implementation. I recommend you have a skim before you read this post. I have included the key portions of the code below. 1. Code Here are the relevant network parameters and graph input for context (skim this):

```python
# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
```

Here is the model … Read More
