## Ellsberg Paradox

We discussed the following version of the Ellsberg paradox in a microeconomics lecture yesterday: there is an urn with 100 red balls and 200 balls that are each either blue or green (e.g. the urn may contain 100 red and 200 blue balls, or 100 red, 199 green and 1 blue). You have two choices. Choice 1: (a) Receive 1,000 if … Read More

## List Comprehensions in Python

1. What are list comprehensions?

List comprehensions construct lists in a natural, easy-to-express way. They can replace map-filter combinations and many for loops. They're just syntactic sugar: they make your code easier to read (and prettier).

Example 1: For loop -> List comprehension

```python
# Make a list of squares of 1 to 10
>>> squares = []
>>> for i in range(1, 11):
...     squares.append(i**2)
...
>>> squares
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

# List comprehension
>>> squares_lc = [i**2 for i in range(1, 11)]
>>> squares_lc
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

Example 2: Map-filter -> List Comprehension

```python
# Map and filter
>>> doubled_odd_numbers = list(map(
...     lambda n: n * 2,
...     filter(lambda n: n % 2 == 1, range(1, 10))
... ))
>>> doubled_odd_numbers
[2, 6, 10, 14, 18]

# List comprehension
>>> doubled_odd_numbers_lc = [
...     n * 2 for n in range(1, 10) if n % 2 == 1
... ]
>>> doubled_odd_numbers_lc
[2, 6, 10, 14, 18]
```

(I will do a post on map and filter and … Read More

## Code, Explained: Training a model in TensorFlow

In a previous post, we went through the TensorFlow code for a multilayer perceptron. Now we will discuss how we train the model with TensorFlow, specifically in a TensorFlow Session. We will use Aymeric Damien’s implementation in this post. I recommend you skim through the code first and have the code open in a separate window. I have included the key portions … Read More

## Comparing model performance: Including Max Pooling and Dropout Layers

In this post I compare the performance of models that use max pooling and dropout in the convolutional layer with those that don't. The experiment uses a traffic sign classifier from Udacity's Self-Driving Car Nanodegree. The full code is on GitHub.

Recap: Max Pooling and Dropout

Max Pooling: a way of reducing the dimensionality of the input (by … Read More
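As a rough illustration of the recap above, here is a minimal sketch of 2x2 max pooling in plain Python. This is not the post's actual classifier code (which uses TensorFlow), and the input matrix is made up for illustration:

```python
def max_pool_2x2(matrix):
    """2x2 max pooling with stride 2: keep the max of each 2x2 block."""
    pooled = []
    for r in range(0, len(matrix) - 1, 2):
        row = []
        for c in range(0, len(matrix[0]) - 1, 2):
            block = [matrix[r][c], matrix[r][c + 1],
                     matrix[r + 1][c], matrix[r + 1][c + 1]]
            row.append(max(block))
        pooled.append(row)
    return pooled

# A 4x4 input is reduced to 2x2, keeping the largest value per block.
image = [[1, 3, 2, 0],
         [4, 6, 5, 1],
         [7, 2, 9, 3],
         [0, 8, 4, 4]]
print(max_pool_2x2(image))  # [[6, 5], [8, 9]]
```

Halving each spatial dimension like this cuts the number of activations by a factor of four, which is the dimensionality reduction the recap refers to.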

## Explaining TensorFlow code for a Multilayer Perceptron

In this post we go through the code for a multilayer perceptron in TensorFlow. We will use Aymeric Damien's implementation. I recommend you have a skim before you read this post. I have included the key portions of the code below.

1. Code

Here are the relevant network parameters and graph input for context (skim this):

```python
# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
```

Here is the model … Read More

## Comparing Model Performance with Normalised vs Standardised Input (Traffic Sign Classifier)

In the previous post, we explained (1) what normalisation and standardisation of data were, (2) why you might want to do it and (3) how you can do it. In this post, we’ll compare the performance of one model on unprocessed, normalised and standardised data. We’d expect using normalised or standardised input to give us higher accuracy, but how much better … Read More
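For reference, the two preprocessing steps being compared can be sketched in plain Python. This is a minimal illustration with a made-up sample of pixel values, not the post's actual preprocessing code:

```python
def normalise(values):
    """Rescale values to the range [0, 1] (min-max normalisation)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def standardise(values):
    """Rescale values to zero mean and unit standard deviation."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

# Example: raw pixel intensities in [0, 255]
pixels = [0, 64, 128, 255]
print(normalise(pixels))    # all values now lie in [0, 1]
print(standardise(pixels))  # values now have mean 0 and std 1
```

Both transforms put the features on a common scale; the difference is that normalisation fixes the range while standardisation fixes the mean and spread.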

## Traffic Sign Classifier: Normalising Data

In this post, we’ll talk about (1) what normalising data is, (2) why you might want to do it, and (3) how you can do it (with examples).

Background: The Mystery of the Horrifically Inaccurate Model

Let me tell you a story. Once upon a time, I trained a few models to classify traffic signs for Udacity’s Self-Driving Car Nanodegree. I first … Read More

## 18 Game Theory Ideas

Image credit: SMBC

Here are 18 game theory-related ideas I came up with in the last Game Theory lecture of term. These are things that I think would be interesting to explore, and they are suited to (but do not require) an elementary knowledge of game theory.

- Look into quantum game theory.
- Create a game theory problems tree. E.g. … Read More

## How to use AWS EC2 GPU instances with BitFusion

If you want to train neural networks seriously, you need more computational power than the typical laptop has. There are two solutions: get (buy or borrow) more computational power (GPUs or servers), or rent servers online. GPUs cost over a hundred dollars each and top models like the NVIDIA Tesla cost thousands, so it’s usually easier and cheaper to rent … Read More

## Eradicating Unemployment with On-Demand Services?

This week our Macroeconomics lecturer suggested that on-demand services such as Uber and TaskRabbit might eradicate unemployment. He claimed figures showed that unemployed people spent only two hours a week on average looking for jobs, so with Uber etc. they could earn money and work while still having enough time to look for new jobs they’d like to move on … Read More