We discussed the following version of the Ellsberg paradox in a microeconomics lecture yesterday:
There is an urn with 100 red balls and 200 balls that are each either blue or green (e.g. the urn may contain 100 red and 200 blue balls, or 100 red, 199 green and 1 blue).
You make two choices: one between (a) and (b), and one between (c) and (d).
(a) Receive 1,000 if the ball is not red.
(b) Receive 1,000 if the ball is not blue.
(c) Receive 1,000 if the ball is red.
(d) Receive 1,000 if the ball is blue.
Empirical results in the lecture: Over 80% of students in the lecture ended up choosing both (a) and (c).
The argument is that this invalidates the subjective expected utility framework.
- Let $p_R$ and $p_B$ be the probabilities that a chosen ball is red and blue respectively.
- From choosing (a) over (b): $(1-p_R)u(1000) > (1-p_B)u(1000) \Rightarrow p_R < p_B$.
- From choosing (c) over (d): $p_R u(1000) > p_B u(1000) \Rightarrow p_R > p_B$, a contradiction.
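To make the indifference point below concrete, here is a minimal sketch under one assumption the lecture did not state: a symmetric (uniform) prior over the number of blue balls. Under that prior the subjective probability of blue equals the known probability of red, so a subjective-expected-utility maximiser is exactly indifferent within each pair.

```python
from fractions import Fraction

# Urn: 100 red balls, 200 balls split between blue and green.
p_red = Fraction(100, 300)

# Assumed uniform prior over the number of blue balls b in 0..200:
# the subjective probability of blue is the prior mean of b/300.
blue_counts = range(0, 201)
p_blue = Fraction(sum(blue_counts), len(blue_counts) * 300)

# Expected payoffs with u(1000) normalised to 1.
payoff_a = 1 - p_red   # win if not red
payoff_b = 1 - p_blue  # win if not blue
payoff_c = p_red       # win if red
payoff_d = p_blue      # win if blue

print(p_red, p_blue)                 # both 1/3
print(payoff_a == payoff_b)          # indifferent between (a) and (b)
print(payoff_c == payoff_d)          # indifferent between (c) and (d)
```

Any prior symmetric around 100 blue balls gives the same conclusion; uniformity is just the simplest choice.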
The problem with the argument
The reasoning in the two inference steps above supposes that if I am presented with two options A and B and choose A, then I strictly prefer A to B. That is not necessarily true: I may be indifferent between A and B.
This indifference seems consistent with observed behaviour. Among the students sitting near me, many found the choice non-obvious precisely because the expected payoffs were the same. Since we could only choose one option, we broke the tie by going for the option with the known probability, which is consistent with the ambiguity-aversion reaction.
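One way to formalise this tie-breaking (my reading, not the lecture's) is as a worst-case comparison over the unknown composition of the urn: among expected-payoff ties, pick the option whose winning probability is highest in the worst case. A sketch:

```python
from fractions import Fraction

def worst_case(win_prob):
    """Minimum winning probability over all possible blue counts b in 0..200."""
    return min(win_prob(b) for b in range(0, 201))

p = lambda n: Fraction(n, 300)  # n favourable balls out of 300

options = {
    "a (not red)":  worst_case(lambda b: p(200)),      # known: 200/300
    "b (not blue)": worst_case(lambda b: p(300 - b)),  # ambiguous: (100+green)/300
    "c (red)":      worst_case(lambda b: p(100)),      # known: 100/300
    "d (blue)":     worst_case(lambda b: p(b)),        # ambiguous: b/300
}

for name, w in options.items():
    print(name, w)
# (a) beats (b) and (c) beats (d) on worst-case winning probability.
```

This reproduces the majority choice of (a) and (c) as a tie-breaking rule rather than as evidence of a strict preference.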
Thus I argue that the assertion that a choice indicates a strict preference is false, and so the paradox does not invalidate the subjective expected utility framework.
I was surprised that an attack on the strict-preference assumption was not among the responses to the paradox cited later in the lecture.
One reason might be that inferring strict preferences from choices is itself part of the subjective expected utility framework, in which case I would agree that this step is problematic, though it would likely only affect corner cases such as exact ties.