# I Still Don’t Understand the St Petersburg Paradox

2012/11/09

Let’s consider the following game: You pay a certain amount to play the game. I have a fair coin in my hand. At the beginning of each turn, I flip the coin. If it turns up tails, the next turn begins. If it turns up heads, the game ends and I give you $2^n$ dollars, where $n$ is the turn number. So if I get heads on the first turn, you get $2. If I flip the coin and get tails, then another tails, and then heads, then I give you $8. The question is: how much would you pay to play this game?

A reasonable measure for this sort of thing is the expected value. When a game ends with one of several cash values, each occurring with a known probability, the expected value tells you the average payout over many plays of the same game. So it seems like a good choice for how much you should pay, but in this case the expected value is $\sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty$.* So the expected value here is infinite! That would seem to mean we should pay any amount of money to play this game, because no matter the price it’s a good deal.
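To get a feel for how this plays out in practice, here is a minimal simulation sketch (the function name `play_st_petersburg` is mine, not from any library). Rare long runs of tails produce enormous payouts, so the sample average drifts upward erratically rather than settling down:

```python
import random

def play_st_petersburg(rng):
    """One game: flip a fair coin until heads; pay $2**n if heads
    first appears on turn n."""
    n = 1
    while rng.random() < 0.5:  # tails: the game continues
        n += 1
    return 2 ** n

rng = random.Random(0)
for trials in (100, 10_000, 1_000_000):
    avg = sum(play_st_petersburg(rng) for _ in range(trials)) / trials
    print(f"{trials:>9} games, average payout ${avg:.2f}")
```

The average never converges, which is exactly what an infinite expected value looks like empirically.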

So I claim, but no one would actually do such a thing. Hence, the paradox. Now the resolution I’ve heard, due to Daniel Bernoulli (and, independently, Gabriel Cramer), is the idea of utility and diminishing marginal value. The idea is that your first trillion dollars is much more valuable to you than your next, and each additional dollar is worth less the more you already have. Now my problem with this solution is that I can modify the game based on your utility function so that the first couple of turns still pay a small amount, but the higher turns pay an absurd amount of money.
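To make this concrete, here is the standard calculation, sketched with Bernoulli’s logarithmic utility $u(x) = \log_2 x$ as the example. In the original game the expected utility is

$$\sum_{n=1}^{\infty} \frac{1}{2^n}\,\log_2\!\left(2^n\right) = \sum_{n=1}^{\infty} \frac{n}{2^n} = 2,$$

which is finite, so the utility resolution works. But if the payout on turn $n$ is changed from $2^n$ to $2^{2^n}$, then

$$\sum_{n=1}^{\infty} \frac{1}{2^n}\,\log_2\!\left(2^{2^n}\right) = \sum_{n=1}^{\infty} \frac{2^n}{2^n} = \infty,$$

and the paradox reappears in utility terms. The same trick works against any unbounded utility function.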

I’ve also heard that martingales and stopping times give a solution to the paradox, but I don’t know enough probability to understand it. So if someone does, would you be willing to give an explanation?

—–

*Notice we can compute the probability because we know that if the game ended on the $n$th turn, the coin tosses had to have gone tails $n-1$ times and then heads. So the probability of getting that specific combination is $\left(\tfrac{1}{2}\right)^{n-1} \cdot \tfrac{1}{2} = \tfrac{1}{2^n}$.
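As a quick sanity check on the footnote, here is a small sketch using Python’s exact `fractions` arithmetic (variable names are mine):

```python
from fractions import Fraction

# The game ends on turn n exactly when the flips are n-1 tails then
# heads, so P(end on turn n) = (1/2)**n.
probs = [Fraction(1, 2) ** n for n in range(1, 51)]

# The first 50 probabilities already sum to nearly 1.
print(float(sum(probs)))

# Each term of the expected value is (1/2)**n * 2**n = 1, so the
# partial sums grow without bound: the first 50 terms sum to 50.
print(sum(p * 2 ** n for n, p in enumerate(probs, start=1)))
```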

It seems reasonable to suppose that utility has a finite upper bound. There is no amount of money that would give me twice the happiness that a quadrillion dollars would. If utility is bounded, then there are limits to how you can modify the game.

I suppose this is true, but even with this in mind, I feel like I could modify the game so that it pays, say, $0 for the first 10 turns and still has an expected value larger than, say, $100,000. What I mean to say is that there should be some probabilistic resolution, as opposed to one based on considerations of the value of money.

Who’s to say no one would play such a game? There are practical limits to how much money anyone has, and most people are risk averse. But if you made a game show of this, and people could get an intuitive sense of how the game pays off on average, could go into debt to play, could play as often as they wanted, and so on, people would play.

If you’re up for reading Durrett, he gives a purely mathematical answer to this question. It’s not easy to follow, but it looks like he lets {X_i} be iid winnings, each from a single game of St. Petersburg, and S_n the total winnings from playing n games in a row. He then proves S_n/(n log_2 n) → 1 in probability.

From this he concludes that log_2 n dollars per game would be a fair price for playing the game n times in a row. In other words, if it costs $10 to buy into each game and both of you are willing to play indefinitely, the player should expect to wait about 2^10 = 1024 games before turning a profit.
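A rough empirical check of this ratio (my own simulation sketch, not Durrett’s; since the convergence is only in probability and is slow, the ratio merely hovers loosely around 1 for feasible n):

```python
import math
import random

def play(rng):
    """One St. Petersburg game: pay 2**n if heads first appears on turn n."""
    n = 1
    while rng.random() < 0.5:
        n += 1
    return 2 ** n

rng = random.Random(42)
n_games = 1_000_000
total = sum(play(rng) for _ in range(n_games))  # S_n

# Durrett's result: S_n / (n log2 n) -> 1 in probability.
ratio = total / (n_games * math.log2(n_games))
print(f"S_n / (n log2 n) = {ratio:.3f}")
```

A single very long run of tails can still throw the ratio well above 1 in any one experiment, which is why the convergence is in probability rather than almost surely per sample.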

His solution comes several chapters before he introduces stopping times, but it could also have the same problem as the martingale betting strategy: the expected profit is positive in the limit, but the maximum debt might be unbounded, making it impossible to win with a finite line of credit. I don’t think this is an issue here, though, since the loss per game is bounded, while a gambler using a martingale strategy can double his losses every turn.

Thanks Matt!