HPS 0628 Paradox


Paradoxes For Probability: Expectations

John D. Norton
Department of History and Philosophy of Science
University of Pittsburgh
http://www.pitt.edu/~jdnorton



The idea that expected value or expected utility is a good guide to the desirability of some option, such as a game, can lead us into paradoxical situations when the expected utility is badly behaved. We shall see two cases with two different types of bad behavior. First is the St. Petersburg paradox (infinite expected utility). Second is the exchange paradox (ill-defined expected utility).

For a reminder of how expectations arise and are computed, see the Probability Theory Refresher.

Expected Values are Not All That Matters

For many ordinary decisions, expected value calculations can serve as a good guide: choose the option with the greater expected value. They will tell you, for example, that there are no casino games worth playing extensively, for all of them have a negative expected value for each play. That is essential, since otherwise the casinos could make no assured income. You may win or lose a little at the outset. However, over the longer term, your fortune will bleed away as the negative expected values mount. Your loss is the casino's income (a simple simulation after the summary below illustrates the bleeding). In summary:

Your choice:
Play a casino game: expected value is negative.
Do not play a casino game: expected value is zero.
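
(A minimal sketch of that bleeding in Python. The odds are hypothetical, assumed purely for illustration; they are not any actual casino's.)

import random

# Hypothetical casino-style game: win $1 with probability 0.49,
# lose $1 with probability 0.51. Expected value per play: -$0.02.
random.seed(1)
plays = 1_000_000
fortune = 0
for _ in range(plays):
    fortune += 1 if random.random() < 0.49 else -1
print(fortune)          # net result near -20,000 = 1,000,000 x (-$0.02)
print(fortune / plays)  # per-play average near the -$0.02 expectation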

However, there are many cases in which a favorable expectation is not enough to induce you to select that option. Here are several games, each with the same expected value per play.


Game 1: expected value +$1

Outcome       heads    tails
Probability   1/2      1/2
Net payoff    +$3      -$1

Game 10: expected value +$1

Outcome       heads    tails
Probability   1/2      1/2
Net payoff    +$12     -$10

Game 100: expected value +$1

Outcome       heads    tails
Probability   1/2      1/2
Net payoff    +$102    -$100

Game 1000: expected value +$1

Outcome       heads     tails
Probability   1/2       1/2
Net payoff    +$1,002   -$1,000

Game 1,000,000: expected value +$1

Outcome       heads         tails
Probability   1/2           1/2
Net payoff    +$1,000,002   -$1,000,000

In terms of expected value, the choices are the same in each game:

Play game: expected value +$1
Do not play game: expected value $0

However, few of us would treat the decision over whether to play as the same in all the games.

In game 1, the decision is easy. We risk losing $1 on an unfavorable play. However, the greater gains on favorable plays work to our advantage. In repeated plays we will gain, on average, $1. To play is attractive.

As we proceed through the sequence of games, the decision becomes harder. Game 10 still holds some appeal. We will come out ahead on average by $1 per play. This is still attractive, as long as losing $10 on unfavorable plays is not too burdensome. In game 100 and game 1000, the decision becomes more difficult. The thought of losing $100 or $1000 on single unfavorable plays must be balanced against the fact that, on average, we will come out ahead by $1 on each play. Here we must recall that the real outcome of repeated plays is unlikely to be exactly the average. It will more likely deviate from it, in gains or losses, by some multiple of $100 or $1000.

With game 1,000,000 the averages are no longer of any real interest. That we might gain $1 on average over many plays is surely the last of our considerations. It is appealing to gain $1,000,002 on a winning play. However it is quite unappealing to lose $1,000,000 on a losing play. The short term risk associated with these two outcomes becomes the major factor in our decision. In ten plays we might expect on average a net profit of $10. But that average is unlikely to be realized exactly. It will only happen if there are exactly 5 favorable outcomes in the ten plays, and that net profit of $10 will arise only with a probability of just under 0.25. (An exact calculation shows that this precise average arises only with probability 0.246 if there are ten plays.)
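
(The 0.246 figure is easy to check; a minimal sketch using Python's standard library, where comb(10, 5) counts the ways of getting exactly 5 favorable outcomes in ten plays.)

from math import comb

# Probability of exactly 5 favorable outcomes in 10 fair plays:
print(comb(10, 5) / 2**10)   # 0.24609375, i.e. probability 0.246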

More likely, that is, with a probability slightly greater than 0.75, we will be either ahead by millions or behind by millions. If there are 6, 7, 8, 9 or 10 favorable outcomes, we would be ahead by millions. If there are 0, 1, 2, 3 or 4 favorable outcomes, we would be behind by millions. Each is equally likely and we do not know which will happen. Our decision to play or not depends on our appetite for this uncertainty and our willingness and capacity to sustain the possible losses. The expected value of a play has all but disappeared from our decision making.

That there is more to decisions than simple expected value calculations is the basis for the following paradox.

St. Petersburg Paradox

This is one of the earliest paradoxes of probabilistic decision theory. It was published by Daniel Bernoulli in 1738 in the proceedings of the Academy of Sciences of St. Petersburg. In a now standard formulation, it pertains to a game in which a coin is repeatedly tossed until a head appears. A player of the game is rewarded according to how late in the sequence of tosses a head first appears.

Head on the first toss: $2. Tail, then head: $4. Tail, tail, then head: $8. Etc.

That is, rewards are made on the following schedule:

Toss on which head first appears    1               2               3               4                 ...
Reward                              $2              $4              $8              $16               ...
Probability                         1/2             1/4 = 1/2^2     1/8 = 1/2^3     1/16 = 1/2^4      ...
Expected reward contribution        $2 x 1/2 = $1   $4 x 1/4 = $1   $8 x 1/8 = $1   $16 x 1/16 = $1   ...

Expected reward = $1 + $1 + $1 + $1 + ... = $∞

The outcome is that the player may be rewarded with $2, $4, $8, etc. with the different probabilities shown. As the tosses proceed, it becomes more and more likely that a head will show and terminate the game. However no matter how many tosses have happened, it is always possible that there could be more. An infinite number of tosses must be considered in this calculation of expected reward, even if those requiring many tosses are extremely unlikely.

The expected reward is computed by summing the products of each reward with its probability. The result is an infinite expected reward

$1 + $1 + $1 + $1 + ... = $∞

With such a lucrative reward luring in a player, we now ask:

How much should the player pay to be able to play this game? What is a fair entrance fee?

If we follow the guide of expected value as providing the value of the game to the player, then the player should find $∞ to be a fair entrance fee.

Since paying $∞ presents some practical obstacles, another way to present the result is that any finite entrance fee no matter how large is favorable to the player. If, for example, the game can be played with an entrance fee of $1,000,000, then the player can compute:

expected value = expected reward - fee  = $∞ - $1,000,000 = $∞

This will be true for an entrance fee of any finite amount:

expected value = expected reward - entrance fee = $∞ - $any finite amount = $∞

Something has gone wrong

It is also possible, but with zero probability, that no head appears in infinitely many tosses (assuming a supertask-like acceleration of the tossing). That is an even worse outcome for the player, since then the player loses the infinite entrance fee and receives no reward.

It should be clear that something has gone amiss here. The game is designed so that the reward a player can receive is always some finite amount. The first head must appear after some finite number of tosses, else the player receives no reward. It does not matter when that fruitful head first appears: the 10th toss, the 100th toss, the 1000th toss, etc. In each case, the reward actually paid will be some finite, even if large, amount: $2^10, $2^100, $2^1000, etc.

It follows that, if a player pays an $∞ entrance ticket, then the player is sure to lose, no matter when the head first appears. And worse, since $finite - $∞ = -$∞, the player will surely lose an infinite amount.

Matters are a little better if the player is able to pay only a finite entrance fee. However the expected value criterion reassures the player that any finite entrance fee is favorable. Such a player would be reassured that an entrance fee of $1,000,000,000,000 is favorable. Setting computations of expected utilities aside, this enormous entrance fee would most likely be disastrous. For a player paying it could only gain if an extremely unlikely string of tails-only tosses came about. Ten tails in a row occurs with probability 1/2^10 = 1/1024 and is a quite improbable outcome. A head on the next toss would then pay the player only $2,048, which is well short of the enormous entrance fee just suggested.

Diminishing Utility of Money

An early and still common response to the paradox is to assert that larger amounts of reward money cease to be as valuable to the player as smaller amounts. The first $1 of reward is prized. The next $1 after $100 is surely still valued. But if the player were to be rewarded with $1,000,000, would an extra $1 still have the same value to the player as the first $1?

This diminution in value is tracked by "utility," which is the true value to the player of the amounts concerned.

For a survey of responses to the paradox that does not include Feller's response, see Martin Peterson, "The St. Petersburg Paradox", The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/fall2020/entries/paradox-stpetersburg/ I take the value of 2.9 in the summation from this article.

An early proposal was that the utility--the true value of the rewards to the player--increases only with the square root of the dollar amount. Hence $1 has a utility of √1 = 1 and is not reduced. But $100 has a utility of √100 = 10. Hence the sequence of rewards:

$2, $4, $8, $16, ...

has a sequence of utilities:

√2 = 1.414,  √4 = 2, √8 = 2.828, √16 = 4, ...

The expected reward in terms of the player's values becomes:

√2/2 + √4/4 + √8/8 + √16/16 + ... ≈ 2.9

This resolution is quite unsatisfactory. It seeks to escape the difficulty by changing the problem posed. It tells us how someone who places this diminished value on money might assess the game. But that was not the original problem. We considered someone who values money according to the dollar amount.

The escape is in any case easily evaded simply by escalating the profile of the rewards offered, such that the expected reward in utility terms is still infinite, as the following check illustrates.
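
(One standard escalation, assumed here for illustration: pay $4^k, rather than $2^k, when the first head comes on toss k. Since √(4^k) = 2^k, each term of the expected square-root utility is exactly 1, and the sum again diverges. A minimal sketch of the check:)

from math import sqrt

# Escalated rewards of $4^k against square-root utility:
partial = 0.0
for k in range(1, 11):
    partial += sqrt(4**k) / 2**k   # each term is exactly 1
    print(k, partial)              # partial sums 1, 2, 3, ...: divergence is restored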

Long and Short Term Decision

While there are many responses to the paradox, in my view, one stands out as most successful. It rests on the distinction between games played in the short term and games played in the long term. By "long term" I mean that there are sufficiently many repetitions of the game that the weak law of large numbers can be brought to bear. Games played for fewer repetitions are being played in the short term.

The weak law of large numbers has been discussed in the Probability Theory Refresher. In brief, it assures us that, in many repeated plays, the frequencies of the various outcomes will approach their probabilities. More precisely it says that we can pick any degree of closeness between the frequencies and the probabilities. Then by repeating the game often enough, we can be sure with arbitrarily high probability that the frequencies lie within that degree of closeness to the probability.

This approach of the frequencies to the probabilities is then imprinted on the expectations. As the frequencies approach the probabilities, the rewards paid by the repeated runs of the game, when averaged over the runs, approach the expected rewards.

In my view (as explained earlier), expected value is only a useful guide in the longer term case. For it is useful only in so far as it tells us what to expect of the averages in many plays. In the short term, it gives us little.

A familiar example of a short term game is a lottery. The expected return on any lottery ticket is negative. Otherwise, a lottery could not cover its operating costs, let alone run at a profit. However, when someone buying a $1 ticket is told that the expected return is $0.50, that is unlikely to dissuade them. A lottery ticket purchase is a game played in the short term. The attraction is the possibility of a big win now, not whether the purchase of lottery tickets will be profitable on average in the long run.

The lottery game can become a long term game when the player participates in so many lotteries that there are many losing tickets and some winning ones. More realistically, we might have a well-funded cartel that purchases many tickets over a long period in many lotteries.

For such a cartel, the key concern is whether its expected returns will exceed the cost of the tickets over the long term life of its purchases. That is, the decision whether to engage in the lotteries will be determined by the expected returns in comparison with the entrance costs.

For most games, such as lotteries, the short term can be converted into the long term merely by playing the game often enough. In the short term, there is little security in the outcome. A player might lose; or a player might gain a lot. If, however, the game is played repeatedly, eventually the losses and gains on average smooth out the peaks and valleys of wins and losses. The averages approach the expected values and the expectations become good guides to the net, long term rewards.



Another more practical example of a short term game is the practice of buying insurance, on, for example, a car. We buy the insurance even though we know that the expected return in the unlikely event of a claim will be less than the cost of the policy. It must be so. Otherwise the insurance company could not stay in business. Insuring a single car is like a single play in the insurance game.

If, however, we operated a fleet of many cars, our thinking would change. Insuring the whole fleet is like playing the insurance game many times. We are now operating in the long term. We expect a small number of claims among the cars of our fleet. In deciding whether to buy insurance on each car, we would consider the cost to us of those losses averaged over the whole fleet. Since the insurance company must cover its costs and even make a profit, the total cost of purchasing insurance would exceed the total losses that the insurance would make good. In average terms, the premium paid on each car would exceed the loss averaged over each car; that difference is the negative expected return on a policy bought on each car. We would use that expected return to guide our decision not to insure the cars. It is advantageous to "self-insure," that is, to cover any losses out of our existing funds.

Feller's Variable Entrance Fee

William Feller was one of the most significant probability theorists of the 20th century. He presented his analysis in his An Introduction to Probability Theory and Its Applications. Vol. 1. 3rd ed. New York: John Wiley and Sons, 1968, pp. 251-53.

The distinctive feature of long term play is that, on repeated plays, the average net returns eventually stabilize to some constant value that matches the probabilistically expected value. What William Feller recognized is that the St. Petersburg game is an exception to this generic behavior. It does not matter how many times a player plays. The play always has a short term character. The gains never settle down to some stable average. Instead, as the player plays repeated games, the average rewards keep increasing. They are moving towards the infinite expected reward of $∞. They are on the way to stabilization to the average. However they never attain it. The average reward is always finite and thus always infinitely far away from the infinite expectation, no matter how many times the game is played.
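
(A simulation makes this perpetual short-term character vivid. A minimal sketch: the numbers of games are arbitrary choices, and individual runs fluctuate considerably, since the rewards have a heavy upper tail.)

import random

def st_petersburg_reward():
    # Toss until the first head; the reward is $2^k if it comes on toss k.
    reward = 2
    while random.random() < 0.5:   # tail: the game continues
        reward *= 2
    return reward

random.seed(0)
for exponent in (10, 14, 18, 22):
    games = 2**exponent
    average = sum(st_petersburg_reward() for _ in range(games)) / games
    print(games, round(average, 1))
# The averages do not stabilize. They keep drifting upward, roughly
# in step with log2(number of games), as Feller's analysis predicts.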

It follows that there can be no single entrance fee that makes the game fair. For the rewards a player expects depend essentially on how many games the player is willing to play. In Feller's resolution of the paradox, the player is asked to state in advance how many games the player will play. A fair entrance fee is then computed according to the number of games declared.

The value of that fair entrance ticket increases with the number of games that will be played. When that number becomes very large, Feller found a simple relation between the number of games to be played and the fair entrance ticket. It is:

If a player will play a large number of games, 2^n, then the fair entrance fee per game is approximately $n.

"Tolerably good"? For an assessment of how well the simple rule works for various values of n, see below.

The approximation is poorer for smaller values of n, but improves as n becomes larger. It is tolerably good when n is as large as 100. In that case, the player undertakes to play 2^100 games. The fair entrance fee for each game is $100. That means that the total entrance payment by the player is $100 x 2^100. That is a large entrance ticket. What justifies it is that the player will be returned that amount with high probability over the course of play. It is a fair entrance fee since neither the player nor someone offering the game will come out ahead on average.

2^100 games is a rather large number of games and not likely to arise in any realistic implementation of the game. An adjusted version of Feller's result below applies to smaller numbers of games.

Once the player has secured the agreement on the $100 entrance fee, it is in the best interests of the player to insist that all 2^100 games be offered. If they are not offered and the player cannot play all of them, then the entrance fees the player paid for the smaller number of games exceed the fair entrance fee.

The truncated St. Petersburg game

As a preliminary to Feller's analysis, it is convenient to define a truncated St. Petersburg game. In it, there is a maximum number of times that the coin will be tossed. Call it "n." If no head has appeared in these n tosses, then the game is over and the player receives no reward.

Toss on which head first appears    1               2               3               ...   n
Reward                              $2              $4              $8              ...   $2^n
Probability                         1/2             1/4 = 1/2^2     1/8 = 1/2^3     ...   1/2^n
Expected reward contribution        $2 x 1/2 = $1   $4 x 1/4 = $1   $8 x 1/8 = $1   ...   $2^n x 1/2^n = $1

Expected reward = $1 + $1 + $1 + ... + $1 (n terms) = $n

This truncated game behaves like familiar games. A short term player who will play the game only once may not be too concerned with the expected reward. The excitement of a possible win of a large reward of $2^n may be enticement enough to play the game.

If the game is played many times, however, the expected reward of $n becomes more significant. The law of large numbers will start to become relevant. For this to happen, the player would need to play many times. The largest reward is $2^n. It occurs with the least of all the probabilities, 1/2^n. On average we expect it to arise once in 2^n games. That gives us a very rough estimate of how many times the game must be played for long term considerations to apply.

If a player plays 2^n times, the player would receive, in very rough estimate, rewards of:

$2   in 1/2 of the games   = 2^n/2 games   = 2^(n-1) games
$4   in 1/4 of the games   = 2^n/4 games   = 2^(n-2) games
...
$2^n in 1/2^n of the games = 2^n/2^n games = 1 game

Summing, the total reward is

$2 x 2^(n-1) + $4 x 2^(n-2) + ... + $2^n x 1 = n x $2^n

Since we are assuming 2^n plays, this total reward would average out to a reward of $n for each game. That is, the expected reward would match, near enough, the average reward actually received in many games. (A quick numerical check follows after the next paragraph.)

This last result conforms with the law of large numbers. A decision to play would then be based on a comparison of the entrance fee with the expected reward on each game. If they match, the game would be judged fair. A smaller entrance fee would then be judged favorable to the player. A larger entrance fee would be unfavorable to the player.
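
(That arithmetic is easy to check for a specific n; a minimal sketch, where n = 10 is an arbitrary illustrative choice.)

n = 10
# Rewards of $2^k are received in 2^(n-k) of the 2^n games, for k = 1, ..., n.
total = sum(2**k * 2**(n - k) for k in range(1, n + 1))
print(total)          # 10240 = n x 2^n
print(total / 2**n)   # 10.0  = an average reward of $n per game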

The untruncated St. Petersburg game

The core idea behind Feller's analysis is that the entrance fee a player is to be charged for the St. Petersburg game must increase with the number of games the player will play.

Informally we can see why this might be reasonable. A player who plays just once is likely only to win a smaller reward. With great probability, the reward will come after only a few tosses. If, however, the player plays many times, then the chance of a highly profitable long run of tails increases. Because of the structure of the game, a player who does this can gain a significant advantage over the player who plays just once, even in the averages.

To get a sense of how this works, imagine a player who plays 2^n games. On average, this player expects to win roughly on the same schedule we saw for the truncated game:

$2   in 1/2 of the games   = 2^n/2 games   = 2^(n-1) games
$4   in 1/4 of the games   = 2^n/4 games   = 2^(n-2) games
...
$2^n in 1/2^n of the games = 2^n/2^n games = 1 game

Loosely speaking, the game looks to the player to be just like a play of 2^n games of the truncated St. Petersburg game, where the coin tosses are truncated to n tosses only.

This schedule then leads to the same average reward:

$n on average per game among the 2^n games played.

Feller gives the result in the reversed form. He assumes N games are played (instead of 2^n games here); the fair entrance fee per game is then $log2(N) (since log2(2^n) = n).

This result sets what is a fair entrance fee. If a player enlists to play 2^n games, then a fair entrance fee is $n per game, or $n x 2^n for all 2^n games. If the entrance fee is less, the game favors the player, who will on average make a net gain. If the entrance fee is greater than $n, then the game is unfavorable to the player, who will on average make a net loss.

Hence the fair entrance fee for the player increases without limit as the number of games to be played increases. This is the perpetual short term behavior of the St. Petersburg game. No matter how many times the game is played, the rewards it returns do not settle down to stable values, as is the familiar behavior of common games. Rather, the average reward per game keeps increasing indefinitely; and it does so in direct proportion to n, the logarithm of the number of games played.

Tightening the result

The above argument is quite imprecise. It is intended only to give a loose sense of what leads to the size of the fair entrance fee proposed by Feller.

Feller gives a precise derivation of a result that justifies this fair entrance fee in the place cited above. He shows that when the number of games played, 2^n, is large, then, with high probability, the average reward the player will accrue per game is close to $n. In a result that mimics the law of large numbers, Feller shows that the deviation (in ratio) of the average reward from $n can be made arbitrarily small, with arbitrarily great probability, merely by playing a sufficiently large number of games.

(What follows are some technical details that may be skipped unless they seem interesting.)

We can tighten the above analysis to bring it closer to Feller's in the following way. The determination of the average reward of $n per game tacitly makes an important assumption. It is that all the first heads in all the 2^n games occur within the first n tosses. That is, the assumption is that these 2^n games proceed almost exactly like the truncated game. That is an unrealistic assumption.

A short calculation shows why it is unrealistic. The probability that the first heads come within the first n tosses in all the games can be computed fairly easily:

Probability that the first head comes after n tosses in one game = 1/2^n.

Probability that the first head comes within the first n tosses in one game = (1 - 1/2^n).

The introduction of the base of natural logarithms, e = 2.71828..., comes from the limit equation Lim m→∞ (1 - 1/m)^m = 1/e = 0.3678.... The limit is approached quickly: (1 - 1/1000)^1000 = 0.36769...

Probability that the first head comes within the first n tosses in each of 2^n games
= (1 - 1/2^n) x (1 - 1/2^n) x ... (2^n times) ... x (1 - 1/2^n)
= (1 - 1/2^n)^(2^n) ≈ 1/e = 0.3678...

This shows that the simplifying assumption of the earlier analysis holds only with probability 0.3678..., that is, roughly a third of the time. Since this value of 1/e is independent of n, the difficulty persists no matter how many games are played.

It follows that mostly, that is, with probability of roughly 0.63, some of the games in a set of size 2^n will have their first heads outcome after n tosses. This turns out not to be a major problem for the estimate of average rewards. For with high probability, those later heads outcomes will arise in the first few tosses after the n tosses. They will have a slight effect on the average payout. However as n becomes large, the effect will be negligible.

To see this effect more precisely, consider the probability that all the first heads outcomes arise not in the first n tosses, but in the first n+1 tosses. Following the model above, the probability of this occurrence is

(1 - 1/2^(n+1)) x (1 - 1/2^(n+1)) x ... (2^n times) ... x (1 - 1/2^(n+1))
= (1 - 1/2^(n+1))^(2^n) = (1 - 1/2^(n+1))^(2^(n+1) x (1/2)) ≈ (1/e)^(1/2) = 0.6065...

If this case is realized, the average reward is $(n+1).

This computation can be continued for more coin tosses. The probability that all first heads arise in the first n+2 coin tosses is computed analogously as

(1 - 1/2^(n+2)) x (1 - 1/2^(n+2)) x ... (2^n times) ... x (1 - 1/2^(n+2))
= (1 - 1/2^(n+2))^(2^n) = (1 - 1/2^(n+2))^(2^(n+2) x (1/4)) ≈ (1/e)^(1/4) = 0.7788...

If this case is realized, the average reward is $(n+2).

Proceeding this way, we arrive at the results in the table below:

Number of tosses that contain all first heads    Probability                  Average reward
n                                                ≈ 1/e = 0.3678...            $n
n+1                                              ≈ (1/e)^(1/2) = 0.6065...    $(n+1)
n+2                                              ≈ (1/e)^(1/4) = 0.7788...    $(n+2)
n+3                                              ≈ (1/e)^(1/8) = 0.8824...    $(n+3)
n+4                                              ≈ (1/e)^(1/16) = 0.9394...   $(n+4)
n+5                                              ≈ (1/e)^(1/32) = 0.9692...   $(n+5)
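
(These probabilities are easy to reproduce numerically; a minimal sketch, where the choice n = 20 is arbitrary and any large n gives nearly the same values.)

from math import e

n = 20
for j in range(6):
    # probability that all 2^n games see their first head within n+j tosses
    exact = (1 - 1 / 2**(n + j)) ** (2**n)
    approx = (1 / e) ** (1 / 2**j)
    print(f"n+{j}:", round(exact, 4), round(approx, 4))
# Each row prints two matching values:
# 0.3679, 0.6065, 0.7788, 0.8825, 0.9394, 0.9692.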

To summarize: the simplified analysis of the last section tells us that, with high probability, the average reward when 2^n games are played is $n. This then justifies Feller's value of a fair entrance fee of $n. The difficulty with the simplified analysis is that it requires all the first heads to arise in the first n tosses; and we see in the table above that this condition holds only with probability 1/e = 0.3678...

While we cannot assume that the first heads always arise in the first n tosses, the table above shows us that we can assume something close enough to it for our purposes. That is, with increasing probability, we can assume that all the first heads come within the first n tosses or a few after them. If we extend our reach to the first n+5 tosses, the table shows, there is a probability of 0.97 that we have encompassed all the heads. Then the average reward per game increases just to $(n+5).

"in ratio." That is we look at the ratio of the average return $(n+5) in comparison with the entrance fee of $n, (n+5)/n = 1 + 5/n. The difference 5/n can be made arbitrarily small by considering sufficiently large n

Feller's result concerns what happens in the limit of very large n. If we consider large n, say n=100, n=1000, n=10,000, then the difference between n and n+5 becomes negligible in ratio. Thus for large n, Feller's $n entrance fee matches the average rewards expected with high probability.

Playing smaller numbers of games

As you may have already noticed, the number of games under discussion is very large. If we choose n=100, then the number of games to be played is

2^100 ≈ 1.27 x 10^30 ≈ 1,270,000,000,000,000,000,000,000,000,000

In practical terms, we should not expect anyone to play this many games! We would, however, need to commit to this many games if we are to use Feller's result of $100 as a reasonable estimate of the fair entrance ticket. (A better estimate using probability 0.97 in the above table is $105, which is 5% larger.)

If our concern is simply to discover if there is a method of determining a fair entrance ticket, the need for this large number of games may not be troublesome. If we are looking for practical advice on how to play the game, then Feller's result is unhelpful. We do get helpful results, however, from a small modification to Feller's analysis. The results needed are in the table above.

A more realistic strategy for playing the St. Petersburg game would be for a player to commit to playing a small number of games, such as 2^5 = 32 games. We then read off from the table that, with probability 0.97, the first heads in all the games will occur within the first n + 5 = 5 + 5 = 10 tosses. The average reward per game is then $(n + 5) = $10.

Commit to play 32 games with entrance fee $10 per game

While these matters should be resolved by well-founded calculations, it is plausible that this entrance fee is fair. With probability 0.97, all the first heads will occur in the first 10 tosses. These are the tosses indicated in the table below. The associated rewards and their probabilities are also shown.

Toss on which heads first appears   1     2     3     4      5      6      7       8       9       10
Reward                              $2    $4    $8    $16    $32    $64    $128    $256    $512    $1024
Probability                         1/2   1/4   1/8   1/16   1/32   1/64   1/128   1/256   1/512   1/1024

Considering the synopsis of likely play displayed in this table, an entrance fee of $10 seems much more reasonable than the infinite entrance fee, or the arbitrarily large entrance fees, indicated by the original expected reward calculation.
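
(As a quick check of the 0.97 figure used here, a minimal sketch:)

# Probability that, in 32 games, every first head arrives within the first 10 tosses:
print((1 - 1 / 2**10) ** 32)   # 0.9692..., the n+5 row of the earlier table with n = 5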

The Exchange Paradox

The paradox

In the exchange paradox, an amount of money is placed in one envelope and twice that amount in a second envelope.



The two envelopes are shuffled thoroughly. Player 1 is given one of them and a second player is given the other envelope. The players will keep the money in their envelopes.

Both players are now given the opportunity to swap their envelopes with that of the other player. Player 1 reasons as follows:

"My envelope has some amount in it: "x," say. The other envelope has either half that amount, x/2, or twice that amount, 2x. If I swap with the other player, I may get either. the probability of each case is equal to 1/2. Hence my expected gain in swapping is:

Expected amount in other envelope
        minus     actual amount in my envelope now

= (1/2) . 2x  + (1/2) . (x/2)  -  x
= (5/4) . x - x
= x/4

That is, if I swap, I have an expected gain of x/4.

Therefore I should swap."

Thus far, the decision problem seems straightforward. The problem, however, is that the game satisfies two conditions:

(symmetry) The game is perfectly symmetric. Both players can reason identically and come to the conclusion that both players profit from swapping.

(zero sum) Any gain one player makes is at the cost of the other player and vice versa. Hence both cannot profit from swapping.

We have a true contradiction:

"Symmetry" and the expected gain computation lead to the conclusion: both players gain if they swap.

"Zero sum" leads to the negation of this conclusion conclusion: It is not the case that both players gain if they swap.

This exchange paradox is set up remarkably easily. This ease is deceptive. It has produced an extended literature with a range of reactions and solutions proposed. I will not try to survey them here. Rather I will pursue what I think is the core difficulty. Its identification resolves the contradiction.

The paradox derives from the failure of the expected gain to be a well-defined quantity. Here the difficulties go one step beyond what we saw in the St. Petersburg paradox. There the expected value criterion failed since the expected value took on an infinite value. In that case, at least, the expected value had a definite, if awkward, value. In the exchange paradox, the problem is worse. The expectation has no well-defined value at all.

If we remove the expected value computation from the paradox, we no longer have both players expecting to gain in a swap. Without that, both conditions of symmetry and zero sum can be maintained. They merely now entail that neither player gains in swapping.

The infinite checkerboard

The example of the infinite checkerboard illustrates how the expected gain in the exchange paradox can have no well-defined value.

Consider an infinite checkerboard. The amounts +1 and -1 are placed on alternating squares as shown.

+1 -1 +1 -1 +1 -1 +1 -1 ...
-1 +1 -1 +1 -1 +1 -1 +1 ...
+1 -1 +1 -1 +1 -1 +1 -1 ...
-1 +1 -1 +1 -1 +1 -1 +1 ...
+1 -1 +1 -1 +1 -1 +1 -1 ...
-1 +1 -1 +1 -1 +1 -1 +1 ...
+1 -1 +1 -1 +1 -1 +1 -1 ...
-1 +1 -1 +1 -1 +1 -1 +1 ...
... ... ... ... ... ... ... ... ...

What is the total amount, net, on the checkerboard? One might think that an answer is given simply by adding up all the amounts in the squares. The difficulty is that there are different ways of adding the amounts and these different ways give different results, or no result at all.

For example, we can add up the amounts by pairs. We first sum horizontally adjacent amounts in pairs, as shown. Then we sum all those sums.

(+1 -1)  (+1 -1)  (+1 -1)  (+1 -1)  ...
(-1 +1)  (-1 +1)  (-1 +1)  (-1 +1)  ...
(+1 -1)  (+1 -1)  (+1 -1)  (+1 -1)  ...
(-1 +1)  (-1 +1)  (-1 +1)  (-1 +1)  ...
...      ...      ...      ...      ...

Then the total amount is

(1 - 1) + (1 - 1) + (1 - 1) + ...
= 0 + 0 + 0 + ... = 0

We might also add them by the paired diagonals shown:

Each diagonal of the checkerboard contains squares of just one sign, and the k-th diagonal contains k of them. So the diagonal sums are +1, -2, +3, -4, +5, -6, +7, ... Group the first diagonal by itself and pair each subsequent negative diagonal with the positive diagonal that follows it: +1, (-2 +3), (-4 +5), (-6 +7), ...

Then the total amount is

1 + (-2 + 3) + (-4 + 5) + (-6 + 7) + ...
= 1 + 1 + 1 + 1 + 1 + ... = ∞

Here is a closely related addition by a different pairing of the diagonals:

Now each positive diagonal is paired instead with the negative diagonal that follows it: (+1 -2), (+3 -4), (+5 -6), (+7 -8), ...

We now have a quite different result:

(1 - 2) + (3 - 4) + (5 - 6) + ...
= - 1 - 1 - 1 - 1 - ... =  -∞

These summations so far give definite results, even if different. Another summation gives no definite result. Again we sum along the diagonals:

Now take the diagonals one at a time, without any pairing. The diagonal sums are +1, -2, +3, -4, +5, -6, +7, -8, ...

The summation is:

1 - 2 + 3 - 4 + 5 - 6 + ...

This summation converges to no definite result. Rather the partial sums oscillate with ever-growing swings:

1 = 1
1 - 2 = -1
1 - 2 + 3 = 2
1 - 2 + 3 - 4 = -2
1 - 2 + 3 - 4 + 5 = 3
1 - 2 + 3 - 4 + 5 - 6 = -3
...

In short, there is no definite amount that represents the summation of all the amounts on the checkerboard.

However there is a trap lurking in the example. If we were to implement only one of the summation schemes indicated above, we might erroneously come to the conclusion that the summation has a definite value. For example, the first summation by individual pairs seems natural, since we know that there are as many +1s as -1s. So we might favor the first summation to zero. That would be a mistaken conclusion, since it neglects all the other summation schemes. That one summation seems natural is no basis for dismissing the rest.

No Expectations

The trap just sketched for the checkerboard problem is the same trap that leads to a paradox in the exchange problem.

To see it, we need to specify the problem more precisely. The easy analysis just assumes that the first player's envelope has "x" in it. A computation is carried out over x and then a dangerous assumption is made: whatever we find for some specific x, say $16, will be true for a probabilistic aggregation of all possible values of x. This dangerous assumption is false.

For a specific value of x, the expected gain on swapping is well defined. It is

(1/2) x $32 + (1/2) x $8 - $16 = $4

We shall see that, if we allow x to have a sufficient range of values to meet the conditions of the problem, then the expectation ceases to be well-defined.

It is essential for the problem that the range of values that x can take is infinite. For otherwise, there will be a greatest value for x. That greatest value would appear as a special case in the computation of expectations. The reasoning above promised an expectation of a gain of x/4 for all x on swapping. But if the player has the maximum value, then the other envelope must hold the smaller amount, and swapping leads to an assured loss. (This loss then cancels out any expected gains arising from smaller values of x, as the sketch below illustrates.)
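
(To see the cancellation concretely, here is a minimal sketch with an assumed finite cap of $64 on the amounts. The outcome space is the set of adjacent pairs, all equally probable, and player 1's expected gain on swapping comes out to exactly zero:)

from fractions import Fraction

amounts = [1, 2, 4, 8, 16, 32, 64]        # assumed cap: $64 is the greatest amount
pairs = []                                 # (player 1's amount, player 2's amount)
for a, b in zip(amounts, amounts[1:]):     # adjacent pairs (a, 2a), in both orders
    pairs += [(a, b), (b, a)]

p = Fraction(1, len(pairs))                # a proper, normalized uniform probability
expected_gain = sum(p * (other - mine) for mine, other in pairs)
print(expected_gain)   # 0: the assured loss at the $64 cap cancels the gains below it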

The specification of the problem does not indicate just what range of amounts may be in the envelopes. A set that is sufficient to meet the conditions of the problem is:

Possible amounts in each envelope: $1, $2, $4, $8, $16, ...

With these possible amounts, the outcome space consists of all adjacent pairs of amounts:

player 1 has $1, player 2 has $2
player 1 has $2, player 2 has $1
player 1 has $2, player 2 has $4
player 1 has $4, player 2 has $2
player 1 has $4, player 2 has $8
player 1 has $8, player 2 has $4
     and so on indefinitely.

All of these pairs are equally probable.

We can represent these pairs comprising the outcome space as columns in a table.

Player 1 1 2 2 4 4 8 8 16 16 32 ...
Player 2 2 1 4 2 8 4 16 8 32 16 ...

We can now compute what each player would gain, were the player to swap:

Player 1 1 2 2 4 4 8 8 16 16 32 ...
Player 2 2 1 4 2 8 4 16 8 32 16 ...
Player 1 gains in a swap +1 -1 +2 -2 +4 -4 +8 -8 +16 -16 ...
Player 2 gains in a swap -1 +1 -2 +2 -4 +4 -8 +8 -16 +16 ...

Consider the expected gain that player 1 would make if player 1 swaps. The gains corresponding to each outcome are in the third row: +1, -1, +2, -2, +4, -4, ... Each of these has the same probability since each pair in the outcome space has equal probability. If that probability is some small amount ε:

Expected gain to player 1 in a swap
= ε ( 1 - 1 + 2 - 2 + 4 - 4 + 8 - 8 + 16 - 16 + ... )

Digression: A serious complication in this computation is that the probability distribution for this outcome space cannot be normalized to unity. That is, we have a countable infinity of outcomes each with probability ε. If we set ε>0, then summing an infinity of them leads to an infinite total probability for the whole outcome space. It violates the normalization axiom that the probability of the total outcome space is one. If we set ε=0, then the summation gives a total probability of zero, which again violates the normalization axiom.

This tedious solution has been calculated in my paper, John D. Norton, "When the Sum of Our Expectations Fails Us: The Exchange Paradox." Pacific Philosophical Quarterly, 78 (1998), pp.34-58.

A careful but tedious solution to the problem is to replace the assumption of equal probability of each outcome by an assumption of diminishing probability as the amounts in the pairs become large, such that all the probabilities sum to one.

This tedious solution works, but makes the computations messy without adding any illumination. While it is strictly impermissible, we can proceed with the above formula if we assume that the probability ε is greater than zero but infinitesimally small, so that normalization of the probability measure is not compromised. This liberty does not affect the basic ideas of the analysis and I will proceed with it.

The infinite summation fails to have a well-defined value in the same way that the summation in the checkerboard problem fails to have a definite value. That is, we can compute the sum in different ways and get different results.

The simplest summation ends up concluding that no player has an advantage. That is, we group the terms such that they pair-wise cancel to zero:

Expected gain to player 1 in a swap
= ε ( (1 - 1) + (2 - 2) + (4 - 4) + (8 - 8) + (16 - 16) + ... )
= ε ( 0 + 0 + 0 + 0 + 0 + ...) = 0

The analysis attributed to player 1 above, however, groups the terms differently. That analysis considered what happens if the player has an amount x. The two possible outcomes are then that the other player has x/2 or 2x. For example, in case player 1 has x = $2, the other player may have $1 or $4. Applying this to all possible values of x, player 1 groups the terms in the summation as:

Expected gain to player 1 in a swap
= ε ( 1 + (- 1 + 2) + (- 2 + 4) + (- 4 + 8) + (- 8 + 16) +  ... )
= ε ( 1 + 1 + 2 + 4 + 8 + ...) = ε x ∞ > 0

With this summation, player 1 concludes that there is a positive expected gain in swapping so that it is desirable.

Player 2 can carry out the same calculation on the fourth row of the table, which records player 2's gains. It will correspond to a different summation of the expected gain of player 1. The order of the gains in the table makes it easy to represent player 1's groupings, but less clear for player 2's. Player 2's groupings take the term -1 by itself and then pair each remaining gain +x with the loss -2x that appears two places later in the series:

( (-1), (1 - 2), (2 - 4), (4 - 8), (8 - 16), ... )

Grouping these terms together, player 2's summation of player 1's expectation sums to:

Expected gain to player 1 in a swap
= ε ( -1 + (1 - 2) + (2 - 4) + (4 - 8) + (8 - 16) + ... )
= ε ( -1 - 1 - 2 - 4 - 8 - ...) = ε x -∞ < 0

Thus player 2 computes the expected gain for player 1, if they swap, to be negative, which means that there is a positive expected gain for player 2.

These three ways of carrying out the summation are just three of infinitely many ways that it could be summed. These different ways can produce any result at all or, as with the case of the checkerboard, no definite result. We conclude that the expected value of a swap can provide no guide to the players since the quantity is not well-defined.
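
(The grouping-dependence is easy to exhibit concretely; a minimal sketch in which the common factor ε is omitted and the series is truncated, since only the pattern of the partial sums matters.)

from itertools import accumulate

terms = []
for k in range(12):
    terms += [2**k, -(2**k)]   # player 1's gains in table order: +1, -1, +2, -2, +4, -4, ...

# Grouping 1: (1 - 1) + (2 - 2) + ... pairwise cancellation
g1 = [terms[2*i] + terms[2*i + 1] for i in range(12)]
# Grouping 2 (player 1's): 1 + (-1 + 2) + (-2 + 4) + ...
g2 = [terms[0]] + [terms[2*i + 1] + terms[2*i + 2] for i in range(11)]
# Grouping 3 (player 2's): -1 + (1 - 2) + (2 - 4) + ...
g3 = [terms[1]] + [terms[2*i] + terms[2*i + 3] for i in range(10)]

print(list(accumulate(g1)))   # 0, 0, 0, ... heading to 0
print(list(accumulate(g2)))   # 1, 2, 4, 8, ... heading to +infinity
print(list(accumulate(g3)))   # -1, -2, -4, -8, ... heading to -infinity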

The two conditions that remain well-defined are "symmetry" and "zero sum." Together, it follows from them alone that there is no gain in swapping.

To see this derived formally, call "gain 1" what player 1 gains in swapping and "gain 2" what player 2 gains in swapping. We have:

(from symmetry)    gain 1 = gain 2
(zero sum)             gain 1 + gain 2 = 0

This simple problem in algebra is quickly solved. Substituting the first equation into the second, we recover 2 x gain 1 = 0 and 2 x gain 2 = 0. That is

                             gain 1 = gain 2 = 0

To Ponder

The expected value criterion is popular, but it need not always be the best guide for a given decision problem. What other guides might we use instead?

Which of these guides are well-suited to games played in the short term? While purchasing a lottery ticket inevitably involves an action with a negative expected value, are there other criteria that can tell us which lotteries are better or worse for players?

The expected value is appropriate for games played in the long term. Is it always the criterion that should be used in this case? What other criteria might still apply?


August 14, December 4, 2021

Copyright, John D. Norton