Sleeping Beauty problem


The Sleeping Beauty problem is a puzzle in decision theory in which an ideally rational epistemic agent is to be awakened once or twice according to the toss of a coin, and asked for her degree of belief that the coin came up heads.

The problem was originally formulated in unpublished work by Arnold Zuboff (later published as "One Self: The Logic of Experience"[1]) and subsequently in a paper by Adam Elga,[2] but it builds on earlier problems of imperfect recall and on the older "paradox of the absentminded driver". The name "Sleeping Beauty" was first given to the problem in an extensive discussion in the Usenet newsgroup rec.puzzles in 1999.[3]

The problem

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: on Sunday she will be put to sleep. Once or twice during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be awakened and interviewed on Monday only. If the coin comes up tails, she will be awakened and interviewed on both Monday and Tuesday. In either case she will be awakened on Wednesday without an interview, and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: "What is your belief (subjective probability, credence) for the proposition that the coin landed heads?"

Solutions

The problem continues to generate debate.

Thirder position

The thirder position argues that the probability of heads is 1/3. Adam Elga originally argued for this position[2] as follows: suppose Sleeping Beauty is told, and comes to fully believe, that the coin landed tails. By even a highly restricted principle of indifference, her credence that it is Monday should equal her credence that it is Tuesday, since being in one situation would be subjectively indistinguishable from being in the other. In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus

P(Tails and Tuesday) = P(Tails and Monday).

Consider now that Sleeping Beauty is told upon awakening, and comes to fully believe, that it is Monday. She knows that the experimental procedure does not require the coin to be tossed until Tuesday morning, since the result only affects what happens after the Monday interview. Since the objective chance of the coin landing heads equals the chance of it landing tails, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus

P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday).

Since these three outcomes are exhaustive and exclusive for one trial, the probability of each is one-third by the previous two steps in the argument.
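The thirder's conclusion can also be illustrated by counting awakenings over many repetitions of the experiment. The following simulation is a hypothetical sketch (it is not taken from Elga's paper, and the names and trial count are arbitrary):

    import random

    trials = 100_000
    heads_awakenings = 0
    total_awakenings = 0

    for _ in range(trials):
        heads = random.random() < 0.5   # fair coin toss
        wakings = 1 if heads else 2     # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings

    # About one third of all awakenings follow a heads toss.
    print(heads_awakenings / total_awakenings)

Whether this per-awakening frequency is the right standard for Beauty's credence is taken up in the Operationalization section below.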

Halfer position

David Lewis responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2.[4] On this view, Sleeping Beauty receives no new non-self-locating information throughout the experiment, because she was told all of its details in advance. Since her credence before the experiment is P(Heads) = 1/2, and she gains no new relevant evidence when she wakes up, she ought to continue to have a credence of P(Heads) = 1/2. This directly contradicts one of the thirder's premises: keeping P(Heads) = 1/2 while retaining Elga's indifference between Monday and Tuesday given tails yields P(Heads and Monday) = 1/2 and P(Tails and Monday) = P(Tails and Tuesday) = 1/4, so that P(Heads | Monday) = 2/3 and P(Tails | Monday) = 1/3 rather than being equal.

Nick Bostrom argues that Sleeping Beauty does have new evidence about her future compared with Sunday: "that she is now in it". However, since she does not know whether it is Monday or Tuesday, the halfer argument fails.[5] In particular, she gains the information that it is not the case both that it is Tuesday and that heads was flipped.

Double Halfer position

The double halfer position[6] argues that both P(Heads) and P(Heads | Monday) equal 1/2. Mikaël Cozic,[7] in particular, argues that context-sensitive propositions such as "it is Monday" are in general problematic for conditionalization, and proposes the use of an imaging rule instead, which supports the double halfer position.

Operationalization

The Sleeping Beauty puzzle reduces to an easy and uncontroversial problem of probability theory as soon as an objective procedure is agreed upon for assessing whether Beauty's subjective credence is correct. Such an operationalization can be carried out in different ways: by offering Beauty a bet; more elaborately, by setting up a Dutch book; or by repeating the experiment many times and collecting statistics. For any such protocol, the outcome depends on how Beauty's Monday responses and her Tuesday responses are combined.

Consider long-run average outcomes, and suppose the experiment were repeated 1,000 times. About 500 tosses would be expected to come up heads and about 500 tails, so Beauty would be awoken about 500 times after heads on Monday, about 500 times after tails on Monday, and about 500 times after tails on Tuesday.

  • If Beauty herself collects statistics about the coin tosses (in a way that is not obstructed by the memory erasure when she is put back to sleep), she would register heads at one-third of her awakenings. If this long-run average should equal her credence, then she should answer P(Heads) = 1/3.
  • However, being fully aware of the experimental protocol and its implications, Beauty may reason that she is not asked to estimate a statistic of the circumstances of her awakenings, but a statistic of the coin tosses that precede all awakenings. She would therefore answer P(Heads) = 1/2. Both statistics appear in the simulation below.
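Both long-run averages can be read off a single simulation. The following sketch is a hypothetical illustration in Python (the names and run count are arbitrary); it runs the protocol described above many times and reports the per-awakening frequency of heads alongside the per-toss frequency:

    import random

    runs = 1_000
    heads_tosses = 0
    heads_awakenings = 0
    total_awakenings = 0

    for _ in range(runs):
        heads = random.random() < 0.5   # fair coin toss
        if heads:
            heads_tosses += 1
            heads_awakenings += 1       # awakened on Monday only
            total_awakenings += 1
        else:
            total_awakenings += 2       # awakened on Monday and Tuesday

    print("heads per awakening:", heads_awakenings / total_awakenings)  # about 1/3
    print("heads per coin toss:", heads_tosses / runs)                  # about 1/2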

The situation is even simpler with bets: if Beauty and the experimenter agree that bets from her different awakenings are cumulative, then a heads quota of 1/3 would be fair. If, on the other hand, Tuesday bets are to be discarded (being dummy bets, undertaken only to keep the Monday and Tuesday awakenings indistinguishable to Beauty), then the fair quota would be 1/2. Both cases are checked in the calculation below.
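The fairness of both quotas can be verified with a short expected-value calculation. The sketch below assumes one hypothetical betting convention (at each counted awakening Beauty stakes the quota on heads and is paid 1 if the coin in fact landed heads); other conventions would single out the same fair quotas.

    from fractions import Fraction

    def expected_profit(quota, count_tuesday_bet):
        # Beauty's expected profit per run of the experiment.
        half = Fraction(1, 2)
        # Heads: a single Monday bet, which wins.
        heads_profit = 1 - quota
        # Tails: the Monday bet loses, and the Tuesday bet also loses if it counts.
        losing_bets = 2 if count_tuesday_bet else 1
        tails_profit = -quota * losing_bets
        return half * heads_profit + half * tails_profit

    print(expected_profit(Fraction(1, 3), count_tuesday_bet=True))   # 0: fair when bets are cumulative
    print(expected_profit(Fraction(1, 2), count_tuesday_bet=False))  # 0: fair when Tuesday bets are discarded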

All of this appears to be common ground among philosophers. The Sleeping Beauty problem is therefore not about mathematical probability theory. Rather, the question is whether subjective probability, or credence, is a well-defined concept, and how it must be operationalized.

Connections to other problems

Nick Bostrom argues that the thirder position is implied by the Self-Indication Assumption.

Credence about what precedes awakenings is a core question in connection with the anthropic principle.

Variations

The days of the week are irrelevant, but are included because they are used in some expositions. A non-fantastical variation, called The Sailor's Child, has been introduced by Radford Neal. The problem is sometimes discussed in cosmology as an analogue of questions about the number of observers in various cosmological models.

The problem need not involve a fictional situation. For example, computers can be programmed to act as Sleeping Beauty and not know when they are being run; consider a program that is run twice if tails is flipped and once if heads is flipped.
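A minimal sketch of such a program (hypothetical; the problem prescribes no particular implementation) makes the self-location puzzle concrete: the interview routine keeps no state and receives no input, so nothing distinguishes its first invocation from its second.

    import random

    def beauty_interview():
        # No memory of earlier invocations and no access to the coin or the day:
        # from the inside, every invocation looks exactly the same.
        print("What is your credence that the coin landed heads?")

    coin_is_heads = random.random() < 0.5  # fair coin flip
    for _ in range(1 if coin_is_heads else 2):
        beauty_interview()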

Extreme Sleeping Beauty

Formulated by Nick Bostrom, this variant differs from the original in that Beauty is awakened one million and one times if tails comes up. Under thirder reasoning there is then one heads-awakening against 1,000,001 tails-awakenings, so Beauty's credence in heads upon awakening would be 1/1,000,002, whereas halfers would still answer 1/2.

References

  1. Zuboff, Arnold (1990). "One Self: The Logic of Experience". Inquiry. 33 (1): 39–68. (subscription required)
  2. Elga, Adam (2000). "Self-locating belief and the Sleeping Beauty problem". Analysis. 60 (2): 143–147.
  3. Extensive discussion of the problem in the Usenet newsgroup rec.puzzles (1999).
  4. Lewis, David (2001). "Sleeping Beauty: reply to Elga". Analysis. 61 (3): 171–176.
  5. Bostrom, Nick (2007). "Sleeping beauty and self-location: A hybrid model". Synthese. 157 (1): 59–78.
  6. Meacham, Christopher J. G. (2008). "Sleeping beauty and the dynamics of de se beliefs". Philosophical Studies. 138 (2): 245–269.
  7. Cozic, Mikaël (2011). "Imaging and Sleeping Beauty: A case for double-halfers". International Journal of Approximate Reasoning. 52 (2): 137–143.

Other works discussing the Sleeping Beauty problem

  • Neal, R. (2006). Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning, preprint
  • Titelbaum, M. (2013). Quitting Certainties, 210–229, 233–237, 241–249, 250, 276–277
