In thinking about the self-indication assumption, let’s consider some experiments.
Experiment 1a
Suppose there are 1 million balls in an urn. 1 ball is orange and the rest are blue.
The algorithm goes like this: flip a coin. If heads, Large World wins and 999,999 balls will be randomly selected from the urn. If tails, Small World wins and 1 ball will be drawn from the urn.
Once the ball(s) have been drawn, we are told whether the orange ball was drawn.
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|orange ball drawn)≈1 and P(heads|orange ball not drawn)≈0
So, knowledge about whether the orange ball was drawn tells us a great deal about what world we are in.
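This update can be checked with a short calculation. Here is a sketch in Python using the numbers above (the variable names are mine):

```python
# Exact Bayes update for Experiment 1a, using the post's numbers.
N = 1_000_000
p_heads = 0.5
p_orange_heads = (N - 1) / N   # heads: 999,999 of the N balls are drawn
p_orange_tails = 1 / N         # tails: a single ball is drawn

p_orange = p_heads * p_orange_heads + (1 - p_heads) * p_orange_tails
post_heads_orange = p_heads * p_orange_heads / p_orange
post_heads_no_orange = p_heads * (1 - p_orange_heads) / (1 - p_orange)

print(post_heads_orange)     # ≈ 0.999999
print(post_heads_no_orange)  # ≈ 0.000001
```

So learning whether the orange ball was drawn moves the probability of Large World essentially all the way to 1 or all the way to 0.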
Experiment 1b
Suppose there are 1 million balls in an urn. All of the balls are blue.
The algorithm goes like this: flip a coin. If heads, Large World wins and 999,999 balls will be randomly selected from the urn and then painted orange. If tails, Small World wins and 1 ball will be drawn from the urn and then painted orange.
Once the ball(s) have been drawn, we are told whether at least one ball was drawn and then painted orange.
Prior probability: P(heads)=0.5
Posterior probability: P(heads|at least one blue ball painted orange)=P(heads)=0.5
Because regardless of the result of the coin flip at least one ball would be painted orange, knowing that at least one ball was painted orange tells us nothing about the result of the coin flip. So in this experiment, the prior probability equals the posterior probability.
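The same Bayes machinery makes this explicit. A sketch (variable names are mine): both likelihoods are 1, so the update is a no-op.

```python
# Experiment 1b: "at least one ball was painted orange" is certain under
# both outcomes of the flip, so the posterior equals the prior.
p_heads = 0.5
p_evt_heads = 1.0   # heads: 999,999 balls drawn, all painted orange
p_evt_tails = 1.0   # tails: 1 ball drawn and painted orange

p_evt = p_heads * p_evt_heads + (1 - p_heads) * p_evt_tails
posterior = p_heads * p_evt_heads / p_evt
print(posterior)  # 0.5
```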
Experiment 2a
1,000,000 people are in a giant urn. Each person is labeled with a number (number 1 through number 1,000,000).
A coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
Ahead of time, we label person #5,214 as special. After the coin flip, and after the sample is selected, we are told whether special person #5,214 was selected.
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|person #5,214 selected)≈1 and P(heads|person #5,214 not selected)≈0
Experiment 2b
1,000,000 people are in a giant urn. Each person is labeled with a number (number 1 through number 1,000,000).
A coin will be flipped. If heads, Large World wins and 999,999 people will be randomly selected from the urn. If tails, Small World wins and 1 person will be drawn from the urn.
After the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).
Prior probability of Large World: P(heads)=0.5
Posterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5
Regardless of whether the coin landed heads or tails, we knew we would be told about some person being selected. So, the fact that we were told that someone was selected tells us nothing about which world we are in.
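The contrast between 2a and 2b can be put side by side numerically. This is just a sketch of the two Bayes updates, using the numbers above:

```python
# Bayes updates for Experiments 2a and 2b (numbers from the post).
p_heads = 0.5

# 2a: the evidence is "pre-designated person #5,214 was selected".
p_e_heads = 999_999 / 1_000_000   # chance #5,214 lands in a sample of 999,999
p_e_tails = 1 / 1_000_000         # chance #5,214 is the single person drawn
post_2a = p_heads * p_e_heads / (p_heads * p_e_heads + (1 - p_heads) * p_e_tails)

# 2b: the evidence is "some person or other was selected", which is
# certain under both heads and tails.
p_e_heads = 1.0
p_e_tails = 1.0
post_2b = p_heads * p_e_heads / (p_heads * p_e_heads + (1 - p_heads) * p_e_tails)

print(post_2a)  # ≈ 0.999999
print(post_2b)  # 0.5
```

Same setup, same coin, same sample; the only difference is whether the person was designated before or after the selection.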
Self-indication assumption (SIA)
Recall that the SIA is
Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.
“Given the fact that you exist…” Why me? Because I was already selected. I am that ball that was painted orange. I am person #X. I only became the special ball and the special number after I was selected.
The mistake of the SIA is that the data were generated by experiments like 1b and 2b, but are treated as if they came from experiments like 1a and 2a.
---
Update: an even more detailed argument here
In experiment 2b, do you think a person who is created (and before they know how many other people there are) should think heads and tails are equally likely?
Robert,
A person created in 2b *should* think heads and tails are equally likely, but they probably wouldn’t think that. It’s just egotism.
“out of all of the people that could have been selected, *I* was selected. That is significant. That is too unlikely in Small World.”
But of course, no matter who was selected, they would have said the same thing. If person #111 was selected they would have said “the probability of Small World given that I, special person #111 was selected, is too small. We must be in Large World.” And if person #521,456 was selected, they would have said “the probability of Small World given that I, special person #521,456 was selected, is too small. We must be in Large World.” All we know is that someone who thinks they are special was selected. But that had a probability of 1 of occurring, which means we have no new information.
[…] Roy argues that the self indication assumption is equivalent to such reasoning, and thus wrong. For the self […]
I’m not sure if I understand the lesson you take to follow from 2b.
(1) If you, an external observer, will be informed about the number of one of the selected people (say, by randomly picking one of the selected people and reporting his/her number), then obviously you have no reason to prefer Large World over Small World. (You’re bound to find out about some number or other; if your probability function treats each of these as confirming Large World, then your probability function is irrational.)
(2) But consider the epistemic situation of someone who was in the urn (conscious and all) and finds herself selected: that is strong evidence for Large World. Before selection, this person's conditional probabilities should be P(I will be selected | Large World) = 0.999999 and P(I will be selected | Small World) = 0.000001. Being selected then counts as evidence for Large World.
You’ve made it clear that you agree with (1). But do you disagree with (2)? Why?
I agree with (2). I don’t really see a difference between that and my scenario 2a.
[…] kind of reasoning leads to bad inference, such as the self-indication assumption or the doomsday argument. The wikipedia version of the doomsday argument is: “supposing the […]