
Archive for April, 2010

Lack of new information

if I observe an event that had a probability of 1 of occurring, I have no new knowledge

That is:  If P(B)=1, then P(A|B)=P(A)

More generally:

if I observe an event that had the same probability of occurring for all hypotheses under consideration, then I have no new knowledge about those hypotheses

Suppose A can take values a1,…,aK. If P(B|A=a1)=…=P(B|A=aK), then P(A|B)=P(A).
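To see why, apply Bayes’ rule: P(A=ai|B) = P(B|A=ai)P(A=ai) / [P(B|A=a1)P(A=a1) + … + P(B|A=aK)P(A=aK)]. When all of the P(B|A=aj) are equal, that common factor cancels from the numerator and denominator, and since the priors sum to 1 we are left with P(A=ai|B) = P(A=ai).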

The Sleeping Beauty Problem

Consider the Sleeping Beauty problem (quoting wikipedia):

Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.

Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”

When she is asked the interview question, she knows the details of the experiment, and that she has been woken up at least one time.

Let A represent the result of the coin flip (1=heads, 0=tails).

Let B=1 if Sleeping Beauty has been woken up at least one time, and B=0 otherwise.

P(A=1)=1/2

P(B=1|A=1)=P(B=1|A=0)=1.  (Regardless of whether heads or tails was selected, there was a probability of 1 that Beauty would be woken up, and would have no memory of whether she had been woken up in the past.)

Thus, P(A=1|B=1)=P(A=1)=1/2.

So, the fact that she was woken up does not make it more likely that the flip came up tails.  The fact that she was woken up and asked a question provided her with no new information about heads or tails.

However…

Loss functions

Beauty doesn’t like being wrong.  If the coin landed heads, she wants her guess to be p=1 (where p is her guess for the probability of heads).  The farther her guess is from that optimal value, the bigger the loss.

Let’s make that idea more concrete by adding the following twist to the problem:  suppose every time she is interviewed, she has to pay $|p-A|.

Beauty, being a rationalist, will want to minimize her expected loss.  If heads comes up, she will lose $|p-1| on Monday.  If tails comes up, she will lose $|p-0| on Monday and $|p-0| on Tuesday.  Thus, she will choose p to minimize (1/2)*|p-1|+(1/2)*(|p-0|+|p-0|), which for p in [0,1] simplifies to 1/2 + p/2.

The value of p that minimizes her expected loss is p=0.  That’s my surprise solution to the Sleeping Beauty problem – she should say she’s sure it’s tails.

If instead you use squared error loss, i.e., you minimize (1/2)*(p-1)^2+(1/2)*2*(p-0)^2, then you get the popular p=1/3 solution.  But why squared error loss?
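As a quick numerical check (my own sketch, not part of the original argument), the following Python snippet scans a grid of guesses p and confirms both minimizers:

```python
# Sketch: minimize Beauty's expected loss over a grid of guesses p in [0, 1].

def expected_abs_loss(p):
    # Heads (prob 1/2): one interview, losing |p - 1|.
    # Tails (prob 1/2): two interviews, losing |p - 0| each.
    return 0.5 * abs(p - 1) + 0.5 * (abs(p) + abs(p))

def expected_sq_loss(p):
    # Same setup, but with squared error loss.
    return 0.5 * (p - 1) ** 2 + 0.5 * 2 * (p - 0) ** 2

grid = [i / 10000 for i in range(10001)]
print(min(grid, key=expected_abs_loss))  # 0.0    -> the p=0 solution
print(min(grid, key=expected_sq_loss))   # 0.3333 -> the p=1/3 solution
```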

Conclusion

If Beauty sticks to probability laws, and updates based on evidence, she should guess p=0.5.

But, if Beauty counts being wrong on Monday and Tuesday as twice as disturbing as being wrong just on Monday, then she should guess p=0.  (This isn’t really her credence for heads, just the value she views as optimal because it minimizes her loss.)



The Dunning-Kruger effect was recognized by Bertrand Russell in the 1930s: “…the stupid are cocksure while the intelligent are full of doubt.”  Dunning and Kruger showed that the poorest performers grossly overestimate how well they perform relative to their peers.  They also found that the top performers underestimate their superiority.

While many people have noticed this phenomenon, what I found interesting is the finding that “improving the skills of participants…helped them recognize the limitations of their abilities.”  This suggests the effect can be seen both between different people and within the same person at different times.

I certainly have experienced this.  When I was 20 years old I held plenty of demonstrably false beliefs, and thought I knew far more than I did.  I had stronger opinions when I knew less.

However, I suspect that not everyone improves their competence and recognizes their previous errors.  I think it’s crucial that people not lock in to their beliefs at a young age (or any age, really).  The key way to lock in is to strongly tie your identity to your beliefs.  If your identity is tied to your beliefs, then it’s much harder to be open to changing your mind.  If you realize you were wrong, not only do you have to admit you made a mistake, but you basically have to undergo a change in identity.  Most likely the people you affiliate with also have their identity tied to those same beliefs, and they might not be too supportive of your epiphany.

Paul Graham suggested keeping your identity small, because “people can never have a fruitful argument about something that’s part of their identity.”  He said that “what makes politics and religion such minefields is that they engage so many people’s identities.”  Yes, politics is the mind-killer.


In An American Tragedy, Dreiser writes:

For in some blind, dualistic way both she and Asa insisted, as do all religionists, in disassociating God from harm and error and misery, while granting Him nevertheless supreme control.  They would seek for something else — some malign, treacherous, deceiving power which, in the face of God’s omniscience and omnipotence, still beguiles and betrays — and find it eventually in the error and perverseness of the human heart, which God has made, yet which He does not control, because He does not want to control it.

Religionists tend to credit God with the good things and blame the bad things on external influences (e.g., demons). I suppose this is a type of group-serving bias (where the group is people who believe in God).  Similarly, people tend to give themselves credit for success and blame bad outcomes on external influences (self-serving bias).

I suspect that most of the time we are not even aware we are doing it.  This is probably another example of self-deception having a fitness advantage.  One theory is “humans deceive themselves in order to better deceive others and thus have an advantage over them.”  Here, if we deceive ourselves we gain confidence (either in our belief about God or in our ability).  We also end up signaling our confidence and ability to others, potentially increasing our value to them as someone to associate with.

“Only the unimaginative carpenter fails to blame his tools.”  —Errol Morris


In thinking about the self-indication assumption, let’s consider some experiments.

Experiment 1a

Suppose there are 1 million balls in an urn.  1 ball is orange and the rest are blue.

The algorithm goes like this:  flip a coin.  If heads, Large World wins and 999,999 balls will be randomly selected from the urn.  If tails, Small World wins and 1 ball will be drawn from the urn.

Once the ball(s) have been drawn, we are told whether the orange ball was drawn.

Prior probability of Large World: P(heads)=0.5

Posterior probability of Large World: P(heads|orange ball drawn)≈1 and P(heads|orange ball not drawn)≈0

So, knowledge about whether the orange ball was drawn tells us a great deal about what world we are in.
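To see how close “≈1” really is, here is a small Python sketch (my addition, not from the original post) that runs the Bayes calculation exactly:

```python
from fractions import Fraction

# Likelihood of drawing the one orange ball under each world.
p_orange_given_heads = Fraction(999_999, 1_000_000)  # Large World draws 999,999 of 1,000,000 balls
p_orange_given_tails = Fraction(1, 1_000_000)        # Small World draws 1 ball
prior_heads = Fraction(1, 2)

posterior_heads = (p_orange_given_heads * prior_heads) / (
    p_orange_given_heads * prior_heads + p_orange_given_tails * (1 - prior_heads)
)
print(posterior_heads)         # 999999/1000000
print(float(posterior_heads))  # 0.999999
```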

Experiment 1b

Suppose there are 1 million balls in an urn.  All of the balls are blue.

The algorithm goes like this:  flip a coin.  If heads, Large World wins and 999,999 balls will be randomly selected from the urn and then painted orange.  If tails, Small World wins and 1 ball will be drawn from the urn and then painted orange.

Once the ball(s) have been drawn and painted, we are told whether at least one of the drawn balls was painted orange.

Prior probability:  P(heads)=0.5

Posterior probability:  P(heads|at least one blue ball painted orange)=P(heads)=0.5

Because regardless of the result of the coin flip at least one ball would be painted orange, knowing that at least one ball was painted orange tells us nothing about the result of the coin flip.  So in this experiment, the prior probability equals the posterior probability.

Experiment 2a

1,000,000 people are in a giant urn.  Each person is labeled with a number (number 1 through number 1,000,000).

A coin will be flipped.  If heads, Large World wins and 999,999 people will be randomly selected from the urn.  If tails, Small World wins and 1 person will be drawn from the urn.

Ahead of time, we label person #5,214 as special.  After the coin flip, and after the sample is selected, we are told whether special person #5,214 was selected.

Prior probability of Large World: P(heads)=0.5

Posterior probability of Large World: P(heads|person #5,214 selected)≈1 and P(heads|person #5,214 not selected)≈0

Experiment 2b

1,000,000 people are in a giant urn.  Each person is labeled with a number (number 1 through number 1,000,000).

A coin will be flipped.  If heads, Large World wins and 999,999 people will be randomly selected from the urn.  If tails, Small World wins and 1 person will be drawn from the urn.

After the coin flip, and after the sample is selected, we are told that person #X was selected (where X is an integer between 1 and 1,000,000).

Prior probability of Large World: P(heads)=0.5

Posterior probability of Large World: P(heads|person #X selected)=P(heads)=0.5

Regardless of whether the coin landed heads or tails, we knew we would be told about some person being selected.  So, the fact that we were told that someone was selected tells us nothing about which world we are in.

Self-indication assumption (SIA)

Recall that the SIA is

Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

“Given the fact that you exist…”  Why me?  Because I was already selected.  I am that ball that was painted orange.  I am person #X.  I only became the special ball and the special number after I was selected.

The mistake of the SIA is that the data were generated from experiments like 1b and 2b, but are treated as if they came from 1a and 2a.

—-

Update: an even more detailed argument here


When I was running today, I started with the wind at my back.  However, I didn’t even notice it was windy until I turned around and started running into the wind.  I wonder if this is a general principle: that we are more likely to notice something that is harming us than something that is helping us.


Rare events

Suppose you drew 5 cards and they were all hearts.  You might ask “what’s the probability of drawing 5 hearts at random from a 52 card deck?”  Well, sure, that’s easy enough to calculate.  The probability is 0.000495.  Wow!  Such a rare event!  Must be your lucky day.

But… you asked about 5 hearts because that’s what you experienced.  You peeked at the data, and then asked your question.  I am sure that if you had gotten 5 clubs you would have asked about that.  Or if you had gotten a straight.

So, one way to rephrase the question is as follows:  “what’s the probability of drawing 5 cards at random from a 52 card deck that, upon viewing, would have gotten my attention and prompted me to ask a question about probability?”  I’m confident that any flush or straight would have gotten your attention, as would four of a kind.  So let’s stick with those.  The probability of drawing a flush, a straight or four of a kind is 0.006.  While this is still a very rare event, it’s about 12 times higher than the probability of getting 5 hearts.
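For anyone who wants to check the arithmetic, here is a short Python sketch (my addition); it assumes the attention-grabbing hands are exactly the flushes, straights, and four-of-a-kinds named above:

```python
from math import comb

total = comb(52, 5)          # all 5-card hands: 2,598,960

five_hearts = comb(13, 5)    # 1,287 ways to pick 5 of the 13 hearts
print(five_hearts / total)   # ~0.000495

flushes = 4 * comb(13, 5)    # 5,148; includes straight flushes
straights = 10 * 4 ** 5      # 10,240; 10 rank sequences, any suits; includes straight flushes
straight_flushes = 10 * 4    # 40; counted in both groups above
four_of_a_kind = 13 * 48     # 624; four of a rank plus any fifth card

attention_grabbing = flushes + straights - straight_flushes + four_of_a_kind
print(attention_grabbing / total)  # ~0.0061
```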

My existence

I have heard it argued that

the probability that you are aware right now,  when your existence could have ended billions of years ago, or could have come into being billions of years in the future – this probability is so small, so insignificant, that it is practically non-existent… this could only mean the existence of God.

However, the probability calculation is incorrect.   Let’s define the event A as follows:

A:  I exist now out of all of the possible times I could have existed

The  argument is that P(A) is essentially 0.  However, the argument ignores the fact that I already do exist right now, which is why I am asking the question. I am asking a question based on data that I have already seen.  We have to condition on that data.  Therefore,  let’s define the event B as:

B:  I exist right now

What we are interested in is not P(A), but P(A|B).  We have to condition on B, because B is the reason we are asking the question.  It’s the data we peeked at.   Well, it turns out that P(A|B)=1.  Thus, it’s not a rare event and certainly cannot be an argument in favor of any religious beliefs.

Self-indication assumption

The self-indication assumption (SIA) is

Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

Katja Grace presents a simple example of SIA:

For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is that and that you exist, SIA says heads was twice as likely as tails.

For simplicity, let’s assume this action only happened once (one coin flip).  Thus, there is either one person or two people in the world, depending on whether the coin came up tails or heads.  Let’s assume that if we are in the two-person world, we don’t see the other person.

Let’s define the event A as:

event A:  the coin came up tails (i.e., the one-person world)

SIA reasoning is that since there were 3 possible people, including myself, and I was selected, the probability that I’m in the one-person world is 1/3.

The flaw here is the focus on me existing.  I already exist, so it’s cheating to write questions about me existing after seeing that I exist.  Like the card player who formed the hypothesis after seeing the cards, we’re asking the wrong question.

Instead, let’s define the event B as:

event B:  at least one person exists

Event B is what we really want to condition on.  We want to condition on an arbitrary person existing — there is nothing special about me in this scenario (unless you cheat and use the data you peeked at).  Well, in either of these two worlds (heads or tails) there will exist someone who is wondering which world they are in.  So, the fact that there is someone wondering which world they are in gives us no information about which world we are in.  That is, P(A|B)=P(A)=1/2.
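The disagreement boils down to which likelihoods you plug into Bayes’ rule.  Here is a small Python sketch (my addition) showing both calculations side by side:

```python
from fractions import Fraction

prior_tails = Fraction(1, 2)  # fair coin: one-person world (tails) vs two-person world (heads)

# SIA-style reasoning: treat "me" as a random draw from all 3 possible
# people, so each world is weighted by its number of observers.
sia_answer = (prior_tails * 1) / (prior_tails * 1 + (1 - prior_tails) * 2)
print(sia_answer)  # 1/3

# Conditioning on event B ("at least one person exists"), which has
# probability 1 in both worlds, so the likelihoods cancel.
b_answer = (prior_tails * 1) / (prior_tails * 1 + (1 - prior_tails) * 1)
print(b_answer)    # 1/2
```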

So, in my opinion SIA is wrong.  The fact that I exist tells me nothing about the number of observers.

Doomsday argument

The doomsday argument seems flawed for the same reason.  It basically says that the fact that you exist now is evidence that the human race will die out soon.

It’s correct that if you could randomly draw a human from the N that will ever exist, you would probably pick one towards the tail of the distribution (when the population is greatest).  However, we cannot think of ourselves as a random draw.  That’s peeking at the data: we already exist.

It should be obvious that the argument is flawed from the fact that it always comes to the same conclusion.  Suppose, for example, that the total number of humans to ever exist (past and future) is N.  Every human, numbers 1, 2, …, N, will at some point exist and could wonder whether humans will face extinction soon.  So, if we condition on the fact that right now there is at least one human asking that question, we have no information about whether that human is close to number N.  All information about possible extinction has to come from other sources.


While humans tend to have more self-control than other species, they still place irrationally high value on rewards in the present.  For example, many people would prefer $50 today over $55 next week.  However, they would tend to prefer $55 in 53 weeks over $50 in 52 weeks.  The same one-week delay carries less of a penalty the farther into the future it is.  This preference reversal is another type of asymmetry (just like we are more likely to blame someone for unintended harm than to commend them for unintended good; link)
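As an illustration (my own sketch, not something from the post), hyperbolic discounting, a standard model of this bias, reproduces the reversal.  The value function V = amount/(1 + k*delay) and the rate k = 0.2 are assumptions chosen only to make the example concrete:

```python
# Hyperbolic discounting: subjective value falls off as 1 / (1 + k * delay).
def value(amount, delay_weeks, k=0.2):
    return amount / (1 + k * delay_weeks)

# Today vs next week: the immediate $50 wins.
print(value(50, 0), value(55, 1))    # 50.0 vs ~45.8

# 52 vs 53 weeks out: the same one-week delay now barely matters,
# so the larger $55 wins -- the preference reversal described above.
print(value(50, 52), value(55, 53))  # ~4.39 vs ~4.74
```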

This suggests that different neural systems might be involved with short-term and long-term decision making, and they compete with each other.

For example, from the abstract of McClure et al. (2004, Science):

When humans are offered the choice between rewards available at different points in time, the relative values of the options are discounted according to their expected delays until delivery. Using functional magnetic resonance imaging, we examined the neural correlates of time discounting while subjects made a series of choices between monetary reward options that varied by delay to delivery. We demonstrate that two separate systems are involved in such decisions. Parts of the limbic system associated with the midbrain dopamine system, including paralimbic cortex, are preferentially activated by decisions involving immediately available rewards. In contrast, regions of the lateral prefrontal cortex and posterior parietal cortex are engaged uniformly by intertemporal choices irrespective of delay. Furthermore, the relative engagement of the two systems is directly associated with subjects’ choices, with greater relative fronto-parietal activity when subjects choose longer term options.

And from the paper itself:

In Aesop’s classic fable, the ant and the grasshopper are used to illustrate two familiar, but disparate, approaches to human intertemporal decision making. The grasshopper luxuriates during a warm summer day, inattentive to the future. The ant, in contrast, stores food for the upcoming winter. Human decision makers seem to be torn between an impulse to act like the indulgent grasshopper and an awareness that the patient ant often gets ahead in the long run.

This all seems to be related to near-far hypocrisy.  When near and far goals conflict, we have trouble resisting the dopamine high that comes with immediate reward.  We apparently are more grasshopper than ant.  So, someone might really love their spouse but still cheat on them.  Someone might really care about their favorite charity, but still buy that gold-plated television that they could get on sale if they ordered immediately.

Decisions affected by immediate-reward bias often result in buyer’s remorse.  We recognize the irrationality of the decision shortly after the immediate gratification.  In general, it seems like we are more rational when in far mode.

What to do about it?  Well, when faced with a decision, why not impose a rationality test.   For example, you could ask yourself, “would I still make the same decision if I couldn’t get the reward until next month [or some other appropriate time in the future]?”
