Archive for the ‘cognitive biases’ Category

One of my favorite episodes of 99% Invisible is episode 78: No Armed Bandit. In the episode, Natasha Dow Schüll discusses the evolution of casinos and the psychology of gambling addiction. At one point, she explains that penny slots are very popular because people (a) can afford to bet many lines at once and (b) ‘win’ almost every time:

When you’re betting 300 pennies on 100 lines, you’re gonna win back a portion of those. For the first time in history, you’re not betting a token and then losing it all or doubling or tripling it. That’s a really volatile setup. This is much less volatile. You’re spreading your bet across a hundred lines, and it’s really a lot safer, if you want to see it that way, because chances are you’re going to win something on some of those lines. And it just so happens that these machines are designed, when you win back something, to give you all the winning stimuli that come along with a real win. And so one researcher has aptly called this a false win, or losses disguised as wins. ’Cause you’re putting in 45 coins and ‘winning’ 9 back; that’s a pretty radical net loss. But yet there’s little ditties that are being played. It’s virtually no different than when you really do win.
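
To make the volatility claim concrete, here is a minimal simulation of a simplified multi-line machine. All of the numbers (hit probability, payout multiplier) are invented for illustration; real machines use far more elaborate pay tables:

```python
import random
import statistics

def spin(lines, bet_per_line, win_prob=0.2, payout_mult=2.5):
    """One spin: each line independently hits with win_prob and pays payout_mult times its bet."""
    payout = sum(
        bet_per_line * payout_mult
        for _ in range(lines)
        if random.random() < win_prob
    )
    return payout, payout - lines * bet_per_line  # (coins returned, net result)

def summarize(lines, bet_per_line, n=20_000):
    results = [spin(lines, bet_per_line) for _ in range(n)]
    nets = [net for _, net in results]
    # A 'false win': the machine pays something, but less than the total bet.
    false_wins = sum(1 for payout, net in results if payout > 0 and net < 0)
    print(f"{lines:3d} line(s) x {bet_per_line:2d} coin(s): "
          f"mean net {statistics.mean(nets):7.2f}, "
          f"stdev {statistics.stdev(nets):6.2f}, "
          f"false wins {false_wins / n:6.1%}")

summarize(1, 45)   # one line, 45 coins: all-or-nothing, highly volatile
summarize(45, 1)   # 45 lines, 1 coin each: same stake, much smoother, mostly false wins
```

With the same 45-coin stake, the single-line bet swings between a big win and a total loss, while the 45-line bet returns a steady trickle of payouts, most of which are net losses dressed up as wins.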

I feel like false wins are a metaphor for much of life (and possibly for life itself if you’re feeling extra cynical).

But I also think this can happen at the group level. Consider the idea of ‘following your dreams.’ By this I mean when a youngish person invests everything they have into pursuing some long-shot goal: dropping out of college to try to become a famous musician, moving to LA to try to get into movies, trying to turn a casual interest in magic into becoming the next David Blaine, or trying to become a best-selling author. Think of every person who does this as a coin. If every person who did this had their story told, we would be aware of how often they failed to achieve their goals and how often they wished they hadn’t spent so many years ignoring the advice of people who told them they weren’t good enough. We would also be aware of the details of each success story. If it were a TV show and each episode were one person’s story, we might have to watch 1,000 episodes detailing failures before observing one success (I have no idea what the true ratio of failures to successes is). In that case, we would know whether each coin won or lost. We would feel these losses, and we might not be so encouraging of mildly talented people pursuing their dreams.

Alternatively, suppose pretty much the only stories we heard were the successes. The famous musician is interviewed, looks into the camera, and tells young people to always follow their dreams. The best-selling author tells the story of how they kept writing no matter how many times they were rejected, and how it paid off. In this scenario, we have some awareness that there must be a bunch of people who didn’t succeed, but we don’t hear their stories. We only hear the winning sounds and see the flashing lights of the false win.

 

Read Full Post »

Marcia: “It’s unethical to pay people so little money to clean your house.”

Jan: “They need the money. They believe they are better off with this job than without it. I am paying them something. You are paying them nothing and calling me unethical.”

Marcia: “But they should not be in a position where a low paying house cleaning job is their best option.”

This is an example of a fallacy that I will call ‘appeal to a priori counterfactual worlds’ [1]. Marcia’s argument that Jan is being unethical amounts to the claim that the world should be different than it is. However, Marcia’s counterfactual world (where everyone has good employment options) is not something that Jan can create. Jan did not choose a world of exploitation over a world of equality. Jan lives in a world where some people do have very few options, and she made an arrangement that both parties believe is mutually beneficial [2].

——

[1] I will briefly describe what I mean by an a priori counterfactual, in case it is not clear. Suppose I have to make a decision between option A and option B. Before I make the decision, we can imagine that there are two potential outcomes: how the world will be if I choose option A and how the world will be if I choose option B. Once I make the decision, one world becomes factual and the other becomes counterfactual. But both of these worlds were possible a priori (before I made the decision). Jan could choose to hire someone to clean her house, or she could choose something else. At the time she was making the decision about whether or not to hire someone, there was no potential outcome that would involve a world where everyone could get enjoyable, high-paying jobs. So this world that Marcia envisions, however appealing you might find it, was a priori counterfactual.

[2] I’m not saying whether Jan is being ethical or not. I’m just saying that Marcia’s argument against Jan is flawed.

Read Full Post »

Relying on resemblance?

This is from Kahneman’s Thinking, Fast and Slow:

As you consider the next question, please assume that Steve was selected at random from a representative sample:

An individual has been described by a neighbor as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer?

Apparently, most people, when presented with this question, answer librarian. Kahneman concludes that this is because of a resemblance heuristic (a common stereotype of librarians is being shy and orderly). He argues that the correct answer is farmer, because there are a lot more farmers than librarians.
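
Kahneman’s base-rate point can be made concrete with a quick Bayes-style calculation. All of the numbers below are hypothetical (I am not using real occupation statistics); the point is only that a large enough base-rate ratio swamps the resemblance:

```python
# Hypothetical counts and probabilities, chosen only for illustration.
farmers, librarians = 2_000_000, 200_000   # assumed 10:1 base-rate ratio
p_fits_given_librarian = 0.40              # assumed: many librarians fit the description
p_fits_given_farmer = 0.10                 # assumed: few farmers fit it

fitting_librarians = librarians * p_fits_given_librarian   # 80,000
fitting_farmers = farmers * p_fits_given_farmer            # 200,000

p_librarian = fitting_librarians / (fitting_librarians + fitting_farmers)
print(f"P(librarian | fits description) = {p_librarian:.2f}")   # ~0.29
```

Even though a librarian is four times as likely as a farmer to fit the description in this made-up example, someone who fits the description is still more likely to be a farmer, simply because farmers vastly outnumber librarians.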

While his conclusion might be correct, I argue that there are several alternative explanations.

Imprecise questions

What does it mean that Steve was selected at random from a representative sample? Does he mean that Steve was randomly selected from the population of adults in the US? One could interpret his question as: “is it more likely that a person, selected at random from the population of shy, withdrawn, helpful, meek people who have a passion for details, but little interest in the world of reality, works as a librarian or a farmer?”

If that is the correct interpretation, why does he make it personal by saying that a neighbor was describing Steve? If a neighbor is choosing to gossip about someone, that might offer clues into what the person’s profession is. Are librarians or farmers more likely to be gossiped about?

If a neighbor is talking about someone being shy and needing structure, and that is the stereotype of a librarian, then it very well might be more likely that Steve really is a librarian (precisely because people notice stereotypical traits in others and like to talk about them). So people who said librarian very well might be correct. Further, the neighbor could simply be wrong about Steve; the description is only the neighbor’s opinion of him. Maybe the neighbor assumes those things about him because he is a librarian. Maybe it is the neighbor who made the error.

One could interpret his question as: “is it more likely that a person who is described by a neighbor as shy, withdrawn, helpful, meek, with a passion for details, but little interest in the world of reality, works as a librarian or a farmer?” What is the correct answer to that question? Does anyone know?

Social questions

Most of the time, when we are asked a question, it is in a social setting. In those settings, most people are not very precise and we have to interpret what they mean. If you always interpret questions literally, you will often make errors. In a social setting, if someone asked me the question about the farmer vs librarian, I would probably interpret their question as follows:  “if you compare the percentage of librarians who are shy, withdrawn, helpful, meek, with a passion for details, but little interest in the world of reality, with the percentage of farmers who have those traits, which one do you think is higher?”  I think this is what most people would mean if they asked a question like the one Kahneman posed.

Kahneman made his question social and imprecise by describing a gossiping neighbor. People are used to the imprecision that comes with being asked questions in social settings. They interpret these questions and answer the question that they think the person is asking. Because of that, it is unclear whether librarian or farmer is truly the correct answer (it depends on what question the reader was intending to answer).

In research settings, questions are typically more precise. Researchers often spend a large amount of time editing survey questions to make them as precise as possible. The problem here is that this is a research question that is as sloppy as a question asked in a social setting. So what can one conclude from the answers to it?

Read Full Post »

The arguments that follow would apply to any action X that I have done on a regular basis but have recently decided it would be net beneficial to other living things if I did less often. However, I will use a hypothetical example to make the points.

Suppose that I object to factory farming on the grounds that it creates a great deal of suffering for non-human animals, such as cows, pigs, and chickens. Suppose also that over the past few years I have eaten meat on about 60% of days. I decide that, if I greatly reduce the amount I spend on products from factory farms, I will slightly reduce the demand for those products. I am also aware that that alone will not really accomplish much. So I also plan to try to influence other people to do the same.

Thus, I have two goals:  (1) reduce how much I spend on factory farm products and (2) influence friends.

It seems to me that (1) and (2) are not orthogonal.  That is the issue I want to explore.

If I eat meat on about 60% of days, that would be about 219 days per year.  Suppose I decide to reduce my daily probability of eating meat to 0.05, which would be about 18 days per year.  I would be reducing my meat eating days by about 200 per year, which seems like quite a large number.   I could call myself a High Probability Vegetarian (meaning that there is a high probability that I will be a vegetarian on a given day).
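
Checking that arithmetic (using a 365-day year):

```python
days_per_year = 365
current_meat_days = 0.60 * days_per_year             # about 219 days per year
reduced_meat_days = 0.05 * days_per_year             # about 18 days per year
print(round(current_meat_days - reduced_meat_days))  # about 201 fewer meat-eating days
```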

However, high probability ‘-ians’ can be viewed as hypocrites. I have noticed a tendency for people to dismiss someone’s argument if that person exhibits any behavior that could be interpreted as hypocritical. “Al Gore warns us about global warming, but he flies in a private jet!” “You say that consumerism is bad, but you own an iPhone!” “That politician says they support public schools, but their child goes to private school!” “You say you are anti-war, but you don’t refuse to pay taxes (which fund the war)!” “That Republican is against illegal immigration, but he employs undocumented workers!”

Ideally, we could separate the person from the argument. To the degree that we link them, I think we are searching for excuses to reject the argument. (this person is making me feel guilty…their arguments are good…but i don’t want to change…oh, look, they’re a hypocrite! yay! i’m off the hook). This is the logical fallacy known as the appeal to hypocrisy (also called tu quoque).

Perhaps this could be avoided if the person is up front about their personal life. For example, it seems perfectly reasonable to me to hold the following two positions: that factory farming is unethical (in its current form) and that reducing how much I spend on factory farm products will have no impact on the amount that animals suffer. These are two issues that should be kept separate. One is the ethics of the thing; the other is the strategy for changing things deemed unethical. The anti-factory-farming person could say, “I think factory farming is bad because of reasons XYZ. However, I don’t think my personal spending habits have any impact in an economy this large.” If you acknowledge your lack of a particular action up front, at least no one can dismiss you later for not being virtuous enough.

I find this rather unsatisfying, because it sounds as if there is no hope for change. Is it really necessary to propose a solution to be taken seriously? It seems to me that if you are passionate about a cause, people expect you to do something about it. I actually think that is reasonable, but making a good argument is doing something about it. In fact, it very well might have a bigger impact than the other actions that would prevent you from being dismissed. The catch is that, if good arguments are good actions, but good arguments will be dismissed if you are viewed as a ‘hypocrite,’ then you might have to do things that you think will have no direct impact in order for your arguments to have the impact that you want them to have.

Read Full Post »

In this entry, unless otherwise noted, humanism will refer to the belief that humans have special status (i.e., superiority) among species (in the same spirit as the way sexism refers to views about the sexes, and racism refers to views about races).

Science has gradually chipped away at humanism. Evidence for heliocentrism, evolution, the cognitive maps of bees, superorganisms, and the evolution of culture, along with evidence against dualism and free will, to name some examples, has had a big impact. However, humanism still persists in various ways throughout our culture.

Consider language.  Here are some humanist words/concepts:

  • ‘natural’ – If humans build a skyscraper it’s unnatural, but if bees build a beehive it’s natural. If humans clean a new environment with antibacterial soap, it’s unnatural, but if jewel wasps do it, it’s natural (note: ants also make antibiotics). And so on.
    All living and non-living things affect the environment around them.  Humans have their own niches in that regard (in terms of how we do it), but so does everything else.
  • ‘humanist’ / ‘humanism’ – Sometimes people use the word ‘humanism’ as a synonym for being nice.  That definition of humanism is itself humanist (the bad kind), because it suggests that humans have some special ability for kindness.
  • ‘animals’ – The word ‘animals’ often implies only non-human animals.

Humanist thinking also includes greatly overestimating how many things are uniquely human.

It’s great to see people like Neil Shubin trying to get people to see the evolution of living things as a small part of the evolution of the universe. Humanism will be difficult to defeat, however, because we have egos interacting with resistance to paradigm shifts.

Read Full Post »

Hopefully it is not controversial to say that most humans have BS detectors that do not work very well. How often, for example, does someone tell you something that you immediately know isn’t true (and that can be debunked with two seconds of Googling or a visit to Snopes)?

I think it is very difficult for logical brain people to understand that when social brain people say they believe X, they are not saying that they’ve given it a lot of thought, have looked at the evidence, and decided that X was true.  Saying that they believe X is telling you what social group they belong to — it’s throwing up a gang sign.

Given that humans have large social brains, perhaps it is not surprising that having a good BS detector is not that important. To bond with your in-group, it’s important to trust its members.

However, it is not hard to imagine people having good BS detectors while still signaling trust. If you think of it in terms of multilevel selection, you could reap the group benefits by signaling agreement while enjoying the individual benefits of not believing nonsense. So why doesn’t that seem to be how our brains evolved?

I think the problem here is that in cases where the BS detector would benefit you individually, pretending to agree with the group would harm them (costing you the group benefit).  For example, if your group says that everyone should eat berries that you know are poison, you will not get the group benefits if your group members all die (while you secretly spit out the berries).  On the other hand, if their beliefs are more benign (like belief in a rain god), you would not get much (if any) individual benefit from awareness that what they believe is false.
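
Here is a toy payoff sketch of that argument. Every number is invented, and the risk of being caught faking belief is left out for simplicity:

```python
# Toy payoffs for one agent; all numbers are invented for illustration.
GROUP_BENEFIT = 5   # bonding/coordination payoff from a functioning group
POISON_COST = 10    # individual cost of acting on a dangerous false belief

def payoff(belief_is_dangerous: bool, secretly_disbelieve: bool) -> int:
    """Individual payoff for one agent in the two scenarios described above."""
    if belief_is_dangerous:
        # Poison-berry case: the sincere believers eat the berries regardless
        # of what you do, so the group collapses and the bonding benefit is
        # lost for everyone, including the secret disbeliever.
        return 0 if secretly_disbelieve else -POISON_COST
    # Rain-god case: the belief is benign, so private disbelief gains you
    # roughly nothing, while the group (and its benefit) carries on.
    return GROUP_BENEFIT

for dangerous in (True, False):
    for disbelieve in (False, True):
        scenario = "dangerous" if dangerous else "benign"
        strategy = "secretly disbelieve" if disbelieve else "sincerely believe"
        print(f"{scenario:>9} belief, {strategy:<18}: {payoff(dangerous, disbelieve):+d}")
```

The only cell where the private BS detector pays off is the one where the group benefit has already been destroyed; in the benign case it is at best a tie. On this toy accounting, there is little pressure to evolve accurate belief.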

Without that multilevel selection pressure, I think the group benefits of bonding via trust trump the individual benefits of being a critical thinker. Thus, broken BS detectors.

Read Full Post »

There are plenty of examples of people following orders to commit what are widely considered immoral acts.   Some argue that people do so because they follow the crowd, are afraid to defy authority, or believe they are not responsible (lost agency).  However, I wonder if in many cases people identify with the authority figures (and the group the authority figure presides over) and adopt their beliefs.

Jim Emerson discusses this in his article on good and evil in superhero movies:

It’s so easy to claim that Evil People just decide to Do Evil because they are Evil (totally unlike the rest of us!). But the truth is, many Nazi war criminals and those ordinary people who actively or passively collaborated with them weren’t all, as the cliché has it, “just following orders.” They believed the horrors of genocide served what they saw as a greater purpose: maintaining the purity of their beloved Germany, their race and their empire. So, as difficult and terrible as it might be…, the Final Solution was, they believed, a noble calling in the long run. …They weren’t monsters — they were people like you and me who found themselves capable of doing monstrous things in the name of a Great Cause in which their faith was pure and fervent and unshakeable.

Emerson also pointed to Alex Haslam’s appearance on Radiolab, in which Haslam argued that participants in the Milgram experiment identified with the group (and authority figure) that was carrying out the experiment:

They’re engaged with the task. They’re trying to be good participants. They’re trying to do the right thing. They’re not doing something because they have to; they’re doing it because they think they ought to.

Read Full Post »
