
Archive for the ‘statistics’ Category

Divorce

Causal effects of divorce?

It’s very common for people to cite statistics about how children from divorced families do worse, on average, than children from parents who stayed married. Outcomes for the children from these families might include things like depression, teen pregnancy, high school graduation, college diploma, arrests, or (their own) marital success.

Most of these statistics simply compare children from divorced parents with children whose parents remained married (possibly controlling for some factors, such as SES, age, and race). However, this is really not the right comparison.

Denote by Z the divorce variable. This is just a yes/no indicator function. If Z=1 the couple gets divorced and if Z=0 they don’t get divorced.

Let Y be an outcome of interest. For example, Y could be whether or not the child ends up graduating from HS, income level at age 30, happiness level at age 25, whether or not they get arrested by age 30, etc. Just imagine some kind of thing that we care about that we think might be affected by divorce.

So, typical statistics on outcomes of divorce involve comparing average values of Y|Z=1 with average values of Y|Z=0, where the vertical bar can be read as ‘given’ or ‘conditional on’. But these are two populations, and differences in Y might not have anything to do with divorce (i.e., this is not a causal comparison).

Instead, we might want to consider what would have happened if the people who got divorced did not get divorced. For that, we will need potential outcome notation. Denote by Y(z) the outcome that would have occurred had Z been z. So, we might be interested in average differences between Y(1)|Z=1 and Y(0)|Z=1. The second term is counterfactual in that we do not observe Y(0) for anyone who did get divorced. We could think about ways of estimating it. However, I will argue that this is not what we really want.
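In symbols, the contrast just described is the following (this is just the standard potential-outcomes way of writing an effect of divorce among the couples who actually divorced, in the spirit of an "effect of treatment on the treated"):

```latex
% Average effect of divorce among couples who actually divorced.
\[
  \underbrace{E[\,Y(1)\mid Z=1\,]}_{\text{observed for divorced couples}}
  \;-\;
  \underbrace{E[\,Y(0)\mid Z=1\,]}_{\text{counterfactual: same couples, had they stayed married}}
\]
```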

Divorce tax

People generally perceive that divorce is a bad thing, especially if the people getting divorced have children. I will focus here strictly on the married/divorced with children scenario.

I sometimes hear people argue that we should make it harder for people to get divorced, that there should be more social stigma attached to it, etc. The collection of penalties for getting divorced might include the following: legal fees; reduction in disposable income (the parents will now likely have to pay for two places to live, rather than one, etc.); loss of some relationships (you might lose contact with members of your ex’s family; some friends might stop talking to you); the kids might be extra stressed and might act out in various ways, making parenting more difficult; feelings of guilt or shame; stress from divorce and/or custody negotiations/hearings; sadness at the loss of the relationship. Let’s call this collection of penalties the divorce tax.

The divorce tax can be increased or decreased. Laws could be passed to change how difficult it is to get divorced (making it either more or less difficult). Fees could be changed. The level of social pressure to stay married could change.

Denote by R the divorce tax. For simplicity, think of this as a univariate severity measure (for example, with larger values meaning more of a divorce tax). This is our policy-like variable, as it is something that can be moved. We could increase R by making divorce more shameful, or expensive, or just harder to obtain. We could reduce R by taking away divorce stigma, making it easier, or making it cheaper.

Now consider the effect of R on Z. Using potential outcomes notation, we have Z(r), which is the indicator of divorce if we set R=r. Thus, if Z(r)=Z(r’), then the change in divorce penalty from r to r’ would not affect whether or not the couple got divorced. If, instead, Z(r)=1 and Z(r’)=0, then changing the divorce tax from r to r’ saved this marriage.

The current divorce tax is R=r. Should we change it to r’, where r’>r (i.e., should we make it more difficult to get divorced)? Or should we change it to r*, where r*<r?

Note that, if r’ is close to r, then, for most people, Z(r)=Z(r’) (we wouldn’t expect a small change in the divorce tax to affect many people). So what we would really like to do is focus on the cases where the change in R does affect Z. That is, compare the average value of Y when we set R=r versus R=r’, among the people for whom Z(r) ≠ Z(r’).

In other words, picture the subset of married people who are having problems and would get divorced under the current divorce tax, but wouldn’t if the tax were at the higher level r’. Would their children fare better under divorce tax r’? Keep in mind that we are restricting to couples who are having serious enough problems that they would get divorced at the current divorce tax level. Is staying together good for those couples? A toy simulation of this comparison is sketched below.
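Everything in the sketch below is invented for illustration (the latent "distress" variable, the outcome model, the effect sizes, and the way the divorce tax enters); it simply generates couples with potential divorce statuses Z(r) and Z(r’), and then compares child outcomes only within the stratum whose divorce status would change with the tax.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent "marital distress" per couple, and noise for the child outcome.
# Both are purely illustrative assumptions.
distress = rng.normal(size=n)
noise = rng.normal(size=n)

def divorced(r):
    # Divorce occurs when distress exceeds a threshold that rises with the
    # divorce tax r: a higher tax makes divorce less likely.
    return (distress > r).astype(int)

def child_outcome(z):
    # Invented outcome model: children do worse with higher distress, and a
    # bit worse still if a high-distress couple stays married (z = 0).
    return -0.5 * distress - 0.3 * (1 - z) * (distress > 0.5) + noise

r_current, r_higher = 0.5, 1.0                       # hypothetical levels r and r'
z_r, z_rp = divorced(r_current), divorced(r_higher)  # Z(r), Z(r')
y_r, y_rp = child_outcome(z_r), child_outcome(z_rp)

# The stratum of interest: couples whose divorce status would change.
affected = z_r != z_rp
print("share of couples affected by the tax change:", affected.mean())
print("mean change in child outcome from raising the tax, in that stratum:",
      (y_rp[affected] - y_r[affected]).mean())
```

In this made-up world, children in the affected stratum do slightly worse when their high-distress parents stay married, so raising the tax looks harmful there; with different invented parameters it could easily go the other way. The point is only that the relevant comparison is within that stratum, not across all married versus divorced families.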

I made a few simplifications in the above for clarity. I should mention, however, that we would want to capture a time element in several ways. It could be that the higher divorce tax just delays divorce (for example, think of people who stayed together strictly for the kids, and then got divorced after their youngest turned 18). Is delaying divorce good?

Other effects of divorce tax 

There is an additional challenge that I haven’t yet addressed. The divorce tax itself could directly affect the outcome (not through its effect on divorce).

Consider people who would get divorced under either divorce tax level, r or r’. Their outcome Y might differ depending on whether R=r or R=r’, even though Z(r)=Z(r’). The higher divorce tax might not prevent the couple from getting divorced, but it might make their lives worse (more stress; bigger financial burden; more shame).

We therefore might expect that lowering R could improve the lives of those who would have gotten divorced anyway. Thus, any discussion of what the right level of R is should consider both the costs and benefits.

Traditional marriage

There seems to be a desire by some to get back to traditional marriage, where divorce was extremely rare and people who did get divorced experienced great shame (i.e., a high level of R). By traditional marriage, most people mean the farming era up until, say, the early 1900s. However, the purpose of marriage was much different back then. People needed to stay married to survive. They needed children for labor. As Sarah Perry put it:

..children were essentially the property of their parents. Their labor could be used for the parents’ good, and they were accustomed to strict and austere treatment. Parents had claims not only to their children’s labor in childhood, but even to their wealth in adulthood. To put it crudely, marrying a wife meant buying a slave factory, and children were valuable slaves.

In situations where spouses and children are not necessary for survival, marriage becomes much more about romance, connectedness, personal growth, amusement, and companionship. For a large part of the population, the commitment is more about assurances of being loved than about assurances of being financially cared for (although there are still plenty of people who are married only because they can’t afford not to be).

I don’t think people who are sad about the increasing divorce rates are really longing for marriage as it used to be. It seems to me they want the best of both — romantic love that never ends. This, however, would be a new state — not something we would be returning to.

Discourage marriage?

Current US culture is one where marriage is pretty strongly encouraged. When you date someone, it’s not uncommon to get questions about whether and when you will get married. People get very excited about the possibility of planning a wedding celebration. However, if we believe that divorce is bad, then an alternative way to potentially reduce the divorce rate is to discourage marriage (or encourage it less strongly). One could argue that too much divorce just means that too many people got married. I like this idea, because I am not entirely comfortable with making commitments on behalf of your distant future self (the consent problem).

Read Full Post »

Researchers often dichotomize a continuous-valued variable of interest because it is easier to understand and explain (and you don’t have to worry about pesky little problems like whether or not the relationship is linear).

Consider the following example: suppose the population of interest is people hospitalized due to some specific type of infection. Suppose the research question of interest is whether the time until appropriate antibiotics are given affects the risk of death. The belief is that antibiotics should be given quickly after diagnosis, and delays of even several hours can greatly increase risk of death.

So, we have data on how long it took to give the appropriate antibiotics to the patient, and we have data on whether the patient died due to the infection. Let’s imagine that we will analyze the data using regression models.

We could fit a model with time as a continuous variable. If you include it as a linear term in the model, then the results are interpreted in terms of how specific increases in time affect the risk or odds of death. You would also need to think about whether linearity is reasonable. You could use splines or something similar to allow for non-linearity, but interpretation gets more difficult.

Alternatively, you could dichotomize. Suppose that about half of the time, patients are given antibiotics within 3 hours of diagnosis. Researchers might be tempted to fit a model with an indicator variable for the exposure (time until antibiotics less than 3 hours? yes or no).
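For concreteness, here is a sketch of the two modelling choices on simulated data. The data-generating process, the 3-hour cutoff, and the use of statsmodels are all illustrative assumptions, not an analysis of any real dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000

# Simulated data: time to antibiotics (hours) and death, with risk rising
# smoothly with time -- a dose-response, not a step at 3 hours.
time_hours = rng.gamma(shape=2.0, scale=1.5, size=n)
p_death = 1 / (1 + np.exp(-(-3.0 + 0.25 * time_hours)))
df = pd.DataFrame({
    "time_hours": time_hours,
    "late": (time_hours > 3).astype(int),   # dichotomized exposure
    "died": rng.binomial(1, p_death),
})

# Model 1: time as a continuous (linear) term -- every extra hour matters.
continuous_fit = smf.logit("died ~ time_hours", data=df).fit(disp=False)

# Model 2: dichotomized exposure -- only the 3-hour threshold matters.
dichotomized_fit = smf.logit("died ~ late", data=df).fit(disp=False)

print(continuous_fit.params)    # log-odds change per additional hour
print(dichotomized_fit.params)  # log-odds change for ">3 hours" vs not
```

Both models will "work", but the second throws away all variation in timing within each group, which is exactly what matters for the incentive problem described next.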

So, we go ahead and fit that model. We see that times greater than 3 hours increase risk of death. We publish the results. Now there are published results that are easy to understand: if you take longer than 3 hours to give antibiotics, you are putting patients at risk!

But…  suppose 3 hours starts to get in people’s heads as some critical time value. Like, 3 hours is some magic number. Maybe if you routinely give antibiotics within the first 30 minutes, you now feel a little less pressure to hurry that much. If it takes an hour or even 2, you are still within the magic range. So, perhaps, the publication of the article and subsequent media attention will lead to a shift in the distribution of times towards 3 hours, with fast units slowing down and slow units speeding up.

If, instead, you had shown a true ‘dose-response,’ then every minute of delay would be viewed as important, and hopefully there would be a general trend toward faster times.

Read Full Post »

At the end of a year, people like to make lists of top movies, books, etc.  What I plan to do instead is write about the things I learned each year. So, here are some brief highlights of things I learned in 2011:

  • Epigenetics, toolkit genes, genetic switches and how most conversations about heritability are flawed.  I learned a lot about imprinted genes from Charlene Lewis (especially BDNF), about toolkit genes from reading Sean Carroll’s Endless Forms Most Beautiful (which I highly recommend) and about all of these topics from (some of) Robert Sapolsky’s lectures on human behavioral biology (which are fantastic, and free on youtube and itunes).
  • Social belonging sits atop the hierarchy of needs.  Sister Y introduced this idea with her blog here: “the need for social belonging is more pressing than the need for food.”  I have noticed that people are far more likely to want to kill (themselves or someone else) when they have been socially shamed, rejected, or ostracized.  NYU Psychology Professor James Gilligan noted: “The emotional cause that I have found just universal among people who commit serious violence, lethal violence is the phenomenon of feeling overwhelmed by feelings of shame and humiliation. I’ve worked with the most violent people our society produces who tend to wind up in our prisons. I’ve been astonished by how almost always I get the same answer when I ask the question—why did you assault or even kill that person? And the answer I would get back in one set of words or another but almost always meaning exactly the same thing would be, ‘Because he disrespected me,’ or ‘He disrespected my mother,’ or my wife, my girlfriend, whatever.”

    In the same program, Pieter Spierenburg pointed out that murder in defense of your reputation used to be viewed as a pretty minor offense: “Originally around 1300 the regular punishment for an honourable killing would be a fine or perhaps a banishment, whereas punishment for a treacherous murder would be execution.”

  • Evidence in favor of our promiscuous past, the most interesting of which is sperm competition.  I was introduced to this topic in Sex at Dawn.
  • Life cycles of parasites.  I learned about this from Robert Sapolsky and This Week in Parasitism.  I particularly love Toxoplasma and fish tapeworm.
  • Lead and crime.  There are a lot of theories about why crime has declined since the 1990s.  These theories include:  legalization of abortion, tougher sentencing, end of crack epidemic, etc.  But I think the most interesting one is the reduction in lead exposure.  Total lead exposure was a non-decreasing function  from 1900 to 1970.  Lead exposure from gasoline increased sharply from 1930 to 1970.   We know that lead exposure, especially chronic exposure, has neurotoxic effects.  It can be particularly damaging to the frontal lobe.  Thus, we would expect that kids who were exposed to lead would be more likely to engage in impulse crimes when they are young adults.   Jessica Reyes documented the link between lead exposure and crime in the US in this paper.   The graph below, taken from her paper, overlays the lead exposure curve and crime rate curve (with a 22 year lag for lead exposure, because 22 is the average age at which violent crimes are committed, so we would expect childhood exposure to lead to have the largest impact approximately 20 years later):

    I think this is pretty compelling, and a fascinating story.  The League of Nations banned lead paint in 1922, but the US failed to adopt the measure.  The US didn’t take serious action until the 1970s.  To this day, lead paint exposure is a serious problem for people living in old homes in large cities.  I would love to see the lead exposure / crime link investigated using data from other countries.
  • Religion. I learned about the history of god, its relation to changes in civilization (how transitions from polytheism to monotheism paralleled changes from foraging to farming, egalitarianism to hierarchy), lots of cool, related neuroscience, etc.  This is work in progress.  Hopefully I will have more to say about it next year.
  • I found Sister Y’s views on nature very insightful.

Read Full Post »

Tink Thomson on The Umbrella Man in Errol Morris’ short film:

The only person under any umbrella in all of Dallas standing right at the location where all the shots come into the limousine.  Can anyone come up with a non-sinister explanation for this?

It does seem weird.  People will naturally ask themselves informal questions such as “could that just be a coincidence?”  We can make the question increasingly formal:

What is the probability that the only person in Dallas holding an umbrella would be standing where the President was shot?

Or even better:

If we were to randomly place the Umbrella Man in one of the locations where a person was standing along the parade route in Dallas, what is the probability that he would end up right where the President was shot?

You will notice that our minds turn a retrospective observation (“hey, there was a guy with an umbrella standing next to the limo.  That’s weird.”) into a prospective randomization question (“if we randomly place the Umbrella Man…”).

We are only asking about the probability of the Umbrella Man standing there, because we already observed that he was standing there.  The observation drove the question.

I suspect that had the President been shot in a different location, we would have identified someone in the crowd who did something that seemed too weird to just be a coincidence.  That’s part of the reason why conspiracy theories are so seductive — there is always some observation that is hard to explain with chance.

I have made this point before, but this example is better than the ones I came up with.

Read Full Post »

As discussed previously, participants in randomized trials are typically blinded to treatment assignment.  This differs from the non-trial setting, where blinding patients to treatment would be considered unethical.  The extent to which uncertainty about treatment assignment affects outcomes is unclear.  Most randomized trials are not designed to deal with this issue.

Informed consent laws prevent researchers from lying to patients about treatment assignment.  However, we can, to a large extent, affect what people believe about treatment assignment via the allocation probability.  For example, if subjects are informed that there is a 50% chance they will receive a placebo, they should believe that they have about a 50% chance of receiving placebo.  Alternatively, if we tell them that 99.999% of subjects will receive the active drug, they should be pretty confident that they will receive the active drug.  In the latter example, we will obtain something pretty close to the counterfactual we want (Y_{0,100%}) on 0.001% of subjects.  Of course, we would need an enormous sample size to observe many people like that.  Thus, there are the usual tradeoffs between bias and efficiency.

My suggestion is to randomize subjects to one of several arms that have different allocation probabilities.  Assuming the causal effects are a smooth function of the allocation probability, we could extrapolate to obtain estimates of E(Y_{1,100%} − Y_{0,100%}).
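As a rough sketch of the extrapolation, suppose (purely as assumptions for illustration) that each subject's confidence of receiving the active drug equals the stated allocation probability, and that arm-level mean outcomes are linear in that probability. Fitting a line to the arm means and evaluating it at 100% then recovers the drug effect net of belief:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arms that differ only in the stated probability of receiving the active drug.
alloc_probs = [0.5, 0.7, 0.9]
n_per_arm = 20_000

# Invented outcome model: a true drug effect plus a belief (placebo-like)
# effect that scales with the subject's confidence of being on the drug.
drug_effect, belief_effect = 1.0, 0.5

treated_means, control_means = [], []
for p in alloc_probs:
    z = rng.binomial(1, p, size=n_per_arm)      # actual assignment
    belief = p                                  # assumption: confidence = stated p
    y = drug_effect * z + belief_effect * belief + rng.normal(size=n_per_arm)
    treated_means.append(y[z == 1].mean())
    control_means.append(y[z == 0].mean())

# Extrapolate arm-level means to an allocation probability of 100%,
# i.e. toward Y_{1,100%} and Y_{0,100%}, assuming linearity in p.
t_slope, t_intercept = np.polyfit(alloc_probs, treated_means, 1)
c_slope, c_intercept = np.polyfit(alloc_probs, control_means, 1)
estimate = (t_slope + t_intercept) - (c_slope + c_intercept)   # both lines at p = 1

print("extrapolated estimate of E(Y_{1,100%} - Y_{0,100%}):", round(estimate, 3))
print("true drug effect in this simulation:", drug_effect)     # belief terms cancel at p = 1
```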

For details, see here, or email for reprint (nequal1@gmail.com).

Read Full Post »

Consider the situation where there are two treatments, T=0 or 1.  Let the variable B denote the subject’s confidence (as a percentage) that they have been assigned treatment T=1.  Finally, let the potential outcome Y_{t,b} be the outcome that would be observed if the subject was actually assigned treatment t and was b% confident that they were assigned T=1.

For example, Y_{1,100%} is the outcome that would be observed if the subject was assigned treatment 1 and was sure that they were assigned treatment 1.  Similarly, Y_{0,0%} is the outcome that would be observed if the subject was assigned treatment 0 and was sure that they were not assigned treatment 1.

I would argue that the causal effect we are most often interested in is Y_{1,100%} − Y_{0,100%}.  That is, the potential outcome if the subject was assigned treatment 1 and was sure they were assigned treatment 1, minus the potential outcome if the subject was assigned treatment 0 but falsely believed they were assigned treatment 1.

To illustrate the idea, imagine that treatment 1 is an active drug and treatment 0 is a placebo.  We are interested in what would happen if the subject believed they were assigned the active drug and did receive the active drug, versus the case where they were assigned placebo but believe it was the active drug.  The difference in these potential outcomes should tell us the effect of the active drug that is not strictly due to knowing that they are taking an active drug.

Using this notation, we can also formally define the placebo effect as Y_{0,100%} − Y_{0,0%} (the difference in potential outcomes if given a placebo, but on the one hand believing it’s an active drug and on the other hand knowing that it’s a placebo).

The problem is that informed consent laws prevent us from directly observing Y_{1,0%} or Y_{0,100%} (because it would require lying to subjects about what treatments they are given).  Typically in randomized trials, only one of the following two potential outcomes is observed for each subject: Y_{1,50%} or Y_{0,50%}.  It is unclear how similar a contrast such as Y_{1,50%} − Y_{0,50%} will be to the contrast we want, Y_{1,100%} − Y_{0,100%}.
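Collecting the contrasts in one place, using the Y_{t,b} notation defined above:

```latex
% Contrasts defined in this series of posts (Y_{t,b}: assigned treatment t,
% b% confident of having been assigned the active treatment).
\begin{align*}
  \text{effect we usually want (drug, net of belief):} \quad & Y_{1,100\%} - Y_{0,100\%} \\
  \text{placebo effect:}                               \quad & Y_{0,100\%} - Y_{0,0\%}   \\
  \text{what a 50/50 blinded trial contrasts:}         \quad & Y_{1,50\%}  - Y_{0,50\%}
\end{align*}
```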

Thus, most randomized trials with human subjects are not even designed to obtain the variables that we are most interested in.

Read Full Post »

The primary criticism of observational studies is that there is no way to know the extent of unmeasured confounding.

Randomized controlled trials (RCTs) have their own limitations.  They often exclude patients with co-morbid conditions and select the most adherent patients using a pre-randomization run-in phase.

However, there is another problem with RCTs, one that is not widely recognized.  Quoting myself in a forthcoming paper (link to abstract (email me for reprint)):

In RCTs patients have uncertainty about what treatment they are receiving. A patient receiving an active drug or therapy might falsely believe that they are receiving the placebo or sham therapy. Outside of the RCT environment, a patient who is prescribed a drug by their physician will be sure that they are receiving the active drug. We would expect placebo effects to be stronger if patients were unaware that they might be given a placebo. Similarly, we might expect active treatments to be more effective if there was no uncertainty about treatment receipt. While there has been great emphasis about the importance of concealing treatment assignment, this concealment creates uncertainty within the patient about treatment assignment.

Treatment uncertainty could affect subjects’ behavior (such as adherence) and subjective well-being.  Given the evidence about placebo effects, it’s not unreasonable to speculate that these uncertainty effects could be substantial.  Further, treatment uncertainty might also affect who is willing to participate in the studies.  For example, patients who want the newest therapy might be unwilling to risk getting randomized to placebo.

In the next post, I will formalize these ideas.  In the final post of this series, I will propose a solution.

Read Full Post »
