Archive for February, 2014

I saw a poster in a school gymnasium with the words “good things happen to people who try.”

Of course, good (and bad) things happen to people at every level of the trying continuum, so that is hardly saying anything.

One could interpret the poster less literally as: “good things happen at a higher rate to people who try compared to people who don’t try.” However, that is a non-interventional statement, and hardly motivating: people who try might be different from people who don’t try in many ways, and it may be those ways, and not the trying itself, that cause good things to happen to them at a higher rate.

So, we can fix the interpretation a little more: “if you change from not trying to trying, good things are more likely to happen to you.” But that implies that the thing you are trying to accomplish, the thing that can be aided with effort, is a good thing. After all, someone who is self-destructive could try harder to self-harm. Similarly, sometimes we want to accomplish things that we think are good, but really are not.

So, my proposed revised poster: “achieving specific goals is more likely to happen if you increase your level of effort; whether or not that is a good thing depends on the degree to which you select goals that are good for you.”


Researchers often dichotomize a continuous-valued variable of interest because it is easier to understand and explain (and you don’t have to worry about pesky little problems like whether or not the relationship is linear).

Consider the following example: suppose the population of interest is people hospitalized due to some specific type of infection. Suppose the research question of interest is whether the time until appropriate antibiotics are given affects the risk of death. The belief is that antibiotics should be given quickly after diagnosis, and delays of even several hours can greatly increase risk of death.

So, we have data on how long it took to give the appropriate antibiotics to the patient, and we have data on whether the patient died due to the infection. Let’s imagine that we will analyze the data using regression models.

We could fit a model with time as a continuous variable. If you include it as a linear term in the model, then the results are interpreted in terms of how specific increases in time affect the risk or odds of death. You would also need to think about whether linearity is reasonable. You could use splines or something similar to allow for non-linearity, but interpretation gets more difficult.
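As a sketch of that interpretation, suppose the relationship really is linear on the log-odds scale. Then the coefficient on time reads as a log odds ratio per hour: every additional hour of delay multiplies the odds of death by the same factor. The simulation below is purely illustrative; the intercept and slope are invented numbers, not estimates from any real data.

```python
import numpy as np

# Hypothetical cohort: time until antibiotics (hours) and death outcome.
# The logistic model and its coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 50_000
time_hr = rng.exponential(scale=3.0, size=n)   # hours until antibiotics

beta0, beta1 = -2.0, 0.2                       # assumed intercept and slope
p_death = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * time_hr)))
died = rng.random(n) < p_death

# Under a linear logit, each extra hour multiplies the odds of death
# by exp(beta1), regardless of the starting time.
odds_ratio_per_hour = np.exp(beta1)
print(f"odds ratio per extra hour of delay: {odds_ratio_per_hour:.3f}")
```

This is what makes the continuous specification interpretable, but it also makes the linearity assumption explicit: if the true curve is not linear in the logit, that single odds ratio misrepresents some part of the time range.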

Alternatively, you could dichotomize. Suppose that about half of the time, patients are given antibiotics within 3 hours of diagnosis. Researchers might be tempted to fit a model with an indicator variable for the exposure (time until antibiotics less than 3 hours: yes or no).
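A minimal sketch of that dichotomized analysis, using simulated data under an assumed smooth dose-response (all numbers invented): the continuous time is collapsed into a yes/no indicator and the two groups are compared.

```python
import numpy as np

# Simulated cohort under an assumed smooth dose-response (invented numbers).
rng = np.random.default_rng(1)
n = 50_000
time_hr = rng.exponential(scale=3.0, size=n)
p_death = 1.0 / (1.0 + np.exp(-(-2.0 + 0.2 * time_hr)))
died = rng.random(n) < p_death

# Dichotomize at the (roughly median) 3-hour mark.
fast = time_hr < 3.0
rate_fast = died[fast].mean()   # death rate when antibiotics given < 3h
rate_slow = died[~fast].mean()  # death rate when antibiotics given >= 3h
print(f"death rate, <3h: {rate_fast:.3f};  >=3h: {rate_slow:.3f}")
```

The two-group comparison is easy to report, but notice what was thrown away: within each group, a 30-minute delay and a 2.9-hour delay are now treated as identical exposures.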

So, we go ahead and fit that model. We see that times greater than 3 hours increase risk of death. We publish the results. Now there are published results that are easy to understand: if you take longer than 3 hours to give antibiotics, you are putting patients at risk!

But…  suppose 3 hours starts to get in people’s heads as some critical time value. Like, 3 hours is some magic number. Maybe if you routinely give antibiotics within the first 30 minutes, you now feel a little less pressure to hurry that much. If it takes an hour or even 2, you are still within the magic range. So, perhaps, the publication of the article and subsequent media attention will lead to a shift in the distribution of times towards 3 hours, with fast units slowing down and slow units speeding up.

If, instead, you had shown a true ‘dose-response,’ then every minute of delay would be viewed as important, and hopefully there would be a general trend toward faster times.
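To make the contrast concrete under the same assumed dose-response (again with invented numbers): a unit that slows from 30 minutes to 2.5 hours is still “fast” under the dichotomy, yet under a smooth risk curve its patients’ risk has genuinely increased.

```python
import numpy as np

def p_death(t_hours):
    """Assumed smooth dose-response (invented): linear on the log-odds scale."""
    return 1.0 / (1.0 + np.exp(-(-2.0 + 0.2 * t_hours)))

# Both times fall in the 'fast' (<3h) category of the dichotomized model,
# yet the assumed continuous curve assigns them different risks:
print(f"risk at 0.5 hours: {p_death(0.5):.3f}")
print(f"risk at 2.5 hours: {p_death(2.5):.3f}")
```

The dichotomized model reports no difference between these two scenarios; the dose-response view makes the cost of the extra two hours visible.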
