
Archive for October, 2014

If someone has a terminal illness, one that will result in a lot of suffering, it seems like pretty much an ethical Pareto improvement to allow them the choice to end their life a little earlier. But then I read this horrifying article in which the author pleads with Brittany Maynard to suffer more, written from a "Jesus loves you and has a plan for you" perspective. I don’t want to talk much about the specific article, because the poor logic and factual errors are as bad as the morality.

But as an example, consider the statement “In your choosing your own death, you are robbing those that love you with the such tenderness, the opportunity of meeting you in your last moments and extending you love in your last breaths.” Obviously, it is only if you choose the time of your death that you can guarantee that your loved ones will be there with you, if that is important to you. Another example: “For two thousand years doctors have lived beside the beautiful stream of protecting life and lovingly meeting patients in their dying with grace.” Except that’s not what happens, in most cases. Apparently this person hasn’t read “Who By Very Slow Decay.”

I know, I know, Christians have a different perspective. They believe God wrote the Bible and took a stand against assisted suicide. So let’s temporarily accept the idea that the Bible is the word of God. Did God really have much to say about this issue?

Suppose you have a terminal illness and there is a pill that will almost surely reduce your pain, but might shorten your life by a tiny amount. To be specific, let’s say there is a 1% chance it will shorten your life by one day. What does the Bible say about that? Since the effect is not deterministic and you are not technically choosing your date of death, is it okay?

Suppose instead there is a 99% chance it will shorten your life by a day. Still okay? Or consider a pill that will either kill you within 24 hours (1% chance) or not affect your lifespan at all (99% chance). Is this okay? What if there is a 99% chance it will kill you within a day? What if a person chooses no treatment rather than some life-extending (but unpleasant) treatment?
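To see how smoothly these cases shade into one another, here is a toy calculation of the expected days of life lost in each scenario. This is purely my own sketch; the 90-day remaining lifespan is a number I made up for illustration, not anything from the Bible or the article.

```python
# Toy model of the pill scenarios above. REMAINING_DAYS is an invented
# assumption: how long the illness would otherwise leave the person.
REMAINING_DAYS = 90

# Expected days of life lost = probability of the bad outcome times the
# days lost if it happens. "Dying within a day" is approximated here as
# losing all but one of the remaining days.
scenarios = {
    "1% chance of losing one day": 0.01 * 1,
    "99% chance of losing one day": 0.99 * 1,
    "1% chance of dying within a day": 0.01 * (REMAINING_DAYS - 1),
    "99% chance of dying within a day": 0.99 * (REMAINING_DAYS - 1),
}

for name, expected_loss in scenarios.items():
    print(f"{name}: expected loss of about {expected_loss:.2f} days")
```

The scenarios differ only in degree. Any rule that permits the first pill but forbids the last has to draw a line somewhere along this continuum, and that is exactly the problem.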

There are so many real-world, everyday trade-offs between quality of life and lifespan. Does the Bible draw a clear line?


Tenure 

(I’m going to begin with a brief discussion of the role of tenure at research universities, because that is what I am familiar with. Tenure elsewhere, such as at public schools, might be a completely different story.)

Viewed from afar (by people who are not professors at research universities): “Tenure needs to be abolished. What happens is professors work hard until they get tenure. After that, they have no incentive to produce. But because they are full professors, they make more money than the people who don’t have tenure and who are still very productive. They also get grants based purely on reputation, even though they are no longer productive. In fact, a lot of the time they hire low-wage postdocs to do the real work. It’s a messed-up system.” (Note that this hypothetical critic knows more about academia than the typical person does, so you can imagine even more far-fetched arguments.)

Viewed from near (by me): The possibility of tenure (a tenure-track position) is primarily used to attract the most promising young researchers. Thus, it should primarily be judged by the difference in performance between the tenure-track hires and the best non-tenure-track people who could have been hired, had tenure track not been an option. The people who get these positions and eventually earn tenure tend to: (a) work very long hours and have a lot of energy (a lot of the really successful faculty seem to not need much sleep; it is very common to send out an email at, say, 1am and get several replies within the hour, even on a weekend); (b) do more work than they are officially paid for (many research projects, grant writing, teaching, committees, mentoring students, giving talks, writing and revising papers, dealing with IRBs, reviewing grants, refereeing for journals, serving on editorial boards); (c) have many high-impact publications and develop their own research teams; (d) enjoy what they are doing. So the people who get tenure tend to be passionate about research, very driven, high energy, and so on. Once they get tenure, that doesn’t just disappear. This is what they like doing.

In my experience (~14 years), these are the differences I see in people before and after they get tenure: (a) they might be a little more outspoken, but not dramatically so (and not most people); (b) they might be more likely to turn down an opportunity (if they are already fully funded, which they probably are); (c) they have many more administrative responsibilities (heading committees, etc.); (d) they do more mentoring; (e) they take on more leadership roles in general, such as PI on a training grant. But all of these have more to do with becoming more senior than with getting tenure. I can’t think of any case where a very productive person became less productive after tenure in a way that isn’t explained by the shift toward administrative effort that comes with seniority.

So when I hear people talk about tenure as some kind of disincentive, I know they must not be familiar with what actually takes place. At the same time, I recognize that the view from afar is perfectly understandable. I see why people would think tenure makes someone lazy.

Electric cars

I remember watching the documentary “Who Killed the Electric Car?” It is about how all of the different stakeholders supposedly worked hard to kill the electric car (GM invested millions in developing these cars but didn’t really want them to succeed!). It was very clearly taking a far view of the auto industry. Not long after viewing the film, I happened to talk to someone who worked in the auto industry at the time. He explained to me all of the ways the documentary got the story completely wrong. For him, hearing an outsider talk about his industry was clearly much the same experience as me hearing outsiders talk about academia. It is so easy to construct stories of organized awfulness when you aren’t actually there.

Big Pharma and FDA

If you look at it from a distance and need to construct a story, it makes sense that the FDA and big pharma would push vaccines on the public for profit. But if you work with people in pharma and/or the FDA, you see that the way things actually work is just far less interesting than that.

Conflicts of Interest

I hear plenty of stories about how, if a professor takes funding from industry, you clearly cannot trust the research. Again, this makes sense viewed from afar. In reality, except in very rare or extreme circumstances, the world is much more boring. In a typical situation, a professor might be offered about $1,000 to teach a one-day course in their area of expertise for some company. This isn’t a bribe to make the company’s products look good; keep in mind that research professors make anywhere from $70,000 to $250,000+ per year, so $1,000 isn’t exactly a powerful bribe (nor is it the goal).

In other cases, a company might want to hire the professor to help with (or lead) a research study involving one of its products. In that case, the university will assist with the legal contract. It will be spelled out that the paper will be published regardless of the results and, obviously, that payment will be given for the work, not the results. University legal is very thorough about this stuff. The way it typically works is that everyone agrees to a protocol, the data are analyzed by people at the university (following the protocol), and the results are published. The world of huge bribes to influence research is just not a prominent aspect of university culture, in my experience.

Knowing is half the battle: There are a lot of organizations that I am not at all familiar with. I should be very careful not to construct some story about what takes place there.


Extremely young humans aren’t able to effectively advocate for themselves. They might not have the language, cognitive ability, knowledge, or physical ability to communicate their needs, leave a dangerous situation, or alert authorities if they are being harmed.

Older adults are often in a similar situation. For example, a nursing home resident might not be able to march over to the local police department to report abuse by a caretaker.

Both the very young and very old are dependent, to a large degree, on living in a culture where their welfare is valued.

So who shapes this culture of care? It is people between the ages of roughly 20 and 70 who really shape the culture. They are the voters, the business leaders, the educators, the activists, and so on. All of these culture-shapers have been very young humans in the past, and none of them has yet been a very old human. Thus, they have already finished one of the two periods of life in which their well-being depends on others. They (possibly) still have the other ahead of them.

Based strictly on the information above, I would have predicted that the culture would advocate more strongly for the old than for the young. Your future self will likely benefit directly from a culture that takes care of older adults, but will not directly benefit from caring for the young. Yet Scott Alexander’s description of nursing homes (quoted below) is pretty consistent with things I have heard:

Most of the doctors I have talked to agree most nursing homes are terrible. I get a steady trickle of psychiatric patients who are perfectly happy to be in the psychiatric hospital but who freak out when I tell them that they seem all better now and it’s time to send them back to their nursing home, saying it’s terrible and they’re abused and neglected and they refuse to go. I very occasionally get elderly patients who have attempted suicide solely because they know doing so will get them out of their nursing home.

At the same time, the West is basically a baby-worship culture. We are horrified by any mistreatment of children. There is cultural pressure for parents to make their kids the central focus of their lives.

If we are going to care more for one group than the other, why babies/kids?

Some ideas:

  • Darwinian reasons: it is more important to care for the young because they, and not older adults, can get genes into the next generation. (I don’t love this argument.)
  • Middle-aged adults are the ones primarily responsible for the care of older adults (their parents, etc.). These middle-aged adults might have already raised kids and now feel done with caretaking. So they just allow themselves to imagine that nursing homes aren’t that bad. (I kind of like this argument.)
  • People don’t think much about their own future as older adults who will need help. They just hope it won’t happen to them and don’t give it much thought. (Probably a contributing factor.)

Other ideas?
