Posts Tagged ‘robin hanson’

I like this post by Megan McArdle.  I agree with many of her philosophical principles, and think it’s a good idea for us to figure out what our principles are, before arguing about health care plans.  Do we disagree on our principles, or do we disagree about which plan will most effectively satisfy these principles?  Here are a few from her list:

  • We have some obligations to future generations, if not necessarily future individuals within those generations.  Extreme thought experiment to clarify the principle:  we cannot strip mine the earth and leave them to die.
  • States have an absolute right to tax their citizens to provide public goods, i.e. goods that are broadly beneficial but non-excludable.  They have a right to enact other laws, such as public health rules, to achieve similar ends.  Both rights are constrained by the basic rights of their citizens.  You may perhaps quarantine Typhoid Mary.  You may not shoot her.
  • Societies have a right to organize themselves to improve the justice of their income distribution.  That organization may include taxation. It may also include property rights, or outlawing behavior like blackmail.
  • Just income distribution is not just a matter of relative position, but also of how the income is acquired, and absolute need.  I do not have any moral claim whatsoever on a dime of Warren Buffett’s fortune, because I have a perfectly adequate lifestyle.  I still wouldn’t have any claim on his fortune if he suddenly got 100 times richer, provided that he acquired that money through means that we regard as licit.
  • Societies should strive to organize themselves so that everyone in the society can, if they desire, acquire the means to provide their basic needs.
  • There is no per-se right to health care, since “health care” is not a thing, but a shifting collection of goods and services with amorphous boundaries.  Health care is a subset of the modern “basic needs” package, and therefore falls under broader distributional justice claims.  No matter what your distributional justice intuitions are, it would be perfectly acceptable, if impractical, to give very sick people the cash required to treat their cancer, and let them blow it on a trip around the world.
  • Taxation should strive to equalize the personal cost of taxation among all members of society, not the dollar amount or the percentage of income.  That is, it is appropriate for Warren Buffett to pay a higher percentage of his income in taxes for shared public goods than I do, because the personal cost of taking 25% of his income is much lower than the personal cost of taking 25% of mine.

Let’s say that we agree with those principles, in particular, that middle and upper class folks should help pay for lower income people to have their basic needs met.

I might add that goods or services for which evidence of benefit is lacking should not count as contributing towards basic needs being met, even if they do in fact help without our knowing it.  For example, middle and upper class folks shouldn't be forced to pay for unproven treatments (though there could be exceptions to this).

When it comes to health care, we spend way more than other nations.  Bob Somerby keeps reminding us about these statistics:

Total spending on health care, per person, 2007
United States: $7290
Canada: $3895
France: $3601
Germany: $3588
United Kingdom: $2992
Italy: $2686
Spain: $2671
Japan: $2581 (2006)

We’re paying more than twice as much, every year, per person.  Yet, we do not seem to have better outcomes.  What is really important is to understand where we are wasting money (because clearly we are).  Unlike other countries, we lose some money to insurance companies making a profit.  We also make it very easy for pharmaceutical companies to extend patent life (e.g., by finding a new indication), which keeps the cost of drugs high.  Finally, I suspect that we overconsume health care products.
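The "more than twice as much" figure can be checked directly against the numbers quoted above; a quick sketch in Python, comparing the US to the simple average of the other seven countries listed:

```python
# Per-person health spending, 2007, from the figures quoted above (USD).
spending = {
    "United States": 7290,
    "Canada": 3895,
    "France": 3601,
    "Germany": 3588,
    "United Kingdom": 2992,
    "Italy": 2686,
    "Spain": 2671,
    "Japan": 2581,  # 2006 figure
}

us = spending.pop("United States")
avg_other = sum(spending.values()) / len(spending)
print(f"Average of the other countries: ${avg_other:,.0f}")
print(f"US-to-average ratio: {us / avg_other:.2f}")  # a bit over 2.3x
```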

It was recently estimated that 46% of treatments have unknown effectiveness.  There are all kinds of ways that treatments might look more effective in research publications than they really are.  See here, here and here for a few examples.  This suggests to me that we are probably spending way too much on useless treatments.  Based largely on the RAND experiment, Robin Hanson argued that medical spending could be cut in half.  The RAND experiment found that people randomized to the full health coverage group spent 40% more on health care, but did not have better outcomes.  While variations in health care spending do not seem to explain differences in outcomes, other types of variations do (lifestyle, environment).  Phillip Longman has a very interesting essay on the topic:

A child born today can expect to live a full 30 years longer than one born in 1900. Improvements in medicine, however, played a surprisingly small role in this achievement. Public health experts agree that it contributed no more than five of those 30 years. This may seem counterintuitive given the attention society pays to medical breakthroughs. But the changes in living and working conditions over the last century are the real reason. American cities at the turn of the last century stank of coal dust, manure, and rotting garbage. Most people still used latrines and outhouses. As recently as 1913, industrial accidents killed 23,000 Americans annually. Milk and meat were often spoiled; the water supply untreated. Trichinellosis, a dangerous parasite found in meat, infected 16 percent of the population, while food-borne bacteria such as salmonella, clostridium, and staphylococcus killed millions, especially children, 10 percent of whom died before their first birthday.

During the first half of the 20th century, living and working conditions improved vastly for most Americans. Workplace fatalities dropped 90 percent. This, combined with public health measures such as mosquito control, quarantines, and food inspections, led to dramatic declines in premature death. In 1900, 194 of every 100,000 U.S. residents died from tuberculosis. By 1940, before the advent of any effective medical treatment, reductions in over-crowded tenements combined with quarantine efforts had reduced the death rate by three-fourths.

Consider the startling difference in mortality between Utah and Nevada. These two contiguous states are similar in demographics, climate, access to health care, and average income. Yet Nevada’s infant mortality rate is 40 percent higher than Utah’s, and Nevada adults face an increased likelihood of premature death. As health-care economists Victor Fuchs and Nathan Rosenberg have pointed out, it’s hard not to attribute much of that difference to the fact that 70 percent of Utah’s population follows the strictures of the Mormon Church, which requires abstinence from tobacco, alcohol, premarital sex, and divorce. Nevada, with its freewheeling, laissez-faire culture, has the highest incidence of smoking-related death in the country; Utah the lowest. Utah has the nation’s highest birthrate, but the lowest incidence of unwed teenage mothers. Culture and behavior seem to trump access to health care in improving human life span.

So, if we believe that we should take care of people’s basic needs, and that our current health care system is inefficient, and that we consume too much health care, this suggests to me a fairly simple solution:  the government provides all of its citizens with a minimal health care package (a type of single payer system).   This includes coverage for catastrophic events and emergency care in general.  If you have a broken leg, you can get it taken care of for free.  If you’re in a car accident, your ambulance ride and life-saving surgery won’t cost you.  No one will ever go into bankruptcy over health care expenses again.  I’d imagine the coverage would also cover annual physicals and some treatments that are known to be effective.  But I’m picturing something far less comprehensive than the coverage currently provided by most employers.  Should taxpayer dollars pay for Prozac when it doesn’t seem to be better than placebo?  Should we pay for cholesterol lowering drugs when the number needed to treat is 100 or more (i.e., if you give the drug to 100 people, one of them will benefit, on average)?  I don’t think so.
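For the number-needed-to-treat figure, the standard definition is NNT = 1 / (absolute risk reduction). A minimal sketch; the event rates below are hypothetical illustrations, not data from any particular trial:

```python
def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical: 3% of untreated patients have a heart attack vs. 2% of treated.
# ARR = 0.01, so you must treat about 100 people to prevent one event.
nnt = number_needed_to_treat(0.03, 0.02)
print(round(nnt))  # 100
```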

Employers could offer additional health insurance, covering more treatments, as a benefit to employees.  Similarly, those who could afford it could choose to purchase additional coverage on their own.

Of course, we live in a world of lobbyists.  How we would prevent lobbyists from getting their favorite useless treatment covered by a government insurance plan is a major challenge.

Another valid concern about a single payer system is that innovation will suffer.  For example, Megan McArdle says:

As long as people don’t know that there are cancer treatments they’re not getting, they’re happy.  Once they find out, satisfaction plunges.  But the reason that people in Britain know about things like herceptin for early stage breast cancer is a robust private market in the US that experiments with this sort of thing.

So in the absence of a robust private US market, my assumption is that the government will focus on the apparent at the expense of the hard-to-measure.  Innovation benefits future constituents who aren’t voting now.  Producing it is very expensive.  On the other hand, cutting costs pleases voters this instant.

We should be concerned about a decrease in incentives for innovation.  It could be that a system that covers everyone saves lives now, but at the cost of innovation.  This could cost future generations many more life years.

However, there are two things to consider.  If we adopted a plan like the one I proposed, there would still be plenty of people purchasing private insurance.  Thus, there would still be financial incentives for new drugs and devices.  Further, the government already funds innovative research.  NIH pays for the basic science, and then Pharma uses the knowledge to develop new drugs.  The government certainly could let companies compete for research money for device and drug development, in the same way that researchers currently compete for NIH dollars.  In my opinion, the old model of funding research without a plan for how the findings could lead to the creation of valuable products is pretty much dead anyway.

There are real tradeoffs between different policies, and we shouldn’t pretend that that is not the case.

Read Full Post »

This is a great post (link).  I highly recommend reading the whole thing.  Here’s a clip:

…Early in our lives we search for a story that fits well with our abilities and opportunities.  In our unstable youth we adjust this story as we learn more, but we reduce those changes as we start to make big life choices, and want to appear stable to our new associates.  But we have real doubts about whether we choose our identity well, doubts that increase as we continue to get more info about our skills and opportunities.

We express our doubt about our chosen identity, and our hope for a better one, as a concern that we haven’t discovered who we “really are.”  We expect many of our associates would tolerate one big identity change even when we are older, if we express it as “finally discovering who we really are.”

Read Full Post »

Here are a few things that I enjoyed reading, but haven’t found the time to write about.

Faith is a post-agricultural concept.  Before you have chiefdoms where the priests are a branch of government, the gods aren’t good, they don’t enforce the chiefdom’s rules, and there’s no penalty for questioning them.

And so the Untheist culture, when it invents science, simply concludes in a very ordinary way that rain turns out to be caused by condensation in clouds rather than rain spirits; and at once they say “Oops” and chuck the old superstitions out the window; because they only got as far as superstitions, and not as far as anti-epistemology.

Of course the Untheists are not inventing new rules to refute God, just applying their standard epistemological guidelines that their civilization developed in the course of rejecting, say, vitalism.  But then that’s just what we rationalist folk claim antitheism is supposed to be, in our own world: a strictly standard analysis of religion which turns out to issue a strong rejection – both epistemically and morally, and not after too much time.  Every antitheist argument is supposed to be a special case of general rules of epistemology and morality which ought to have applications beyond religion – visible in the encounters of science with vitalism, say.

Conscientiousness, i.e., not being lazy, matters about as much as intelligence, i.e., not being stupid.   And it is similarly heritable, i.e., genetic, it is more correlated with gender, and probably similarly correlated with race, class, and ethnicity.  Yet stupidity seems a far more sensitive topic.

The Anne Frank House at Prinsengracht, which has become a tourist attraction and a symbol of Dutch resistance, should also serve as a reminder of Dutch complicity. It’s just that we prefer to remember the past as human triumph rather than human failure.

And so, after the war, the Dutch wrapped themselves in the cloak of Anne Frank and pretended that they, too, were innocent. As such, Van Meegeren becomes not just the story of the self-deception and duplicity of one man, but of an entire nation.

There may be yet one more principle at work – something very simple. The bigger the lie, the more willing we are to believe it.

  • Machine Minds (Michael Vassar’s contribution to Forbes’ AI series)

Dogs care greatly about our welfare. Cats and coyotes care much less. This isn’t because dogs are smarter, dumber or more kindly treated than cats or coyotes. Many types of minds are possible–some care about humans but most are indifferent. What we care about is determined by our structure, which was created by evolution. What artificial minds care about will be determined by their structure, which we will design.

Unfortunately for us, the consequences of an artificial mind’s interests may not be obvious to us before we create it. Evolution caused us to like sex because in nature sex typically leads to offspring. It didn’t anticipate birth control, and so this adaptation fails to achieve its purpose 100% of the time in a modern environment. Likewise, an artificial mind designed to care whether humans smile might force humans to smile through means other than joy and laughter once it gained the means to. Choice of preferences is tricky business.


Read Full Post »

I think it is important to expose kids to a lot of different ideas.  If you are Christian, you shouldn’t prevent your kids from hearing about other religions.  Similarly, as an atheist, I want my kids to be exposed to many different beliefs, with as little prejudice as possible. I don’t presume to be right.  I want them to make up their own mind.

There is another reason to expose your kids to different religious ideas.  Teenage vulnerability:

4. To avoid the “teen epiphany.” Here’s the big one. Struggles with identity, confidence, and countless other issues are a given part of the teen years. Sometimes these struggles generate a genuine personal crisis, at which point religious peers often pose a single question: “Don’t you know about Jesus?” If your child says, “No,” the peer will come back incredulously with, “YOU don’t know JESUS? Omigosh, Jesus is The Answer!” Boom, we have an emotional hijacking. And such hijackings don’t end up in moderate Methodism. This is the moment when nonreligious teens fly all the way across the spectrum to evangelical fundamentalism.

A little knowledge about religion allows the teen to say, “Yeah, I know about Jesus”—and to know that reliable answers to personal problems are better found elsewhere.

Christian ‘youth groups’ are a scary thing.  They attempt to get to kids when they are most vulnerable.  While I want my kids to be exposed to different ideas, I don’t want that first exposure to come from some cult trying to recruit them.

There is still the question of how much exposure we should give to ideas that we don’t agree with.  Robin Hanson asks

So is the principle here that parents should go beyond their simple judgment when choosing to what to expose our kids?  For example, should we let polygamists argue for their way of life directly to our kids?  Should we let pedophiles argue their case directly to our kids?  Or is the principle here that we know we are right and those other parents are wrong, obligating us to make those parents give their kids what we judge best?

We should distinguish here between exposing children to facts about what some people believe and to exposing them to people who are going to try to persuade them.  I want my kids to know what different groups believe and why they believe it.  I don’t necessarily want them to hear the sales pitch directly, though.  At least, not until they have developed their skills as judges of evidence.

I want my children to be well trained as rational thinkers.  I imagine exercises where they have to find the flaw in an argument. Or maybe we could play the Paranoid Debating game.   I’d like them to know about many of the different biases that we are all susceptible to.

I want my children to always be open to the possibility that they are wrong.  No matter what you believe, there are people out there who: (a) are  smarter and more experienced than you; (b) have a belief that contradicts your own; and (c) are just as confident that they are right as you are.  If you keep that in mind, you’ll be more open to different ideas.  It also helps if you think of belief as probability that can be updated, not as a binary fixed state.
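The "belief as a probability that can be updated" idea can be made concrete with Bayes' rule; a minimal sketch with made-up numbers:

```python
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior P(belief | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start 90% confident; then meet a smarter, more experienced person who disagrees.
# Suppose such disagreement is twice as likely if you are wrong as if you are right.
belief = update(prior=0.9, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
print(f"{belief:.2f}")  # confidence drops from 0.90 to 0.82
```

The point is not the particular numbers, which are invented, but that confidence moves continuously with evidence rather than flipping between "believe" and "don't believe."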

Read Full Post »

Charles Bukowski wrote a great short story explaining why he doesn’t write about politics.   In the story, he noted that:

the difference between a Democracy and a Dictatorship is that in a Democracy you vote first and take orders later; in a Dictatorship you don’t have to waste your time voting.

Robin Hanson has suggested that people prefer voting for their rulers, in part, because it’s higher status:

I’ve been saying for years that people prefer democracy mainly because they think it raises their social status – being ruled by a king makes you lower status relative to people who “rule themselves.”  We can’t quite fool ourselves into thinking a king is just a “steward”, but we apparently can think we really rule because we elect our rulers.

In this country, I don’t think we are really even ruled by the people we elect.  Sure, the President and Congress do have some power.  I believe that we are better off with Obama than we would have been with McCain.  However, I think for the most part these people are figureheads.  Or maybe puppets is a better metaphor.  They do have some power, but they’re not really the ones calling the shots.  The people who write policy, the people who have real influence, the people who help decide who can get elected, they’re not voted on.  We didn’t vote to have pharmaceutical companies shape the prescription drug program.  We didn’t vote to give AIPAC power.  We didn’t vote to give Sandy Weill enough influence over Congress to repeal Glass-Steagall.

Do we even want direct democracy?  It appears to me that we are okay with being ruled, as long as we call it Democracy.

Bukowski also said:

are there good guys and bad guys? some that always lie, some that never lie? are there good governments and bad governments?  no, there are only bad governments and worse governments.

He spends some time pointing out examples of how easily citizens of any country can become convinced that their country is killing for freedom, democracy and/or humanity (take your pick!). And you are probably thinking “but our country does only kill for those reasons.”  And people in other countries think the same thing about their governments.  In any case, the people we are killing are just like us, but we don’t see them that way.  We see them as evil.  The Iraqis were killing babies in incubators!  They’re not like us!  Or we see them as victims of their government.  We defended South Vietnam by invading South Vietnam.  The Soviet Union invaded Afghanistan in self-defense. We invaded Panama because Noriega wanted to get our kids high on drugs, instead of high on life.  And so on through a million excuses that should be transparent.  But if there is one industry that is more advanced than any other, it’s P.R.

Read Full Post »

Robin Hanson recently wrote this interesting post about status prudes:

Societies also vary in how “prudish” they are about status talk.  Social status, a shared perception of individual quality, is central to every society.  In some societies, like high school or the ghetto culture as depicted on The Wire, it is mostly OK to directly jockey for status; you can tell someone you are better than them, or that they have a loser car.  In contrast “egalitarian” societies  discourage such talk; such jabs must be made indirectly enough to allow plausible deniability.

It seems to me that people of low social status have nothing to lose by directly challenging someone of equal or higher status.  Conversely, people of the highest social status have a strong incentive to discourage lower status folks from challenging them.  If you were to directly challenge a high social status person, you would be viewed as not sophisticated enough to belong in their crowd.  This prudishness becomes internalized.  You know that to become high status, you must find less impolite ways to move up the ranks.

(In the US, status prudishness seems to be strongly correlated with status itself.  Are there societies where that is not the case?)

What are the implications?  Our prudishness causes us to value people who are effective at signaling positive attributes, whether or not they actually have these attributes.  Thus, we reward style, political savvy and high status accomplishments.  While there is certainly a positive correlation between the ability to signal attributes and possessing the attributes, the correlation is not 1.  For example, people who attend Harvard Law or are good at social networking might be more talented, on average, than others.  However, the more prudish we are, the more we rely on this type of indirect evidence. 

This all naturally leads to the most talented people not necessarily doing the most important work.  I’d imagine it causes an increase in nepotism and less fluidity between the classes.  For example, a child of high status parents has major advantages in obtaining things that we use to infer ability, such as attending top schools.   
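The imperfect-correlation point can be illustrated with a small simulation (all of the numbers here are illustrative assumptions, not estimates): give everyone a true ability and a noisy signal of it, select the top tenth by signal, and check what fraction of the truly top tenth actually gets selected:

```python
import random

random.seed(0)
n = 10_000
ability = [random.gauss(0, 1) for _ in range(n)]
# Signal = ability + noise; correlated with ability, but well below 1.
signal = [a + random.gauss(0, 1) for a in ability]

top = n // 10
by_signal = set(sorted(range(n), key=lambda i: signal[i], reverse=True)[:top])
by_ability = set(sorted(range(n), key=lambda i: ability[i], reverse=True)[:top])

overlap = len(by_signal & by_ability) / top
print(f"Share of the truly top tenth selected by signal: {overlap:.0%}")
```

With noise of the same magnitude as ability, roughly half of the genuinely most able people fail to make the cut, which is the sense in which rewarding signals is not the same as rewarding the underlying attributes.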

As Robin pointed out:

Relative to societies where most people have a say in ranking folks around them, status-prudish societies tend to delegate this ranking task to elites.  This may in fact produce a better society, but it seems odd to call this more “egalitarian”; people still end up being ranked, and the power to set those rankings is concentrated more into elite hands.

Read Full Post »

Striving for rationality requires looking carefully at your decision-making code.  We are all at risk of making suboptimal decisions as a result of biases embedded in our brainware.  How do we identify when we have been influenced by these biases?  I think it is useful to ask yourself various questions about why you believe what you believe.  Consider three examples.

Judging action independent of motive

Everyone is the hero of their own story.  There are very few human beings who intentionally do something cruel.  There are plenty of cruel acts, but most of them are committed by people who have convinced themselves that they are doing good.  

We observe what we consider to be cruel, unfriendly or selfish actions on a regular basis.  Yet, the people who carry out these actions are often unaware that they are doing it.  This leads to an obvious question:  am I one of those people?  

Now, you might argue that I am talking about morality, not rationality.  However, if you have deceived yourself, if you put too much weight on your intentions and not enough on your actions, then you are not in a position to make a rational decision.  It is only in the face of awareness that rationality has a chance.  

This leads to a test of rationality.  Ask yourself the following:

Would the majority of people who observed my actions without knowing my motives view my actions as unkind (or worse)?  

If the answer to that question is ‘yes,’ you had better think carefully about (a) why your motives matter or (b) why you are right and the majority is wrong.

Avoiding the attractiveness of standing out

In Notes from the Underground, the main character argues that people will behave irrationally as a way of demonstrating their free will.  Quoting Wikipedia:  “…one cannot avoid the simple fact that anyone at any time can decide to act against what is considered good, and some will do so simply to validate their existence and to protest that they exist as individuals.”  This is explicitly argued in the book in Section VIII of Part I:

 But I repeat for the hundredth time, there is one case, one only, when man may consciously, purposely, desire what is injurious to himself, what is stupid, very stupid–simply in order to have the right to desire for himself even what is very stupid and not to be bound by an obligation to desire only what is sensible. Of course, this very stupid thing, this caprice of ours, may be in reality, gentlemen, more advantageous for us than anything else on earth, especially in certain cases. And in particular it may be more advantageous than any advantage even when it does us obvious harm, and contradicts the soundest conclusions of our reason concerning our advantage–for in any circumstances it preserves for us what is most precious and most important–that is, our personality, our individuality.

It is not just that we want to demonstrate our free will, but we also want to be noticed.  In Part II of the book, the main character describes an incident at the tavern:

I was standing by the billiard-table and in my ignorance blocking up the way, and he wanted to pass; he took me by the shoulders and without a word–without a warning or explanation–moved me from where I was standing to another spot and passed by as though he had not noticed me. I could have forgiven blows, but I could not forgive his having moved me without noticing me.

We want to stand out.  We want to be noticed.  As Bertrand Russell said in his Nobel lecture:  “Look at me is one of the most fundamental desires of the human heart.”

This leads to another rationality test.  When your beliefs differ from those of the vast majority of people, you should ask yourself whether you simply are expressing your free will (demonstrating your individuality). Ask yourself:  if the majority of people felt the way I do about this, would I find my viewpoint as attractive?  If not, you should seriously consider how much value you are putting on validating your existence.

Avoiding the attractiveness of being part of a group

Related to the above is the attraction of being part of a group that thinks alike.  In-group bias can affect our judgement, not just of those in our group, but of those outside of it.  As Robin Hanson put it:

We feel a deep pleasure from realizing that we believe something in common with our friends, and different from most people.  We feel an even deeper pleasure letting everyone know of this fact.  

This leads to a rationality test.  Ask yourself how much pleasure you get from the fact that your beliefs are in agreement with other members of your group, and differ from members of other groups. How much does that pleasure influence your judgement? Try imagining that your friends had a different opinion.  Would that affect how strongly you feel about the subject?  

In general, getting pleasure from belief should be cause for alarm.

Read Full Post »
