Algorithms work better than doctors

I came across this fantastic paper after reading about Meehl’s 1954 book, “Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence”. It’s a meta-analysis (a review of studies) of the evidence that mechanical prediction techniques, such as fixed questionnaires or algorithms, generate superior predictions compared with those made by clinicians, regardless of their level of expertise.

In plain words, it suggests that simple questionnaires and algorithms arrive at more accurate predictions than clinicians, whether doctors or psychologists. I can’t believe this is not mandatory reading in every university.

The abstract:

The process of making judgments and decisions requires a method for combining data. To compare the accuracy of clinical and mechanical (formal, statistical) data-combination techniques, we performed a meta-analysis on studies of human health and behavior. On average, mechanical-prediction techniques were about 10% more accurate than clinical predictions. Depending on the specific analysis, mechanical prediction substantially outperformed clinical prediction in 33%-47% of studies examined. Although clinical predictions were often as accurate as mechanical predictions, in only a few studies (6%-16%) were they substantially more accurate. Superiority for mechanical-prediction techniques was consistent, regardless of the judgment task, type of judges, judges’ amounts of experience, or the types of data being combined. Clinical predictions performed relatively less well when predictors included clinical interview data. These data indicate that mechanical predictions of human behaviors are equal or superior to clinical prediction methods for a wide range of circumstances.
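To make the idea of “mechanical” data combination concrete, here is a toy sketch (not from the paper): a fixed, pre-specified rule that sums hypothetical questionnaire item scores and compares them against a cutoff. The data, weights, and cutoff are all invented for illustration; the point is only that the same formula is applied identically to every case, with no case-by-case judgment.

```python
# Toy "mechanical prediction": a fixed unit-weighted rule applied to
# hypothetical questionnaire scores. All numbers here are made up.

def mechanical_predict(scores, cutoff=10):
    """Predict positive (1) if the sum of item scores meets the cutoff.
    The rule is fixed in advance and applied the same way to every case."""
    return 1 if sum(scores) >= cutoff else 0

# Hypothetical cases: (questionnaire item scores, actual outcome)
cases = [
    ([4, 3, 5], 1),
    ([1, 2, 2], 0),
    ([5, 4, 4], 1),
    ([2, 1, 3], 0),
]

correct = sum(mechanical_predict(scores) == outcome for scores, outcome in cases)
accuracy = correct / len(cases)
print(accuracy)  # 1.0 on this invented data
```

A clinician, by contrast, might weigh the same items differently from one patient to the next; the meta-analysis compares the accuracy of those two styles of combining the same inputs.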

The full article is available here:


One thought on “Algorithms work better than doctors”

  1. Hmmm… very interesting. The authors’ conclusion that human intuition is woefully bigoted and an appalling statistician is unsurprising, but it’s great to see such a thorough analysis, exactly because we’re biased to believe the opposite. It seems to me that the sooner individuals and society eat a little humble pie and start treating data with the respect it deserves, the better off we will all be! Key question for me: is this bias cultural or biological? How can we change it?

    That said, one critique of this article is that it does not consider the impact on study participants. E.g., hypothesis: equally depressed patients would have worse outcomes if diagnosed (“predicted”) by a machine rather than a human, due to the (culturally perceived?) relative lack of empathy in a computer.

    A final throwaway line and my quick summary of the article: “Quantitative method used to show superiority of quantitative method over qualitative method”. Is there an important implicit methodological assumption here, or could you build an argument with Cox–Jaynes and the paper’s analysis to show that it doesn’t matter?
