It’s one of those intellectual nuggets you tuck away for dinner parties: A recent psychological study finds that people are more likely to express politically conservative values after seeing a U.S. flag. And this one: People are more inclined to support inequality when exposed to money.
Throw these scientific factoids at your conservative neighbor during the next block barbecue. You’re reactionary because you’re bought and paid for, Harry. And these American flag napkins just reinforce your worldview.
The problem with both studies, however, is that other researchers have been unable to replicate them. Worse, this problem may extend to a great many findings from psychological studies.
If you doubt that this matters, consider that as I write this, major publications are gleefully reporting a new study by a Cornell psychology PhD who has found that casual sex can be very good for the emotional well-being of young people. Maybe she’s right, maybe she’s wrong, but I’m wagering nobody will bother to find out, which means you can choose to believe her findings if it suits you.
This is true, increasingly, about a great many things, and it’s a bigger problem than you might think. But more on that later.
Back to the unverifiable research. How widespread is it?
Sticking with the field of psychology, consider an anonymous survey of 2,000 research psychologists, which found that two-thirds selectively reference studies that support their claims, while omitting studies that undermine their hypotheses. A majority, meanwhile, have excluded data from studies when doing so shifted the conclusions in a direction that they preferred. Interesting new findings are more likely to get published, after all.
The crisis isn’t limited to psychology; a major economics journal spent four years requesting replications of significant findings, only to abandon its call because nobody responded.
Here’s a clue about why: a recent analysis of 186 empirical economics studies found that the results of only fourteen could be replicated, in most cases because the authors simply did not make their data available. A lot of those scholars are advising governments about fiscal and monetary policy, and teaching our nation’s future rulers, but otherwise it’s probably no big deal that their research is almost entirely unsubstantiated.
What might be a bigger deal is that similar problems are emerging in medical research.
Stanford epidemiologist John Ioannidis published a paper a few years back titled “Why Most Published Research Findings Are False,” and estimated for The Atlantic that as much as ninety percent of the research on which doctors rely is flawed. Newer analysis indicates, for example, that all manner of treatments once in newspaper headlines (e.g., Vitamin E to ward off heart disease, estrogen to guard against dementia) yield either no benefit or, in some frightening cases, the opposite effect.
Some researchers have developed statistical models indicating that Ioannidis overestimates the problem, but even if they’re correct, they still can’t explain why so much research can’t be replicated.

In response to the replication void, the pharmaceutical companies Amgen and Bayer deployed teams of scientists to assess several dozen prominent medical research claims, but could substantiate just a few. Worse still, Amgen was able to secure cooperation from some of the scientists whose research its teams wanted to analyze only by first signing nondisclosure agreements, which forbid Amgen’s researchers from publicizing any results that contradict the original findings. Amgen admits its researchers could not replicate most of the studies they assessed, but in some cases that important information will never see the light of day.
To give you a sense of the potential magnitude of this problem, in addition to Ioannidis’s claims, a series of recent articles in the top-tier medical journal The Lancet contends that 85 percent of the $200 billion spent annually worldwide on medical research is wasted on flawed or inept studies. Eighty-five percent is not a typo.
Why all the bad research? Inadequate sample sizes; the reality that positive findings are more likely to get published and garner additional research funds; perhaps even the nature of statistical analysis itself, which Ioannidis argues will yield a great many false positives simply because there are vastly more untrue hypotheses than true ones. (That last contention is challenged by those who argue that traditional statistical methods tend to be conservatively biased against false positives.)
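The logic of that last argument can be made concrete with a little Bayes’-rule arithmetic. The sketch below is purely illustrative: the prior probabilities, the conventional 0.05 significance threshold, and 80 percent statistical power are assumptions chosen for the example, not figures taken from Ioannidis. The point it demonstrates is that when most hypotheses a field tests are false, even well-run studies produce a large share of false-positive “findings.”

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is actually true.

    prior -- fraction of tested hypotheses that are true (an assumption)
    power -- probability a true effect reaches significance (assumed 0.8)
    alpha -- false-positive rate for a null effect (conventional 0.05)
    """
    true_positives = prior * power          # true effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects crossing the threshold
    return true_positives / (true_positives + false_positives)

# If half of all tested hypotheses are true, significance is fairly reliable:
print(round(ppv(0.5), 2))   # 0.94
# If only one in ten is true, a third of "findings" are false:
print(round(ppv(0.1), 2))   # 0.64
# If only one in a hundred is true, most published positives are wrong:
print(round(ppv(0.01), 2))  # 0.14
```

Nothing here requires fraud or sloppiness; the arithmetic alone shows how a literature built on long-shot hypotheses fills up with results that won’t replicate.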
But when the Mayo Clinic has to abandon, as it did not very long ago, an estimated ten years of cancer research because it was based on the work of a scientist recently exposed for falsifying his research data, there’s a problem. The problem is compounded when all the incentives line up in favor of new, flashy results, and against the trudging, case-building work that has always been a necessary part of science.
There are encouraging signs—inter-university consortiums of scientists with funding to test the validity of prior research findings, for example. Replication, as it turns out, requires the same nuance, judgment, and attention to critical variables as original research. One Nobel Prize winner recently advocated, in light of this, that replication should not be attempted unless the scientists undertaking it collaborate with the original researchers. There are too many nuanced factors, he says, that impinge upon the results. Science is art as well as formula, seems to be his claim.
The problem runs deeper than some unreliable research, however. It runs deeper than the skewed incentives, the arrogance that leads researchers to inhibit fact-checking, and the breathlessness with which opinion-shapers lay hold of tentative findings—when it suits them—as gospel with which to bludgeon the rest of us into a greater and more pleasing social conformity.
To be continued tomorrow.
Tony Woodlief lives outside Wichita, Kansas, and is the author of a spiritual memoir, Somewhere More Holy. His essays on faith and parenting have appeared in The Wall Street Journal, The London Times, and WORLD Magazine. His short stories, two of which have been nominated for Pushcart prizes, have been published in Image and Ruminate. His website is www.tonywoodlief.com.