According to the scientific method, experimental results need to be replicable. That is, for a study to be scientifically valid, different researchers performing the same experiment should get the same results. But it turns out that a surprisingly large share of published scientific research can’t be reproduced, which casts doubt on its findings. This is called the Replication Crisis.
Up to this point, the major offenders have been in the fields of psychology and other social sciences. (See my post on the subject.) In one test, only 39% of the published studies could be replicated. That’s not surprising. Put two chemicals together and you will get consistent results every time. But human beings are not inert objects. They have agency. Human subjects are far more complicated in their individuality, personalities, and decision-making. You can’t effectively apply scientific controls to all of the human factors. No wonder a psychological experiment on one group of subjects may not yield the same results when applied to another group of different individuals.
But now researchers are finding a replication crisis in the harder sciences. A big offender is the field of medicine. Especially unreliable is research related to diet. (This is why you read one day that coffee/wine/meat/fill in the blank causes cancer. Then you later read that coffee/wine/meat/fill in the blank prevents cancer.)
From Ivan Couronne, Beware those scientific studies — most are wrong, researcher warns:
A few years ago, two researchers took the 50 most-used ingredients in a cookbook and studied how many had been linked with a cancer risk or benefit, based on a variety of studies published in scientific journals.
The result? Forty out of 50, including salt, flour, parsley and sugar. “Is everything we eat associated with cancer?” the researchers wondered in a 2013 article based on their findings.
Their investigation touched on a known but persistent problem in the research world: too few studies have large enough samples to support generalized conclusions.
But pressure on researchers, competition between journals and the media’s insatiable appetite for new studies announcing revolutionary breakthroughs has meant such articles continue to be published.
“The majority of papers that get published, even in serious journals, are pretty sloppy,” said John Ioannidis, professor of medicine at Stanford University, who specializes in the study of scientific studies.
This sworn enemy of bad research published a widely cited article in 2005 entitled: “Why Most Published Research Findings Are False.”
Since then, he says, only limited progress has been made.
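The core arithmetic behind Ioannidis’s argument can be sketched in a few lines: if only a small fraction of the hypotheses scientists test are actually true, and studies are underpowered, then most “statistically significant” findings will be false positives. The numbers below are illustrative assumptions for the sketch, not figures from his paper:

```python
# Illustrative sketch of the false-positive arithmetic behind
# "Why Most Published Research Findings Are False".
# The prior, power, and alpha values are assumed for illustration.

def ppv(prior, power, alpha):
    """Positive predictive value: the fraction of 'significant'
    findings that reflect a real effect."""
    true_positives = prior * power          # real effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects wrongly flagged
    return true_positives / (true_positives + false_positives)

# Suppose 10% of tested hypotheses are true and alpha = 0.05.
well_powered = ppv(prior=0.10, power=0.80, alpha=0.05)
underpowered = ppv(prior=0.10, power=0.20, alpha=0.05)

print(f"Well-powered study: {well_powered:.0%} of positive results are real")
print(f"Underpowered study: {underpowered:.0%} of positive results are real")
```

Even under the favorable assumption of 80% power, only about two-thirds of significant results would be real; with the low power common in practice, most would be false, which is the paper’s central point.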
But, again, human beings are complicated physically as well as psychologically. Some people have allergies; others don’t. Some have immune systems that have been exposed to certain pathogens; others haven’t. Variations in sex, age, weight, and other factors can yield different reactions.
According to a 2016 poll of 1,500 scientists reported in the journal Nature, 70% of them had failed to reproduce at least one other scientist’s experiment (and 50% had failed to reproduce one of their own experiments). Failure to reproduce results differs among disciplines (percentages in parentheses represent failure to reproduce one’s own results):
- chemistry: 87% (64%),
- biology: 77% (60%),
- physics and engineering: 69% (51%),
- medicine: 67% (56%),
- Earth sciences: 64% (41%).

In 2009, 2% of scientists admitted to having falsified studies at least once, and 14% admitted to personally knowing someone who had. Such misconduct was reported more frequently by medical researchers than by others.
The article by John Ioannidis cited above identifies a common pattern of poor research design, incorrect statistical analysis, and inadequate peer review. The profession itself also gets in the way: journals prefer to publish “cutting edge” research, and the “publish or perish” mentality in academia, where most scientific research takes place, pushes scientists away from the unrewarding work of replicating other scientists’ experiments and encourages shortcuts, and sometimes the falsification of data.
None of this should diminish science. It just reminds us that scientists are human.
Photo by National Eye Institute (Laboratory Experiment) [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons