You may remember that the world of academic psychology was recently called into question. One of the main issues was whether many psych studies are repeatable. A study that is reproduced (a replication) with the same or similar results is more robust. Single studies may just be flukes, or the File Drawer Effect (also called Publication Bias) may be at work.
Last August a team of 270 scientists tried to reproduce 100 peer-reviewed studies. Only 39% of them held up. That is to say, 61% of the studies tested are potentially wrong.
The meta-analysis of these replication attempts was produced by a group called the Open Science Collaboration (OSC). Four other researchers, however, have called that analysis “bogus – for a dazzling array of reasons.”
Slate points out two of these problems.
The first—which is what tipped researchers off to the study being not-quite-right in the first place—was statistical. The whole scandal, after all, was over the fact that such a low number of the original 100 studies turned out to be reproducible. But when King, a social scientist and statistician, saw the study, he didn’t think the number looked that low. Yeah, I know, 39 percent sounds really low—but it’s about what social scientists should expect, given the fact that errors could occur either in the original studies or the replicas, says King.
His colleagues agreed, telling him, according to King, “This study is completely unfair—and even irresponsible.”
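King's point, that error can creep in on either side, the original study or the replication, can be sketched with a toy calculation. Note the 0.62 per-study success rate below is purely a number I picked for illustration, not a figure from the study or from King's analysis:

```python
# Illustrative back-of-envelope sketch (an assumption, not King's actual math):
# if an original study and its replication each independently detect a true
# effect with probability p, then BOTH succeeding happens with probability p*p.
# So even decent per-study reliability yields a much lower joint rate.

p = 0.62  # hypothetical per-study success rate, chosen only for illustration

both_succeed = p * p
print(f"Expected replication rate: {both_succeed:.2f}")  # prints 0.38
```

Under that (made-up) assumption, a reported replication rate near 39 percent is roughly what you would expect, which is the flavor of King's objection.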
The second problem was more serious: the OSC did a poor job of replicating the studies. The methodology was often shoddy, the sample sizes too small, or the contexts completely changed, so the reliability of these replications is itself very questionable.
This new commentary will appear in the journal Science on Friday. I look forward to reading it! Any psychologist is welcome to guest post about it!