Fudging Research

“Fudging” is a nice word; “falsifying” is more accurate.

By Ben Goldacre:

Here is a news story about a psychology researcher who has been caught
out manipulating his data.


There is one very interesting aspect to this case: the researcher
regarded the dodgy manipulations he used as completely standard.

Recently there has been a small barrage of research documenting the
high prevalence of statistical misuse in psychology and neuroscience. I
thought it might be useful to briefly draw some of these threads together in
one place. I’m not aiming to berate one corner of academia, just to line up
some context.

Here, brain imaging studies report more positive findings than their
numbers can support:
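One way to see how a literature can report more positive findings than its numbers can support is a back-of-the-envelope excess-significance check. The sketch below uses made-up numbers (40 studies, 36 positives, and an average power of 0.3 are all hypothetical, not taken from the linked paper): if average power is 0.3, only about 12 of 40 studies should come up significant even when every effect under study is real, so 36 positives is wildly improbable by chance.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k significant results if each study has success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical literature: 40 studies, 36 of which report a significant result.
n_studies, n_positive = 40, 36
avg_power = 0.3   # assumed average statistical power of the studies

expected = avg_power * n_studies   # ~12 positives expected even if all effects are real
p_excess = binom_tail(n_positive, n_studies, avg_power)

print(f"expected positives: {expected:.0f}, observed: {n_positive}")
print(f"P(>= {n_positive} positives | power={avg_power}): {p_excess:.2e}")
```

When the observed count of positives is this far above what the studies' own power could deliver, something other than the data is driving what gets reported.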

Here, neuroscience researchers make an elementary statistical error
with very high frequency:

Here, evidence of publication bias, where positive findings are easier
to publish (there’s lots on this, in all fields, in my Bad Pharma book,
out this September):

Here, a 2011 paper showing that flexibility in data collection,
analysis, and reporting systematically increases the rate of false
positives:

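That 2011 result is easy to reproduce in miniature. The toy simulation below (all parameters are mine, not the paper's) draws two groups from the same distribution, tests them, and, if the first look at the data isn't significant, collects more data and looks again. That single "researcher degree of freedom" alone pushes the false-positive rate well above the nominal 5%.

```python
import random
from math import erf, sqrt

def z_test_p(xs, ys):
    """Two-sided z-test for equal means, with variance known to be 1."""
    n, m = len(xs), len(ys)
    z = (sum(xs) / n - sum(ys) / m) / sqrt(1 / n + 1 / m)
    return 1 - erf(abs(z) / sqrt(2))   # = 2 * (1 - Phi(|z|))

def one_experiment(rng, n1=20, n2=20):
    """Test at n=20 per group; if not significant, add 20 more and retest."""
    xs = [rng.gauss(0, 1) for _ in range(n1)]
    ys = [rng.gauss(0, 1) for _ in range(n1)]
    if z_test_p(xs, ys) < 0.05:
        return True                        # stop early, claim an effect
    xs += [rng.gauss(0, 1) for _ in range(n2)]
    ys += [rng.gauss(0, 1) for _ in range(n2)]
    return z_test_p(xs, ys) < 0.05         # peek at the data a second time

rng = random.Random(42)
trials = 5000
false_positives = sum(one_experiment(rng) for _ in range(trials))
# Both groups come from the same distribution, so every rejection is a
# false positive; the extra peek lifts the rate above the nominal 5%.
print(f"false-positive rate: {false_positives / trials:.3f}")
```

With just one optional extra look, the long-run false-positive rate lands around 8% rather than 5%; the paper shows that stacking several such flexibilities can push it far higher.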
Many of these problems occur in all fields of research, especially
medicine (again, I’ve a ton of stuff coming on dodgy medical trials in
September). But it’s good to see that they are being more
systematically assessed and investigated elsewhere, as this is
hopefully the start of a meaningful fix.

It’s my view that the information architecture of academia is flawed
in several interesting ways, not least of which are these: negative
findings go missing in action; and simple statistical errors are
missed by journals, while the to and fro between peer reviewers and
authors is often dominated by petty disputes over the contents of the
discussion section of a paper. And, by god, that’s a “buyer beware”
part of an academic paper anyway, at the best of times…


What Are Your Thoughts?
  • http://rising4air.wordpress.com MikeK

    I was in a conference a few months back, and the organizer made a comment in the midst of an introduction that he was glad the presenter had allowed the data to take the results into conclusions that were not anticipated or expected.

    And I recall thinking, “Why wouldn’t he?” Alas, now I understand…thanks for the post.

  • http://LostCodex.com DRT

    I would be interested in feedback from RJS, Scot, and the other college professors on what I am going to say.

    Now is this also, or even mainly, an indictment of the level of compensation given to those who are researchers in universities? People will lie, cheat, and steal for money. We know that. And the more money there is, the more they will lie, cheat, and steal. In private industry we have the reputation of the corporation to consider, and therefore tend to have certain controls in place that supersede the science.

    But what about in academia? They cannot afford the controls, or can we not afford to not have them?

    If professors and researchers were paid less and we had more of them, or at least different ones … wouldn’t that help?

  • Peter Davids

    One is not surprised that a researcher would think of something as standard practice, whether or not it was standard practice. Certainly there is some evidence that various types of errors, purposeful or not, are far from unknown. But I am talking about dodgy practices being thought of as standard. The researcher must have a rationalization, and that one is not a surprising one. After all, it is not unusual for abusers, both sexual and physical, to think that their practices are standard, or that “she really wanted it.” Such a justification is far easier for less extreme deviations. The long and short of it is that one must look for studies that have been replicated or otherwise been confirmed. One piece of science may just be a mistake (remember the neutrinos going faster than light?). Of course the same thing happens in biblical studies – we just say “many scholars believe” or some other such generalization without really having the facts. It is, as they say, standard practice.

  • CGC

    Hi Everyone,
    I suspect an accurate survey was done. I saw in the news today that 65 percent of Americans believe that Obama would handle a theoretical alien invasion better than Romney. Is this really a voter issue? I want to know which one of the two is going to handle illegal alien immigration reform the best?

  • PLTK

    DRT, there is great variability in compensation to faculty and researchers–while compensation may play a part in this in some areas (perhaps medical research, for example), most published research pays little dividend in monetary form, and even in those areas where it does pay significant dividends, this is for a minority of people. I would put forth that the primary culprit is job security–you need to publish to get tenure. Most faculty don’t really have such great salaries, particularly in the social sciences, where you can get your Ph.D. after 5-8 years of graduate work and start at a salary of perhaps $50,000 to $60,000.

  • http://LostCodex.com DRT

    PLTK, thanks. I guess I naturally throw everything into the pot when I talk about money. It is comp, time off, job security etc. The job security part is especially in my sights these days since I have been unemployed for several months…..but am optimistic.

  • Bev Mitchell

    From a broader perspective:

    Yes, there are problems, in large part due to exceptional competitive pressure on young researchers in a dog-eat-dog environment, and pressure from funding groups who want the answers they paid for. This seems so similar to the problems our young theologians and biblical scholars are having with denominations and church funded schools that I can’t help making the connection (see recent articles on Peter Enns’ site for testimonials).  Unlike the situation in which the poor evangelical scholars find themselves however, the whistle blowers and investigators in the scientific world are often the established institutions and journals. So there is probably more hope for improvement.

    Here are the sad accusations highlighted on this thread, with modifications to include theological and biblical studies in certain quarters, and especially in literature aimed at a lay audience.

    caught manipulating data – willful interpretation bias

    report more positive findings than their numbers can support – overemphasizing the weight of favorite scriptures and proof texting

    making an elementary statistical error with high frequency – systematic avoidance of clear scientific evidence that would have a direct impact on favored interpretation

    publication bias, where positive findings are easier to publish – many (most) religious publishing houses

    flexibility in data collection, analysis, and reporting systematically increases the rate of false positives – repeated bias and “in the bubble” stance leads to theological echo chamber

    All established institutions in which people should be able to put a high degree of trust are in danger of being largely ignored. Only relatively small percentages of the players have to be untrustworthy before the layman begins to distrust the whole enterprise. Or, in evangelical circles, enter some kind of cult-like bubble and contend against all neighboring bubbles.

    Grumble, grumble :(

  • Patrick

    There is now so much job security and money tied up in too many fields for it to be objective, unbiased, peer reviewed science in lots of cases.

  • http://restoringsoul.blogspot.com Ann F-R

    “Confirmation bias” R US… It would seem that some of these errors are the result of inadequate attention to details and checking of methodology and equations, too. Lazy R WE! oof… Wisdom and discernment are necessary, and a fine nose for baloney.

  • RJS

    Ok, I should step in …

    This article is neither unexpected nor damning.

    These things happen and have always happened. Irving Langmuir (a Nobel Prize winner) spoke and wrote on what he called pathological science. I have an old article from Physics Today in my files that I frequently give to students when we discuss such things.

    Occasionally there is also outright fraud, like Stapel (article linked above) and Schon at Bell labs, and I could list a half a dozen others from memory. These are cases where greed and ambition rule.

    When I read a paper – in any field – and when I teach my students to read papers or books, it is always critically. Does the data make sense? Does the interpretation make sense? Does the data support the conclusions? Does anything seem fishy? Are the most significant claims justified? Why should I care? And I could list many more automatic questions. This doesn’t mean I distrust the authors (I do this to articles written by friends as well as strangers), nor does it mean I don’t believe anything I read or hear.

    New sciences are always more prone to errors than more mature fields, and brain imaging, neuroscience, and scientific psychology are all relatively new. Some of these errors are “innocent” and some are not.

  • RJS

    And another comment – I have gotten into big trouble in Christian circles, in churches and other situations, because of this approach. “We” are expected to respect authority by submitting to it or accepting it without real questions, learning at its feet. I have been trained to respect “authority” by testing it and in that way learning from it.

    Commenting on bad arguments is a good way to make one’s self unwelcome.

  • Bev Mitchell

    “Lazy R WE! oof… Wisdom and discernment are necessary, and a fine nose for baloney.” 

    Yes Ann, you are spot on, and the necessities you highlight remind me of Massimo Pigliucci’s great book “Nonsense on Stilts: How to Tell Science from Bunk” (U. Chicago Press, 2010).

  • Scot McKnight

    DRT and others,

    In the Humanities, where one could say the data are not as objective as in the numbers games, what counts is (1) creativity or novelty or discovery, three terms not always distinguished well, and in some cases (2) capacity to sustain the line of thinking. Which means for #2 ideological bent shapes both what is looked for and what is found. The rewards, which aren’t much financially, are prestige and credibility as a scholar.

    In my field, in my career, the major figures have been Martin Hengel, Jimmy Dunn, Dale Allison, Tom Wright, etc. These have distinguished themselves in controlling data and advancing the discussion. Distortion of evidence is found, of course, but it is more often than not non-consideration of evidence. But ours is a game of who tells the best story as much as anything else.

  • Bev Mitchell

    It’s the same in lots of scientific settings. The worst way to begin a paper or a talk is to say, “Well, it’s really quite complicated.”

  • http://LostCodex.com DRT

    Full disclosure – In my senior year engineering lab we developed an automated process to detect defects in Bose loudspeakers. I was responsible for the final write-up, and we all did the experimentation. After the grades were published, one of my partners said he had fudged the numbers a bit at the data-analysis stage to make the results come out better. I feel bad about this to this day, since I did not do anything about it.

    [for those who are interested: we determined the resonance frequency for the selected speaker series, fed it that tone, sampled the sound produced, and measured the absolute magnitude of the primary harmonic compared to the relative size of the next 3 harmonics. I thought, at least, that it was good, since I thought it worked….]
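A minimal sketch of the measurement DRT describes, assuming a 100 Hz resonance and harmonic amplitudes I made up for illustration (this is a reconstruction, not the group's actual code): correlate the sampled output with a complex exponential at each harmonic frequency (a single-bin DFT) and compare the fundamental's magnitude against the next three harmonics.

```python
import cmath
import math

def tone_magnitude(samples, freq, sample_rate):
    """Correlate the signal with a complex exponential at `freq` to get
    that frequency component's amplitude (a single-bin DFT)."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / sample_rate)
              for i, s in enumerate(samples))
    return 2 * abs(acc) / n

# Hypothetical sampled speaker response: a 100 Hz fundamental plus small
# 2nd-4th harmonics standing in for driver distortion.
sample_rate, f0, n = 8000, 100.0, 8000   # 1 second of audio, whole cycles of f0
signal = [math.sin(2 * math.pi * f0 * i / sample_rate)
          + 0.05 * math.sin(2 * math.pi * 2 * f0 * i / sample_rate)
          + 0.02 * math.sin(2 * math.pi * 3 * f0 * i / sample_rate)
          + 0.01 * math.sin(2 * math.pi * 4 * f0 * i / sample_rate)
          for i in range(n)]

fundamental = tone_magnitude(signal, f0, sample_rate)
harmonics = [tone_magnitude(signal, k * f0, sample_rate) for k in (2, 3, 4)]
ratios = [h / fundamental for h in harmonics]
print("harmonic/fundamental ratios:", [round(r, 3) for r in ratios])
```

A pass/fail defect check would then just threshold those ratios: a healthy driver keeps each harmonic a small fraction of the fundamental, and a buzzing or damaged one pushes them up.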

  • http://www.gulfgateyouth.com RKM

    I was a Biology major in college and witnessed a professor manipulate his data so he could keep his grant. Talk about heartbreak for a budding biologist…now I’m a pastor :).