As a reminder, the views in this blog post do not necessarily represent those of Chris Stedman, the other NPS panelists, or any of the organizations with which they are affiliated.
My title hopefully gives you some pause. Atheists have our PR issues, but no one in their right mind could possibly think we're as distrusted as rapists.
But unfortunately, a certain camp of atheists has a contentious relationship with psychological research and surveys in the social sciences. Just do a quick Google search for "atheists are the most reviled minority," and you'll see many hits referencing this 2006 study from the University of Minnesota, which shows that Americans are reluctant to vote for an atheist president, would disapprove of their child marrying an atheist, and don't believe that atheists share their vision for society.
I want to make it clear from the outset that I think these are bad things worth focusing our efforts to change, but I don't want to go into detail about that here; I want to address how badly some atheists misconstrue and misrepresent otherwise legitimate and important research.
So with a mixture of regret, frustration, and incredulity, I read a recent Alternet headline, “religious believers distrust atheists as much as rapists.” Another post on Alternet went on in more detail, and the story was posted to Reddit (with the sensational headline that “rapists are viewed as more moral than atheists“). The Friendly Atheist picked it up shortly thereafter, misreporting this finding. Hemant says, “Somehow, we’re less trusted than even rapists. That’s disheartening, but it really says more about how religious people think than anything about atheists.”
Let me be clear about this: no it doesn’t. The only thing this result says about atheists and believers is that they don’t understand statistics and study design. Let’s take a look at the graph that’s causing so much controversy:
One thing is immediately clear from this graph: the difference between atheists and rapists isn't significant. The skinny lines extending up and down from each bar are called error bars, and they mark the range in which the real value the statistic is meant to estimate lies. Small error bars are generally good; huge error bars are bad. Notice that not only are all of the error bars huge, but the "rapist" and "atheist" error bars overlap substantially. That pretty much guarantees that any difference between the two numbers is statistical noise; the results are "nonsignificant," as the study itself clearly states. This means you can't actually say whether atheists or rapists are "distrusted" more; you can only say that how distrusted each group is lies somewhere within those huge bars, and we don't know which is higher or lower.
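To make the overlap argument concrete, here's a minimal sketch of the logic. The numbers below are made up for illustration and are NOT the paper's actual values; the point is only that when two estimates sit close together with wide, overlapping 95% confidence intervals, a simple test on their difference comes nowhere near significance.

```python
import math

# Hypothetical numbers (NOT from the paper): mean "distrust" estimates
# and standard errors for two target groups.
atheist_mean, atheist_se = 0.48, 0.06
rapist_mean, rapist_se = 0.45, 0.06

def ci95(mean, se):
    """Approximate 95% confidence interval: mean ± 1.96 * SE."""
    return (mean - 1.96 * se, mean + 1.96 * se)

a_lo, a_hi = ci95(atheist_mean, atheist_se)
r_lo, r_hi = ci95(rapist_mean, rapist_se)

# z-test on the difference of the two estimates: with this much overlap,
# |z| is far below the 1.96 cutoff for significance at p < .05.
diff_se = math.sqrt(atheist_se**2 + rapist_se**2)
z = (atheist_mean - rapist_mean) / diff_se

print(f"atheist 95% CI: ({a_lo:.2f}, {a_hi:.2f})")
print(f"rapist  95% CI: ({r_lo:.2f}, {r_hi:.2f})")
print(f"z = {z:.2f} (|z| < 1.96, so not significant)")
```

With these toy numbers the intervals overlap almost entirely and z comes out well under 1, which is exactly the situation the graph shows: the "difference" between the bars is indistinguishable from noise.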
On its face, it might seem equally bad that atheists and rapists aren't "significantly different," but even that rests on features of the study design that may not be reliable. First, the subjects of this study are university students in Canada, and their attitudes may not generalize (one could imagine they'd be more tolerant of Muslims than the general population, for instance). Second, the measure used might not be sensitive enough to pick out the differences we'd want. Perhaps only so many people (around half) would make the implicit error the study measures. If that's the case, then a population that distrusts rapists far more than atheists would look identical to a population that distrusts them equally, because the measure would "ceiling," so to speak. Maybe a more sensitive measure would show atheists at 50 and rapists at 90. We just don't know. It's clear, however, that for a claim this strong, we need better evidence than one implicit measure applied to Canadian students.
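The ceiling worry can be illustrated with a toy simulation. This is a hypothetical model, not the study's actual data: imagine each respondent has a latent distrust level on a 0–100 scale, but the implicit measure only records a binary "made the error / didn't," which we model as triggering whenever latent distrust crosses a threshold. Two groups with very different true distrust levels then produce nearly identical observed rates.

```python
import random

random.seed(0)

def observed_rate(latent_mean, n=10_000, threshold=50, spread=10):
    """Fraction of simulated respondents whose latent distrust
    (normally distributed around latent_mean) crosses the threshold
    at which the binary measure registers the implicit error."""
    hits = sum(random.gauss(latent_mean, spread) > threshold for _ in range(n))
    return hits / n

# Group A: moderately distrusted (latent mean 70).
# Group B: far more distrusted (latent mean 90).
rate_a = observed_rate(latent_mean=70)
rate_b = observed_rate(latent_mean=90)

# Both observed rates land near the top of the scale, so the
# large true difference (70 vs. 90) is nearly invisible.
print(f"group A observed rate: {rate_a:.3f}")
print(f"group B observed rate: {rate_b:.3f}")
```

Despite a 20-point gap in latent distrust, the binary measure reports both groups near its ceiling, which is the scenario in which "not significantly different" tells us very little.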
The authors, Will Gervais, Ara Norenzayan, and Azim F. Shariff, are of course very careful about all of this, making no unjustifiable claims about their results. But that didn't stop their university's press office from being sloppy and sensational, too.
The paper itself is smart, the findings are important, and the measures are clever, so although I disagree with some of the theoretical underpinnings and with how the results are interpreted, I'd definitely recommend checking it out (despite the minor misreport, The Friendly Atheist gives a good rundown of the study). My take, though, is that the finding doesn't measure distrust in any meaningful way that we care about, but maybe that's another post.
Lastly, I'm not blaming people who took the reports of these findings at face value and expressed outrage at what looked like insane bias. I actually first heard about the study from Chris's Twitter, and saw it on a lot of my friends' Facebook feeds. Not everyone knows about p-values and ceiling effects and how to properly read a scientific graph, and they shouldn't have to. Science reporters need to know what they're talking about, and atheists need to stop treating sites like Alternet as trustworthy sources of news.
UPDATE: I’ve expanded on this post, clarified a little, and addressed a common objection here. Thanks–Vlad
Vlad Chituc is a senior at Yale University, studying Psychology and Philosophy with an interest in how we form beliefs (particularly moral and religious), and an interest in metaphysics and moral philosophy on the side. He has served as the Community Service Coordinator and President of the Secular Student Alliance at Yale (formerly the Yale Humanist Society), during which he participated in the Inter-Religious Leaders Council and worked closely with the Yale Chaplain’s Office to foster relationships with liberal members of the Yale religious community. In his spare time, Vlad enjoys listening to hipster bullshit and writing sarcastic articles and music reviews for the Yale Herald.