Because I’ve been working on a paper where this is relevant and it’s fresh in my mind, let me start with an illustrative example (which you can skip by jumping past the line-break if you’re lazy). I don’t talk about this very often, but I work with Dan Ariely at Duke University, which is fucking dope because Dan Ariely is fucking dope and my lab is smart and great and we work on awesome projects. I try to avoid wearing I-work-for-Dan-Ariely on my sleeve because it’s not (usually) relevant to this blog, and I like to let my work stand on its own rather than be propped up by my boss’s best-selling books and TED talks and great research etc. This will be relevant in a minute.
We ran a few studies on how people act when they’re solicited for a bribe. Suppose I bring you into a lab to either play one game that makes you a little money or another game that makes you a lot of money, and we decide which game you play by flipping a coin. Say you get heads—great, you’re in our control condition and you play the game that makes you a lot of money. But suppose you get tails, and I say “aw bummer, but look—my boss isn’t here today and no one really needs to know that you actually flipped tails. I can just say you got heads so you can play the other game and make more money, and in exchange you give me a bit of cash and no one needs to know. Deal?” And suppose you say yes. The game you’re about to play gives you the chance to cheat to earn a bit more money. Do you cheat more, or less, after I ask you for a bribe?
I might say “Easy, they’ll cheat less because of moral balancing.” And I could cite great and interesting research by people like Benoit Monin at Stanford showing that when people do something moral, they feel they’ve done their good deed for the day, so they let themselves cheat a little bit. Or, in reverse (and more relevant here), when people do something wrong, they feel like they need to do something moral to make up for it. We maintain a moral equilibrium in the comfortable space between “saint” and “total asshole,” so that’s what would happen.
So look, I’d say. Moral balancing is a fact, so bribe acceptors would cheat less. And furthermore, given this data, we know that you ought to let yourself cheat a little bit in some small, harmless area before you make a big moral decision. That way you’re sure to do the right thing.
But let’s back up to the bribing study. Do people cheat more or less? I might also say “Easy, they’ll cheat more because of moral consistency.” And I could cite great and interesting research by people like Dan Ariely on “self-herding,” where we use our past decisions and behaviors as a guide for our future ones. So if we do something that helps the environment, we’ll shift our future behavior in line with environment-helping as a way to stay consistent, because no one likes feeling like a fool or a hypocrite. And if we do something immoral, we just say “what the hell, I’ve already cheated a bit, why not cheat more?”
So look, I’d say. Moral consistency is a fact, so bribe acceptors would cheat more. And furthermore, given this data, we know that you ought not to let yourself cheat on any little thing before you make a big moral decision. That way you’re sure to do the right thing.
I hope it’s clear by now that it’s actually not at all easy, and past research isn’t always enough to generalize from one context to another. There are all sorts of things going on in the background that make some studies turn out one way and others turn out another way. Do people morally balance or act morally consistent? Well, it depends—individual differences and environmental factors, like whether people tend to follow rules or focus on outcomes, matter.
This is all to say that it’s too easy to find a study to support any conclusion you want and any behavioral prescription you want. And all of this, by the way, is why we actually ran our study instead of just speculating (I won’t tell you what we actually found, because I probably can’t; look out for our paper). The best we can say, looking at existing studies, is “if it’s more like case X we might expect effect Y to happen, if it’s more like case P we might expect effect Q to happen, but we probably need to run a study to be sure,” and how confident we are that any new case is like case X or P is probably going to determine whether running a new study is a waste of time or not. What you can’t do is just point to either effect Y or Q and say “this effect will also happen in this unrelated context, W” with no work at all to show how W is relevantly similar to X or P.
This was an extended setup, but I think it’s important for illustrating what’s wrong with some cherry-picked findings on the part of Patheos’s Peter Mosley. It’s easy to tell stories and hard to actually work out the theoretical ties that make inferences from the data justified. To Mosley, rude and in-your-face atheism works because:
- Attack ads inducing fear work in political campaigns, sometimes.
- Bucking social norms makes you seem dominant, sometimes.
- Bucking social norms translates to actual power, sometimes.
Mosley spends a lot of time filling in those “sometimes,” and whether or not atheism fits in those “sometimes” isn’t that important to me and I’ll grant him those, though I’m skeptical. Instead, I want to focus on all the other assumptions going on here to somehow hammer these results into supporting anti-theism, because that makes the entire point a non-starter. Why should we assume that atheism is relevantly like a political campaign? Why do we even care about dominance or power? What are we even trying to accomplish, and how is what we learn from these three studies supposed to translate into those results? None of this is clear, and Mosley doesn’t sketch any of it out.
The second and third studies Mosley cites are the easiest to tackle. The finding is about second-order signaling. If there’s a common trend that signals something (say, wealth or intelligence), then intentionally bucking that trend (say, refusing to wear a suit; see: tech startups) signals to others that you’re too good to even have to worry about that shit. For example, smart people use big words and awkward grammatical constructions like “in which” and “with whom” to signal that they’re smart—I say “shit” a lot. See how that works? (And as an aside: grad students in my lab like to brag that the “red-sneaker effect,” as it’s dubbed, was inspired by my boss, who likes to wear unconventional shoes and socks and, as far as I can tell, has never worn a suit in his life. I have no idea whether it’s true or not.)
But of course, as I hope I’ve shown by now, this is all very complicated. All of this is going to depend on what the norm being bucked is, what that norm signals, what bucking that norm is meant to signal, who the audience of bucking that norm is, and on and on. Do we have any reason to suppose that respecting religion is a relevant norm the way dressing professionally is for a professor? Does respecting religion confer any real status? This isn’t obvious to me. So what should bucking the “respect religion” norm even signal? That you, I don’t know, are a rebel? A cultural maven? A ~free thinker~?
Even so, and that’s a big even so, how are these findings supposed to matter? So you’ve established yourself as a powerful, independent maven. That’s great for you, but the connection to anti-theism isn’t exactly immediate (I can be a maven in lots of ways—why about religion?), and it’s not clear what goal it’s even meant to serve. Mosley never spells it out. Unless we think coming across as powerful is important in its own right, which I don’t, or that being seen as powerful might make you more likely to persuade an ideological opponent (which is plausible, but far from demonstrated; you can be high on competence and low on warmth, and not much is going to happen except people will think you’re an ass and socially exclude you), I don’t see how these studies support an anti-theistic position.
But let’s look at the first study Mosley cites, because it exemplifies all these problems in more detail. Here’s what he says:
According to this article from the American Psychological Association, positive campaign ads and negative campaigns do work — but in different ways. If you don’t want people to really research your argument but just take it for granted, and if you want to reinforce the views people already have, a positive campaign ad will often do the trick. It’s reassuring. However, if you want to change minds and unsettle the status quo a negative campaign ad will be more useful.
He goes on:
If we are going to win the current culture war, then, it seems that atheists are helped by a negative campaign that helps others think. Positive discussion concerning atheism is going to invigorate our base, but it will do little to get the religious majority to reconsider their own religious beliefs and think about atheism. At the same time, too much of a focus on negative discussion regarding religion among atheists might eventually, ironically, alienate a base that is looking to be reinforced and reassured, that desires more positive reinforcement…which might explain some of the weariness some of the more long-time atheists often seem to express regarding the negative attacks on religion that atheist groups bring up regularly among themselves.
There are a few immediate reasons we should suspect Mosley’s conclusion, looking at the article he links to and the article that article is about. First, we can see that ads barely matter—a 1,000-ad advantage leads to a 0.5% increase in voting, which might matter for a tight election but little else. At the very least, it’s hardly the stuff of great social upheaval. Next, the effect he references was about feeling fear, not being rude. And lastly, we only know that fear makes people interested in more information, and only in information specifically relevant to the threat.
So how is all of this going to play out in atheism? It’s not like we’re running a tight election, and getting people scared (anti-theists don’t typically advocate scare tactics, as far as I know) isn’t going to make them go out and research the questions underlying the beliefs that are fundamental to their identity and worldview. Here’s what Mosley said:
People may be vocal about rude, negative arguments, but the fact is that, like it or not, they can be quite effective in several situations.
But Mosley hasn’t shown this at all—he’s only shown that inducing fear in political contexts can impact some voting behavior and intentions to look at information. He’s passing off a finding as relevant in a completely different context to make a behavioral prescription, with no work at all to show how the inferences involved there are justified.
After all, why should we expect a conflict between atheism and religion to be anything like a political race? Political races are often two entrenched ideological views that both rely on an unswayed middle to win them elections, but that’s not what atheism is like at all. There’s no reason in the slightest to suppose these findings might generalize in a way we’d care about.
Even if it were a closer analogy, suppose it showed that a party with a small fraction of the votes—say the Green Party—was more effective with attack ads than positive ads. Even then, we wouldn’t have any reason to accept it. How does the effect work? Who is it persuading? Is it rallying people who already agree with you, a few Green Party members who were too lazy for political action earlier? Is it stoking the moderates and undecideds, who have no stake in the issue yet? Is it genuinely persuading anyone who is ideologically entrenched and opposed? None of this is clear, and the only alternative that is relevant for anti-theism is this last explanation. Even more, what even are the goals of atheist activism? Are they anything like the goals of political ads? Mosley doesn’t say what his goals are, but “a slight shift in stated voting behavior” doesn’t seem like it’s on that list.
But this gets to an even deeper problem. Sure, Mosley might be able to tell a story about how all of this might work out, but you could do that for anything. What matters is whether we have any reasons to think it might be the case, and we don’t. What we do have, though, is lots of data about situations much closer in relevance to atheism than political elections are. Maria Konnikova surveys this literature beautifully. We have every theoretical reason in the world to suppose that anti-theistic rhetoric is going to be next-to-useless, and a few tortured interpretations of a few studies shoehorned to support anti-theism isn’t going to cut it.
Here’s what we know works for changing closely held beliefs, at least a little bit: self-affirmation. People’s beliefs don’t just abstractly track what they think is true—they’re functional. Beliefs are integral to who we are and what we stand for, and our identities often depend on them. If our identity and values are threatened, especially by someone loud and rude, we’re not going to reassess our values and identity and say “I’m going to join his team.” We’re going to write that person off as an asshole and get more entrenched in our views, which is exactly what happens (e.g., this one example of many). If we’re secure in our identities, if we’re not threatened, if we’re feeling great and competent—only then are we going to be open to potentially threatening information.
There are no studies that show that ideologically entrenched views on a deeply personal and important topic are amenable to aggressive challenge. Every study we know of shows exactly the opposite, so it’s going to take a lot more than what Mosley has given us to argue that New Atheism works.
And since I’ve already gone on too long (see: behavioral consistency), now’s a good chance to mention that I used to be a New Atheist. I really did, as hard as that might be to believe. But then I actually studied the relevant psychological data, and I haven’t met a single social psychologist or sociologist or scholar in any relevant field who thinks the basic claims of New Atheism are true. And I’ve met a lot of people in relevant fields, almost all of whom are atheists. Richard Dawkins and Sam Harris have been running jokes in more than a few psychology and philosophy classes I’ve been in.
So what we’re left with is a New Atheism of the gaps, shoehorning the efficacy of New Atheism into places where it doesn’t fit and where we can’t test it (“oh no, New Atheism works over time, so you can’t pick it up with individual studies,” or “New Atheism is really about convincing the fence-sitters, even though there’s no real relevant data showing that works,” and “I know this guy who was totally convinced by being made fun of, so that should probably take precedence over all the available scientific data,” and so on). And of course, the academy is just too politically correct and intellectually meek to say the truth we all believe so loudly and bravely (as I’ve heard some New Atheists suggest).
But I don’t believe it. I don’t just think New Atheism doesn’t work, I think it’s a fundamentally misguided and reductive way of understanding religion, religious beliefs, and their role in the world. And it’s going to take some real evidence to change my mind.