The Freethinkers’ Political Textbook – What Do I Mean by “Evidence”?

The Freethinkers’ Political Textbook is a guide to evidence-based activism for atheists, agnostics, Humanists, and all freethinkers, designed to help us communicate better. Current posts in the series include:

The Freethinkers’ Political Textbook – Evidence-Based Atheist Activism
Go for the Gut – On the role of emotion in persuasion

Framing for Freethinkers – Introducing Lakoff’s concept of “Framing”
Mr. Jefferson, Reframe that Wall! – Changing the language of secularism to win more to our cause
Know the Audience – The single most important element of any persuasive campaign
Steel, Velvet, and the Honorable Duelist – Questioning the “Accommodationist/Firebrand” dichotomy
Logos, Ethos, Pathos – Aristotle’s classic persuasive tripod


  • There are many ways to generate relevant evidence we might use to inform our freethought activism.
  • Experimental evidence evaluating the effectiveness of campaign materials or whole campaigns is valuable, but very difficult to acquire.
  • By contrast, there is a large amount of evidence, from psychological studies, about what makes for effective communication in general.
  • Therefore, we should at the very least use such evidence to design our campaigns.
What Do I Mean by “Evidence”?

When writing about “evidence-based activism” it makes sense to ask: what do I mean by “evidence”? What sort of evidence should we look to in order to improve our efforts at activism and communication? It’s a good question, because there are many different ways of gathering relevant evidence, some much more valuable than others, and some much easier to get than others. In this post I explore three types of experimental evidence we might seek in order to improve our campaigns, evaluating the strengths and weaknesses of each, and explaining my approach in the rest of the textbook.

Experimental Evaluation of Individual Campaign Materials

One useful sort of evidence we could hope for when trying to learn what works in the field of communication would be experimental studies which evaluate the effectiveness of individual campaign techniques. Let’s say we were trying to design a great billboard campaign which reduces negative sentiments toward atheists. In this instance an experimental evaluation of a campaign material would focus on the billboard itself. It would mean exposing one representative sample of the target audience to one billboard, another similarly-representative sample of the target audience to another billboard, perhaps a third to a control stimulus, and then evaluating the change in attitudes in each group to determine which was most effective in achieving the desired result. This would be an experimental evaluation of a given campaign material.
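The core of the design described above is a between-groups comparison: measure attitude change in each group, then ask whether the difference between a billboard group and the control is larger than chance would suggest. As a minimal sketch of that analysis (the data here are entirely hypothetical, and Welch’s t statistic is just one standard way to compare two independent groups):

```python
import math
import statistics

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples.

    Here each sample holds attitude-change scores (post-exposure
    rating minus pre-exposure rating, one per participant).
    A larger absolute t suggests a more reliable group difference.
    """
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    standard_error = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
    return (mean_a - mean_b) / standard_error

# Hypothetical attitude-change scores for each condition:
billboard_a = [2, 1, 3, 0, 2, 1, 2, 3]   # saw billboard design A
billboard_b = [0, 1, 0, -1, 1, 0, 1, 0]  # saw billboard design B
control     = [0, 0, 1, -1, 0, 0, 0, 1]  # saw a neutral stimulus

print("A vs control:", welch_t(billboard_a, control))
print("B vs control:", welch_t(billboard_b, control))
```

In this made-up dataset, design A produces a clearly larger shift relative to control than design B does, which is exactly the kind of comparison such an experiment is built to deliver. A real study would, of course, also need proper sampling, pre-registration of outcome measures, and a p-value or confidence interval rather than a bare statistic.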

Such an experiment would be valuable because it would provide a test-run of a given campaign with representative audiences, and would enable you to gauge likely reactions beforehand. It would also enable you to test many different possible campaigns against each other, to see which worked best. However, it is extremely expensive and time-consuming to perform, and doesn’t easily allow you to judge the effect of your campaign on audiences you are not targeting (since your sample is presumably chosen to be representative of your target audience, your non-target audience is not represented). This is problematic when the area of the campaign is a highly contested one like religion, since the reaction of audiences you are not targeting can significantly affect how your campaign is perceived. You could try to get around this problem by widening the scope of your sample audience, but as you do so you both dilute the information you are getting on your target audience and increase the time and money such a study costs. Also, studies undertaken in an experimental setting suffer from a lack of ecological validity: the setting in which the experiment takes place is significantly different from the real-world settings in which the billboards (for instance) will be placed. This reduces the confidence we might have in our findings.

Finally, there are also problems with this approach because it requires that you specify discrete, measurable outcomes you hope your campaign will have. While this may seem desirable (and, in many instances, is desirable) it is problematic when a campaign has the goal of changing people’s minds about an issue or changing their feelings about a group. If I’m selling a product, then the success of an ad campaign can be measured, in part, by the change in sales of my product. If I’m promoting a website, I could look at the effect on hits. If I’m encouraging people to come to a conference or join a group, I can look at attendance (though note, in all three of these cases, I have no easy way to separate out the effects of my campaign from the effects of other factors which might coincide with it – a significant problem). But what if my goal is to get people to feel more kindly toward atheists, or get atheists to identify as atheists, or stop “closet atheists” from pretending to be religious? How do I measure that? It’s an extremely difficult problem, which makes studies of this sort very challenging.

Experimental Evaluation of a Whole Campaign

Alternatively, one could run a full billboard campaign in one area, and a similar campaign (but different in key respects) in another area (chosen to be as close as possible in terms of relevant characteristics to the first), and evaluate which of the campaigns was more successful using pre-determined outcome measures. This approach has the advantage of high ecological validity (since you actually ran the campaign), and would, in theory, enable you to evaluate the effect of your campaign on all sorts of audiences you might miss in the scenario above. Political polling sometimes takes a similar form: particular phrases or images will be tested with sample audiences to see which might work best. This would be an experimental evaluation of a whole campaign.

However, this approach has all sorts of problems, too. First, how are you going to gather a representative sample of people who have seen your billboards? This seems like a difficult task: you could stand on the street and ask people if they saw them, but there’s no guarantee the people you meet will be a representative sample. You could select a sample beforehand on the understanding that they will pass the billboard during their daily routine – but then you’ve prepared the sample in a way which is likely to affect their reaction. You could, perhaps, determine how many people from the area of the billboard took a certain action, such as donating to your organization or visiting your website – but those are proxies for your real goal which is, in this instance, changing hearts and minds. Evaluating these campaigns is not easy! And, as above, the problem of identifying clear, measurable goals persists.

Experimental Evaluations of Communication Techniques

Finally, we could seek to experimentally test the effectiveness of individual communication techniques. In this instance, instead of showing our target groups different billboards, we might show them only the different fonts we are considering, and see what reactions they have to those fonts. We could then try different colors, different phrases, different visual styles, and so on, slowly building up a picture of how our target audience might respond to different sorts of stimuli, and then use that knowledge to design our campaigns. Eventually we would hope to narrow down our experiments until we are essentially doing psychological experiments testing what is, in fact, persuasive to people in general, or what helps people learn or change their minds in general.

This approach has numerous benefits. First, it gives us a repertoire of effective approaches which, if we select them judiciously and test them thoroughly, we can expect to be effective in a variety of circumstances beyond our current campaign. Like a painter’s palette, this approach leaves us with a series of beautiful “paints” we can use in future communication efforts with a reasonable expectation of success, even if we don’t get to directly test the success of any particular campaign. By building a clearer picture of how the human animal tends to communicate, and what levers we might pull to get a desired response, we can endeavor in our campaigns to work with the grain of human psychology, instead of against it.

Furthermore, this approach helps us determine why certain campaigns tend to be more successful than others. While the above strategies have the benefit of telling you whether campaign X succeeded or not (perhaps), they don’t always offer much insight into why it was successful. This approach does, and it enables you to do some initial evaluation of campaigns even when no concrete data is available as to how they performed, by asking “Is it likely, given what we know of effective communication in general, that this campaign was successful?”

Finally, this approach is relatively easy to undertake, given that the sorts of studies which must be performed to generate the required information are standard psychological studies of the sort carried out in universities the world over. Indeed, the science of effective communication is extremely well-studied, given the obvious commercial benefits of communicating well with an audience. There are countless studies examining basic questions regarding how to communicate effectively which only take time and dedication – resources we do have – to understand.

Of course, there are drawbacks here, too. Particularly concerning is that, while you can use the fruits of this research to design a particular campaign with some confidence that it is likely to be more successful than campaigns developed without consideration for such insights, you are still left with no guarantee that your particular campaign will in fact be effective. Nor does this approach generate any data on whether the campaign was actually effective after the fact – unless it is combined with the sorts of studies described above. Finally, there are questions as to the universality of the findings delivered by such studies: most studies conducted at universities use a subset of the population which may not be representative of people in general, and there are numerous cultural and other contextual factors at play whenever you attempt to communicate. This leads to warranted skepticism regarding the findings of research into effective communication.

So What Should We Do?

Ideally, those wishing to communicate effectively with the public should use all three approaches: use basic understanding of human psychology to design a set of different potential campaigns, then test those campaigns with representative samples of the target audience, then post-test after the campaign to see how effective it was at achieving specific goals. However, it is frequently simply unfeasible to perform experimental tests of different campaign materials, or to evaluate whole campaigns: non-profit organizations (like most freethought organizations) have few resources to conduct such testing, and the metrics they might want to measure are frequently extremely difficult to measure in a systematic way. So what are we to do?

My position is simple: we should learn as much as we can about the basics of human psychology as it concerns effective communication and, at the very least, design our campaigns with that in mind. If we do this, we will at least avoid designing campaigns which we have good reason to believe won’t be effective, because they go against the grain of how human beings generally think and react. We will also be using techniques which have, at least in certain controlled contexts, been shown to work reliably. An example would be recognizing that research shows people are more likely to be persuaded by communicators who share important characteristics with them, and that, when communicating, we should therefore seek to demonstrate our similarity with the target audience rather than emphasizing our dissimilarity. Such principles – principles of effective communication – seem to hold across many cultures and contexts, and therefore make for useful guides as to how to conduct effective campaigns.

In all endeavors freethinkers should be skeptical and apply critical thought to their techniques and ideas. Where possible, we should be guided by evidence. In the case of activism and communication, some of our best and most freely-available evidence comes from studies into the general characteristics of effective communication. To overlook such evidence because it is not particular to a given campaign would be irrational – and a huge mistake. Evidence from direct studies of the effectiveness of freethought campaigns is extremely thin on the ground – I can think of no example of a campaign which was evaluated through experimental testing. That is why the Freethinkers’ Political Textbook relies primarily on this basic research into effective communication – the third kind of evidence I describe.

"I invite the author to look up the horrors of Democratic Kampuchea, and how the ..."

I Used to Think Like Dave ..."
"For the same reason that religious rejectionists are prejudiced against religion and actually are repulsed ..."

Why Are People Prejudiced Against Atheists?
"You made a very fair and honest observation, and yet the conclusions are tricky. If ..."

I Used to Think Like Dave ..."
"Has Rubin ever actually said that structural inequalities, and power balances don't exist? Has he ..."

I Used to Think Like Dave ..."

Browse Our Archives

Follow Us!

What Are Your Thoughts?leave a comment