Statistics interpret the world we see, and they make the world we know. When we discuss social problems, we measure them by figures purporting to describe how they rise and fall, how they become “epidemics.” Statistics are inevitably cited when we discuss such issues as homelessness, inequality, sexual abuse, harassment, rape, drug abuse, terrorism, or hate crime. Similarly, we measure religious trends through numbers – growing or shrinking congregations, processes of secularization or revival. Much of my career has involved discussing and unpacking social statistics, understanding how they are compiled and what they can and cannot tell us. I’ll be devoting a couple of posts to these topics, and suggesting some guidelines for responding to quantitative claims about almost any topic. Before we can think about religious statistics, then, let’s establish some ground rules.
When we read any statistical claim, several questions should strike us immediately if we are to decide whether those numbers are credible or reliable. We always need to ask how the individual data points were reported and collected, and how the numbers were compiled. Specifically,
Who is doing the counting?
What are they counting?
Are those agencies or groups in a position to observe and assess reality?
Do they have motives to interpret the stats in particular ways?
What can the raw numbers tell us about attitudes or behaviors?
To illustrate the kind of problems I will be exploring, I’ll draw some examples from the study of suicide, which makes my points quite well.
Let me begin with a legendary sociological horror story. Sociology textbooks usually begin by discussing concepts and definitions of community, and how these change with the onset of modernity. For many years, one of the examples most commonly used to illustrate these ideas involved suicide statistics, and particularly the stark contrast separating conditions in Ireland and Sweden. Way back when – in the 1970s, say – these countries stood at opposite poles in terms of modernity and religiosity. And Ireland’s suicide statistics were extremely low, while Sweden’s were very high.
Aha, noted the textbooks, the reasons for this are obvious. (I stress I am reporting attitudes and conditions from decades past, not present day). Ireland is a traditional religious society, an organic community rooted in the village and the extended family, where everyone feels they have a place. People just don’t commit suicide. Sweden, in contrast, is an advanced secular society, based on values of individualism. People feel anomie, they are cut off from the world, they have few close family ties, and they face existential crises that lead them to kill themselves. Unlike the pious Irish, secular Swedes lack the religious inhibitions that might prevent them from taking that step. QED. Now on to our next textbook example …
Well, not exactly. After many years of seeing such examples in print, some scholars asked just what statistics the claims were actually based on. Statistics, after all, don’t measure behavior, they measure recorded behavior. Imagine a Swedish case where a man has died, and beside him are a bottle of pills and a note explaining his decision to end his life. The police are sympathetic, but record the act as a presumed suicide, and that is confirmed by a brief investigation. We have our statistic. Now imagine the same event in the Ireland of that era, say in 1965. Suicide in this society is a mortal sin, a horrendous blot on the record of any family or community, and the authorities will work very hard indeed to find any conceivable way to avoid the obvious interpretation. Perhaps the police will discreetly burn the suicide note, or else pass it to the local Catholic priest for his counsel, and he will suppress the damning evidence. In saying this, I am suggesting no sinister motives whatever: the authorities were working together for benevolent purposes, to protect family and community. But the result was that the local coroner would have no reason to return a verdict of suicide, and would instead report a tragic accidental overdose. There is no suicide to report, or to count.
The same act, in other words, would have one statistical outcome in one society, and a totally different one in another. Before an act or event can become a statistic, someone has to identify it as falling within a particular category, and someone has to report it. And someone – possibly a totally different person – has to record it officially, giving it a particular label.
So were suicides more common in Sweden or Ireland in these years? On the available evidence, we simply cannot say: we have no idea. For all we know, the Irish might have been killing themselves at a far higher rate than the Swedes. But going further, it is impossible to make any determination whatever about that question. There are things that we don’t know, but also things that we literally cannot know, and few social analyses accept that latter possibility.
As one unknowable example of many, just think of the obstacles that prevent us from knowing just how many doctors in a past society have practiced euthanasia, once or a hundred times.
The suicide case is useful because compiling any such statistics involves some knowledge of intent, and of attitudes. Think of some other modern-day examples. We regularly hear these days about the reportedly very high rate of suicide among doctors and medical professionals – an “epidemic,” in fact. What is it about those jobs that makes people so despairing, so likely to end their lives?
But think it through. Compare (for instance) doctors with lawyers. Both categories might have an equal proportion of miserable and depressed people, some of whom decide to end their lives. (I have no idea what the actual rates of depression or misery might be.) Doctors, though, know exactly how to kill themselves in the quickest and most painless way, and they have immediate access to the drugs that can achieve this effect. Lawyers generally don’t. Some lawyers will succeed in killing themselves, but many others will fail, perhaps repeatedly, in attempts to slash their wrists or overdose on sleeping pills. The doctor dies, the lawyer survives. Of course, then, recorded suicide rates are vastly higher for doctors than for other groups, but that says nothing whatever about factors like the stress or depression they might face. We are measuring opportunities, not attitudes.
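To make the point concrete, here is a minimal sketch in Python using entirely made-up numbers: both hypothetical groups are given the same underlying rate of suicide attempts, and differ only in how often an attempt proves fatal. The attempt rate and lethality figures are assumptions for illustration, not estimates of anything real.

```python
import random

# A minimal sketch with invented numbers: both groups share the SAME
# underlying attempt rate; they differ only in how often an attempt is fatal.
random.seed(42)

GROUP_SIZE = 100_000
ATTEMPT_RATE = 0.002                  # hypothetical, identical for both groups
LETHALITY = {"doctors": 0.85,         # hypothetical: reliable means, easy access
             "lawyers": 0.15}         # hypothetical: less reliable methods

for group, lethality in LETHALITY.items():
    deaths = sum(
        1 for _ in range(GROUP_SIZE)
        if random.random() < ATTEMPT_RATE and random.random() < lethality
    )
    print(f"{group}: recorded suicides per 100,000 = {deaths}")

# The recorded rates differ several-fold, yet the underlying distress
# (the attempt rate) was set equal by construction: the statistic is
# measuring opportunity and method, not attitude.
```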
The lesson here: You can record a raw fact, but don’t try to interpret it unless you know exactly what the figures are recording.
Police offer another useful example. Anecdotal evidence suggests that police officers kill themselves at a rate far higher than official statistics indicate. The reasons for the gulf are plain enough when you consider who interprets and reports the event. When an officer is found dead in circumstances that obviously point to suicide, other officers will likely do anything they can to avoid that finding, which could taint the department and, above all, prevent the dead person’s family from receiving benefits. Such concealment is illegal, but it would fall within the bounds of unofficial police culture. That is absolutely not a complaint about any police malfeasance, still less a conspiracy theory: it is a comment on human nature, and human decency. And unfortunately, human beings compile the data. Even worse, they are the ones who decide what constitutes data, and what does not.
Human nature may not change, but the tendency to conceal or withhold information certainly does evolve over time. Professional subcultures of whatever kind might favor benevolent discretion, but they can be forced into the open by a changing legal environment, for example by mandatory reporting laws. Shifting media attitudes can also make secrecy much more difficult. If you want to understand changing social statistics – in other words, the history of a given problem or situation – you absolutely have to understand those media and legal contexts.
This is a vast topic, but let me just say here how impossible it is to get around these obstacles by quantitative means alone. I have spoken of the critical role played by doctors, police and coroners, the gatekeepers of quantitative truth. If you want to know about police behavior and professional values, or the workings of doctors or coroners, then the only way to proceed is by qualitative research – by interviews and ethnography. If you want to assess trends over time, then it means getting into historical methodologies, the use of qualitative and often literary evidence. That is all assuredly scientific, in that the findings are or should be testable, replicable, and falsifiable. But a widespread academic prejudice holds that such “soft” methods are nowhere near as good as “real” hard statistics. And “prejudice” it is.
To address another very common problem in reading statistics: Correlation does not equal causation. Just because two graphs roughly track each other, whether up or down, does not mean that the phenomena they are describing are somehow connected.
To take a fascinating current example, the modern-day US (over the past 20 or 25 years) shows two significant and inescapable trends. One is the very steep rise in the number of guns in private ownership, and especially handguns of various kinds. The other dramatic change is the collapse of crime of all sorts, especially violent crime, including homicide. Whatever your age, sex, location, or race, you are vastly less likely to die as the result of private violence in the US than you were a quarter-century ago, and yes, that does take account of all the well-publicized instances of mass murder. However horrific such incidents might be, statistically at least, mass murders are a microscopic portion of the overall homicide statistics in the US.
Any future discussion of US social history absolutely has to take account of that tectonic change in violent crime, and to try and explain it. I make an effort in my current book on US History since 2000.
As a graph, the rise in gun ownership looks like a daunting Alpine peak. The graph for trends in violent crime and homicide, meanwhile, looks like a terrifying ski slope, downhill all the way.
But for current purposes, what do we do with the two trends? Someone could argue that more guns serve to deter crime – more guns, less crime – but that linkage responds poorly to closer study. The simplest conclusion is that the two phenomena are just not connected: they are independent of one another.
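As a toy demonstration of why matching graphs prove so little, here is a short Python sketch with fabricated series: one rises steadily, one falls steadily, and they are generated with no connection to each other whatever, yet their correlation comes out close to –1. All numbers are invented for illustration.

```python
import random

# Two invented series with no causal link: one trends up, one trends down.
random.seed(0)
years = range(25)
series_up = [100 + 8 * t + random.gauss(0, 5) for t in years]       # e.g. a rising count
series_down = [10 - 0.3 * t + random.gauss(0, 0.4) for t in years]  # e.g. a falling rate

def pearson(xs, ys):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(series_up, series_down), 2))
# Prints a value near -1.0: a near-perfect "correlation" between two series
# that were built to be independent. Shared time trends alone can produce
# striking correlations that imply nothing about causation.
```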
I’ll return to these matters in an upcoming post with a discussion of religious statistics, and what we can and can’t say about the rise and fall of particular churches and faiths.
If you want a really smart and readable account of how to read social statistics, there is a wonderful series of books by Joel Best: Damned Lies and Statistics (2001; new edition 2012), More Damned Lies and Statistics (2004), and Stat-Spotting: A Field Guide to Identifying Dubious Data (2008), all from the University of California Press.