Representative Ben Baker wants the children of Missouri to study the bible in our public schools. His bill – House Bill 267 – has already passed the House and is now being considered by the State Senate. I testified against the bill at its public hearing this week, and there I heard a new argument offered in its favor: studying the bible, we were told, makes you perform better in school. Republican State Senator Ed Emery cited the work of Professor William Jeynes of California State University which, he said, demonstrates that students who study the bible can expect to see their GPA increase by an average of one point compared with their non-bible-tutored peers.
The problem is, this is a complete misrepresentation of Jeynes’ research (though one, I’m sad to say, he has been happy to promote).
The main study referred to in the hearing is a meta-analysis performed by Jeynes and published in 2010 examining “The Relationship Between Bible Literacy and Behavioral and Academic Outcomes in Urban Areas.” This paper looks at 11 other studies to see if there is a relationship between the variables he’s interested in: bible literacy on the one hand; behavioral and academic outcomes on the other. The idea is that if he can show that there is a positive relationship between biblical literacy and good behavior/academic performance, then he can argue more easily for the inclusion of bible literacy lessons in public schools.
To its credit, Jeynes’ meta-analysis does find quite strong relationships between the variables he’s interested in. According to these studies, students who tend to score higher on tests of biblical literacy also tend to behave better and do better in school. As Jeynes puts it:
“The results of this meta-analysis indicate that Bible literacy is associated with positive behavioral and academic outcomes. The relationship between Bible literacy and academic outcomes was especially large. Also notable is the fact that every single study that examined Bible literacy indicated positive effects.” (p. 536-537)
Does this mean that we should rush to introduce “bible as literature” classes in public schools everywhere? Absolutely not.
The main problem with this approach – a challenge which rears its head frequently when researchers try to show the utility of particular educational interventions – is that demonstrating a statistical relationship between two variables does not, in itself, show that one of the variables causes the other. This is a basic problem in social science: a relationship between A and B doesn’t show that A is the direct cause of B. It is quite possible, for instance, that a third unexamined variable is responsible for changes in both A and B. It is possible that B causes A, rather than A causing B. It is possible that A and B are not causally related at all, and the observed relationship is simply a statistical accident. Much more work must be done after finding a relationship between A and B before we can conclude that A causes B.
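To make the confounding worry concrete, here is a minimal simulation with entirely invented numbers (nothing here comes from Jeynes’ data). A hidden third variable – imagine something like a supportive home environment – drives both A and B, while A has no effect on B whatsoever. The two still end up strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confounder Z (e.g. a supportive home environment).
# It independently raises both "bible literacy" (A) and "GPA" (B);
# A has no direct effect on B at all.
n = 10_000
z = rng.normal(size=n)
a = 0.8 * z + rng.normal(scale=0.6, size=n)  # A depends only on Z
b = 0.8 * z + rng.normal(scale=0.6, size=n)  # B depends only on Z

r = np.corrcoef(a, b)[0, 1]
print(f"correlation between A and B: {r:.2f}")  # strong, despite no A->B link
```

A study which only measures A and B would report exactly the kind of “strong relationship” Jeynes’ meta-analysis finds, even in this world where teaching A would do nothing at all for B.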
It’s a red flag, then, that in the introduction to Jeynes’ meta-analysis he writes the following:
“Conducting a meta-analysis will help lay people and academics alike fathom what will emerge as the likely effects if a course on the Bible as literature were permitted in the public schools.” (p. 526)
This is just not true. Conducting a meta-analysis examining these specific questions will show what relationships exist between the different variables analyzed in those studies, and particularly whether those studies show the same relationships or different ones. But unless the studies themselves are able to provide evidence of a causal relationship between the variables they examine the meta-analysis will provide no solid reason to think that any particular effects will ensue if public schools start teaching courses on the bible as literature.
How would a study start to secure a causal relationship between two variables? The best way is through experimental tests. Ideally, in the educational space, that would look something like this: you’d set up three groups of students which are relevantly similar (same age, same gender mix, same ethnic mix, same socio-economic background, etc.); then run a set of tests to see where the kids in those groups stand on the variables you’re interested in (here, their general academic performance); then you’d give one group the educational input you’re interested in (a course on the bible as literature), give the second a different-but-related intervention (say, a course of the same length on Hamlet – or, if you wanted to be spicy, perhaps the Koran), and give the third group no intervention at all; then you’d give them all the same tests you gave them at the start, to see whether one group outperforms the others. That way, you are getting much closer to the question you are really interested in: does teaching kids the bible make them do better in school, compared with other things we could be teaching them? (And even then there are often lots of other variables you haven’t controlled for which could undermine your causal inference – educational research is really hard.)
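The pre-test/intervention/post-test design above can be sketched in a few lines. All the numbers are invented for illustration – this simulates a “null effect” world where every group drifts up slightly over the term and the intervention itself adds nothing, which is exactly the possibility a controlled design exists to rule in or out:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-test GPAs for three matched groups of students.
n = 200
pre = {g: rng.normal(2.8, 0.4, n) for g in ("bible", "hamlet", "control")}

# Suppose every group drifts up by +0.05 over the term regardless of
# what it was taught, and the interventions themselves add nothing.
post = {g: scores + 0.05 + rng.normal(0, 0.3, n) for g, scores in pre.items()}

for g in pre:
    gain = (post[g] - pre[g]).mean()
    print(f"{g:>8}: mean GPA gain {gain:+.2f}")
```

Only the *difference in gains between groups* would license a causal claim about the bible course – a correlational study, by contrast, never gets to make this comparison at all.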
Exactly one of the eleven studies in Jeynes’ meta-analysis does any work to establish a causal relationship between bible literacy classes and academic achievement. In fact, only one of them analyzes the effects of any educational intervention at all – and this is an unpublished doctoral dissertation which says nothing about academic achievement. So any causal implications made on the basis of this meta-analysis are merely speculations consistent with the data, not statements evidenced by the data. It is surprising, then, that Jeynes’ analysis of the data includes very few references to the problem of establishing a causal link between the variables he is tracking. The only extended discussion of whether such a relationship can be said to exist at all is this extraordinary sentence:
“To the extent that a cause and effect relationship likely exists between Bible literacy on one hand and behavior and academic outcomes on the other, one can suggest a number of reasons why this relationship exists.” (p. 537)
In other words: “I have presented no evidence at all that a causal relationship exists, and strictly speaking the studies I have analyzed cannot provide such evidence. However, were we to make-believe such a relationship, here is what might bring it about – and we all know it’s really there, don’t we?”
I don’t want to be unfair to Jeynes. It is perfectly reasonable, when we discover a strong relationship between two variables which we think plausibly might be connected, to hypothesize what a causal relationship between the variables might be, so that future researchers might focus their experimental tests. Sometimes, in social science, the best we can do is offer strong relationships between two variables and a plausible causal theory which links them – experiments are not always possible.
But it is a primary responsibility of researchers to be crystal clear when we are reporting the results of our studies, and when we are merely hypothesizing causal mechanisms which might explain them. Eliding the two in the way Jeynes does in this paper is not responsible. Even without looking closely at the constituent studies in this meta-analysis, we can already conclude that it can offer no evidence that bible literacy classes will improve student achievement: the studies included just don’t have the requisite features.
Once we do look closely at the studies, though, the problems with this meta-analysis get worse. Much worse.
If we are interested specifically in the claim that bible classes improve academic outcomes, this analysis provides basically no relevant information at all. Jeynes himself notes that “only 3 studies [in the meta-analysis] focused on academics” (p. 536). So of the 11 studies included in the overall analysis, only three of them had anything to say about the academic achievement of students at all – and two of those seem to be Jeynes’ own research! So we have two studies by Jeynes, and one by someone else – that’s the entire basis for the Senator’s claim.
Worse still (and this is when the whole sorry affair becomes truly comedic), the one study of these three which is not Jeynes’ own work is from 1928. It is common practice in meta-analyses of this sort to consider only recent studies, partly because research methods change a lot and partly because the educational environment can shift dramatically over time, such that results of a study a few decades old are not particularly useful today. This study – the only study in the entire meta-analysis which investigates a relationship between biblical literacy and academic achievement and was not conducted by Jeynes himself – is almost a century old! Once you remove that study from the analysis, the observed effect size is reduced dramatically (p. 536).
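Why does dropping one study change the pooled result so much? Because a pooled effect size is essentially a weighted average, and a single study with a large effect can drag the whole average up. Here is a toy illustration of that arithmetic – these effect sizes and sample sizes are invented for the sketch and are not Jeynes’ actual figures:

```python
# Hypothetical studies: (effect size d, sample size).
studies = {
    "study_1928": (1.00, 300),  # one old outlier with a large effect
    "study_a":    (0.20, 400),
    "study_b":    (0.25, 350),
}

def pooled(d_n):
    # Simple sample-size-weighted mean of effect sizes
    # (real meta-analyses weight by inverse variance, but the
    # sensitivity point is the same).
    total_n = sum(n for _, n in d_n.values())
    return sum(d * n for d, n in d_n.values()) / total_n

print(f"with outlier:    d = {pooled(studies):.2f}")
without = {k: v for k, v in studies.items() if k != "study_1928"}
print(f"without outlier: d = {pooled(without):.2f}")
```

This is why sensitivity analyses – recomputing the pooled effect with each study removed in turn – are standard practice, and why an effect that collapses when one ninety-year-old study is excluded should not be treated as robust.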
So when it comes to the question at hand – whether introducing bible classes into public schools will improve student achievement – this is hardly a meta-analysis at all: it’s just a review of Jeynes’ own two studies! I can’t find the full text of both these studies online – one was not even published at the time it was included in the meta-analysis – but neither can support the causal claims Senator Emery and Prof. Jeynes seem to want to make because they are not experimental studies.
There are other problems. Jeynes’ meta-analysis glosses over some absolutely enormous differences between the educational environments in which its constituent studies took place. Two of the studies are from the UK, where comparative religious education is required by law, and where the religious landscape is profoundly different. One is about college students, not high school kids. One is not about students at all, but adults who may or may not be in school (Jeynes at least calculates each of his effect sizes after removing this one, but it seems odd to have included it at all). Jeynes claims that “the total number of participants [included in all studies in the meta-analysis] exceeded 50,000” (p. 529), but this figure relies on double-counting the 25,588 participants in a single survey whose results were analyzed twice (p. 533). [Side note: imagine how closely I must have read the study, and how assiduously I must have followed up each citation, to have discovered this error.]
The bottom line is this:
- Jeynes’ 2010 meta-analysis offers no convincing evidence at all that introducing bible literacy classes to US public schools will lead to increased academic achievement.
- Only one of the eleven studies in the analysis even has the experimental structure which would be required to secure that causal claim, and it is an unpublished doctoral dissertation.
- Only three of the studies investigate even a relationship between academic achievement and biblical literacy. One of those is from 1928 (!), and the other two are by Jeynes himself.
- The study is written in such a way as to hint at a causal relationship far beyond that which the evidence can support – an impression reinforced by Jeynes’ public statements and non-academic writing.
The use of this manifestly inadequate research to promote their political agenda reveals that the politicians involved have either completely misunderstood it and are misrepresenting it by accident, or are willing to ignore the problems with it and are misrepresenting it on purpose. In either case, they should stop.
These posts take a lot of work. Consider supporting me on Patreon!