In a somewhat obscure comment on a recent post, mention was made of the problematic nature of “arguing from plausibility.” Presumably the point was that showing something is plausible or possible doesn’t demonstrate that it is correct, or in the case of history, that it actually happened.
While this is true, it must also be pointed out that, whether in the natural sciences or in history, we ultimately reach a point at which the choice is between a variety of scenarios, none of which may be strictly impossible, and the question becomes which is most likely. Even determining likelihood is not the same as demonstrating what actually happened with no room for error or uncertainty. But it is the best we can hope to achieve.
If there is something that makes an argument problematic, whether it is made by an apologist or a scholar or anyone else, it is demanding absolute proof before being willing to modify one's own conclusions, while treating mere possibility, the mere absence of definitive proof to the contrary, as a sufficient case for one's own beliefs.
Let me provide some illustrations of what I mean when I say that non-scholarly viewpoints are not distinguished from scholarly ones because the unscholarly or unscientific ones are impossible:
Is it possible that a deceitful God created the universe with the appearance of age and evolution – or, for that matter, that aliens created a simulated world that has these features and even deceives us as to the reality of the world we inhabit? Yes, strictly speaking this cannot be shown to be impossible. And so unless one is willing to accept that the data we can observe and test is a legitimate basis for drawing conclusions, it will be possible to believe almost anything.
[As an aside, this is one of the reasons it is important to make theological arguments against the kind of deceptive divine activity that has to be posited for young-earth creationism to seem plausible to anyone. Unless such assumptions are addressed, one will not make progress no matter how much evidence is presented, since a mechanism remains in place for explaining that evidence away.]
Is it possible that a rich group of ancient conspirators minted coins and left or dropped them at various places on their journeys, out of a malicious desire to mislead or at least confuse future generations about the political and social realities of earlier times? Yes. But unless at some point we decide that it makes more sense to take certain kinds of historical evidence at face value, no scenario about the past can be excluded.
How do we know that Christianity was not first invented by Nero in order to have someone to blame for the fire in Rome? Perhaps in the trauma that ensued, relatives of those wrongly accused began searching for answers about what this “Christianity” was that their relatives had supposedly adhered to, and someone forged documents to give them answers. Is that possible? Sure. The question is whether the mere fact that something is possible means that we ought to hold it as “true” – in the scholarly sense of providing the best explanation we can currently offer.
Perhaps the heart of the matter boils down to the following question: At what point should one stop asking about other possible interpretations of evidence, and accept what the majority of those who spend their working lives investigating the evidence agree on as most probable?
What differentiates mainstream science from various forms of pseudoscientific creationism, and mainstream historical study from various forms of conspiracy theory and revisionism, is nothing other than the point at which one stops introducing ad hoc explanations to avoid the implications of the evidence, and instead follows the evidence where it leads.
The line is much finer than the above statement might suggest. Every major scholarly and scientific viewpoint struggles with difficulties, explanatory gaps, and incongruent data. That doesn't mean they are invalid, just that they are imperfect, and do not altogether eliminate uncertainty or the possibility of subsequent improvement.
And so let me suggest that what distinguishes scholarship from pseudoscholarship is neither the provision of comprehensive, satisfying, completely unproblematic explanations, nor the complete absence of assumptions introduced to make sense of data. Neither is it that those who make discoveries and contribute to discussions always have certain qualifications or a particular set of professions.
Rather, scholarship is the attempt to make the best sense of the data, in a way that does justice to as much of the relevant evidence as possible.
What do others think? Is this what scholarship ought ideally to be about? Does it provide a helpful continuum, if not a sharp dividing line, that might allow science and pseudoscience, scholarship and pseudoscholarship or apologetics, to be distinguished?
Perhaps it needs to be added that the question of what makes the best sense of the data is best decided by those who spend their lives rigorously studying the subject, and not by the impression that some may have who are only somewhat acquainted with it?