Until relatively recently, I had no idea that there was such a word as “Wikipediatrician.” But having come across it, I was inspired to revisit the question of how to view this widely used source as an educator. The term is used by some to refer to those who are so immersed in online sources that they fail to consult other kinds of resources even when they need to. And yet the punny word sounds like it ought to denote a doctor who seeks to cure Wikipedia of the things that ail it.
But what disease, if any, does Wikipedia suffer from? If we are to offer a cure, first we must have an accurate diagnosis. And I’m less convinced than I used to be that there is something inherently wrong with Wikipedia – although there are definitely issues with some of those who use it. Encyclopedia articles, whether written by individual scholars or crowdsourced anonymously by as much of the whole human race as chooses to participate, summarize the conclusions of researchers but by definition do a poorer job of making clear how those conclusions are reached – namely, through painstaking study of individual details and then synthesis of the results. The case for evolution or the historicity of Jesus is not made in Wikipedia articles or even in popular books by Richard Dawkins, Jerry Coyne, Bart Ehrman, or Maurice Casey, and denialist efforts to poke holes in those summaries will always manage to persuade some. But that is not because of shortcomings in the summaries, nor who authored them, but a failure on the part of readers to understand how summaries relate to the processes whereby academic conclusions are drawn.
And so I think that I may have been too harsh towards Wikipedia at times in the past, when the real issues are with how it is used. What we “know” collectively, but no one of us is in a position to know or investigate individually, is a form of knowledge. Perhaps the issue is that, instead of a dichotomy between knowledge and non-knowledge, we need a hierarchy of knowing? As a recent article in 3QuarksDaily emphasized, there are different ways of knowing, and a BioLogos article a while back made a similar point. Collective crowdsourcing and sharing simply have to be one of those ways, since there is no way of investigating everything we need to individually. Sooner or later, we have to trust, and in some ways and/or in certain instances, trusting a community may make as much sense as trusting a highly specialized individual expert. Because, as Yuval Harari emphasizes, we all have limited information, and the idea that any of us reasons in a wholly independent fashion is a deception.
In navigating these questions, the solution may not be to keep students away from Wikipedia, but to get them to write and edit Wikipedia articles, since there is evidence that doing so contributes to their own information literacy development, as well as making a positive impact on our collective crowdsourced knowledge. But in the process, they also need to learn when it is safe and appropriate to rely on a consensus of the general public or the interested, and when one needs to seek out the consensus of those with training and expertise. In a pair of blog posts that I have been meaning to mention here for more than a year, Keith Reich has discussed the matter of trusting experts. In the first of those posts, he wrote:
Once one has achieved a level of expertise in a subject, his or her hard wrought conclusions ought to be trusted, at least by those with no business questioning them. We ought to trust that those who have put in the hard work of learning the depth and breadth of their field know what they are talking about when it comes to their conclusions in their field. Yet, since the internet seems to democratize all voices, many feel it their duty to inform the public that the experts are wrong. This is a shameful practice and one that ought to be ignored. Yet, all too often people listen to those spouting on about things they have no business spouting on about. Is there a good solution to this problem, or is this the price one pays for the convenience of the internet?
That sounds a bit like the fallacious appeal to authority, when applied to individual experts. In those instances in which my own views are idiosyncratic, someone outside of my field would do better to trust the consensus than me, until such time as I persuade my peers to adopt my conclusions. Reich clarified this point in his follow-up post, in which he wrote:
While I said in my last post that one should not disagree with experts if one is not qualified to do so, I should give the following caveat: what I was really talking about was expert or scholarly consensus. Individual experts may not be correct. A particular scholar may hold an idiosyncratic, minority, or fringe opinion. Individually, experts are often wrong on particular issues. Yet, there is something called scholarly consensus which non-experts have no ability to judge adequately.
A scholarly consensus is when the vast majority of experts in a given field, with the relevant skills and knowledge, agree that the evidence points to one conclusion. Depending on the field of study, scholarly consensuses can be quite rare. Experts within any given field disagree on plenty of issues. Scholars are not inherently prone to agree with each other. Therefore, when the vast majority in a given field do agree, non-experts ought to respect that scholarly process that led to the consensus. Why these consensuses ought to be trusted is that what is being claimed when a consensus is reached is that, of all of the people with the relevant skills and expertise, looking at the same evidence, the vast majority reach the same conclusion. Scholarly consensuses are hard-fought and contentious matters and are not reached lightly.
Another reason scholarly consensus ought to be trusted by non-experts is because, built into the very fabric of the scholarly world is a strong motivation to overturn consensus. Most scholars, myself included, want to be respected by our peers. Because scholars spend their lives thinking and producing ideas, we want those ideas recognized for their merit by other scholars. One of the best ways to gain notoriety and respect in one’s field is to successfully challenge a scholarly consensus. If that occurs, what it means is that a particular scholar has gone against the majority opinion of experts, and has been able to convince the vast majority that his or her position is correct. He or she has caused the majority of experts to change their mind. Therefore, there is a built-in motivation for scholars to challenge consensuses. And, this does happen. Long-held consensuses are often challenged. Most of these challenges are not successful because the evidence does not support them. But, sometimes they are successful, and the consensus is overturned, a new consensus is formed, and the collective knowledge of experts in the field grows.
See also Bart Ehrman’s blog post about how to find out “what most scholars think.” Michael Pahl also tackled the subject, in a blog post in which he writes:
[T]his is why that “strong majority” is so important. Again, having participated among experts, having gone to numerous academic conferences, I know that all those personal biases don’t normally come together into some large-group bias. Rather, the group acts as a system of checks and balances and such individual biases tend to get leveled out in the group. After all, academics are a pretty critical lot, by both temperament and training.
And some grand conspiracy among experts? Organizing such people is like herding cats. Seriously. Academics in particular don’t herd easily, if at all. (I know, I’ve been a department chair.)
That, again, is why the “strong majority” is so significant. If somewhere around 95% of published climate scientists from around the world say climate change is real and human activity is the root cause, for example, then, since I’m a non-expert in climate change, I’m going to believe them. Quite frankly, the idea that this many scientists from this many countries employed by a mix of public universities and government agencies and private companies and NGOs are involved in some giant hoax is, to me, far harder to believe than that these scientists are simply correct.

This “strong majority” of experts is important. It’s why we know the earth is round, that it revolves around the sun, and that it’s 4.5 billion years old. It’s why we know the Bible is a collection of ancient human writings from multiple cultures across centuries. It’s why we know that fascism has a terrible track record, as does any system that places too much power in the hands of too few with no checks and balances. It’s why we know you can stick a cryoballoon catheter up someone’s vein to their heart, inflate the balloon with nitrogen, scar the surrounding tissue, and so have a good shot at correcting atrial fibrillation (okay, I only know that because a friend is having that surgery done this week—amazing).
The “strong majority” of experts has given us the knowledge and technology we all take for granted all around us. Imagine a world without modern medicine, without high-speed transportation and communication, without electricity. Imagine a world without constitutional democracies or declarations of human rights.
All this and more is the result of the accumulation of expertise, experts collaborating together, building on the expertise of those before them. Ironically, it is only because of the expertise of experts that someone blogging in their basement can rail against experts and their expertise.
And so, at the end of the day, I’m with the experts. No, I don’t believe everything every expert says, even on their area of expertise. My own experience with expertise has taught me that. But trusting in the strong majority of experts has done us pretty well as a human race—this my experience with expertise has also taught me…
There has been a lot of blogging and other online writing on these sorts of topics in recent months. First Monday had an article a while back on credibility, trust, and authority in relation to Wikipedia (and see also their article on the use of Wikipedia for educational purposes in an Australian context). Darwin-bashing is but one tactic used in support of science denial, the deliberate sowing of distrust in experts in order to achieve ideological or profit-related aims. Figuring out how to combat dismissal of scientific conclusions, while also acknowledging that published scientific results can be wrong, is a challenge, but here too the solution seems to be to focus on consensus and not on individual results. Individual scientists and scientific papers are wrong quite often; the scientific community is much less likely to be wrong, and certainly less likely to be wrong than someone who is not even directly engaged in the scientific process. And if we move beyond mere wrongness to the possibility of conspiracy, David Bailey has a nice explanation of why scientific consensus is not likely to be due to a conspiracy:
In short, there is no possibility whatsoever that major facts of science are being withheld or misrepresented in a conspiracy, at least not for any significant consensus conclusion of modern science. Alleged frauds of evolution, climate science, moon landings, vaccinations or cancer cures would require tens or hundreds of thousands of people, without any exception, to keep secrets over many years, which is exponentially unlikely (contrast these alleged conspiracies to the actual frauds mentioned at the start of this article, which involved only one or a handful of people). And nothing can stop maverick scientists from publishing papers that overturn conventional wisdom.
Unpleasant and inconvenient as some scientific findings may seem, we must accept them (provided they have passed peer review, have been examined and confirmed by numerous independent researchers and are accepted as well-established consensus in the field), not as incontrovertible truth, which can never be provided in science, but as reliable facts on which we can and must construct a rational worldview. To believe otherwise is to detach ourselves from modern scientific progress.
Randal Rauser also blogged about this, and why Donald Trump’s denialist tendencies are not the cause but the result of a widespread popularity of denialism and conspiracy thinking. Bob Cornwall made a connection between Pizzagate and information literacy, and NPR had a piece about a class that seems to do a good job of teaching skills that help students see through fake news of that sort.
In an era when there is talk of the “death of expertise” and living in a “post-truth” world, we seem to need a multi-pronged approach. We need to cultivate the ability to be both appropriately skeptical and appropriately trusting of human perception as a whole, globally and across cultures, including the media. And we need to accept both the limits and the power of scholarship and expertise, especially, or at least, when the experts agree. Perhaps above all else we need to focus on cultivating the kind of humility that can acknowledge the limited perspective and imperfect perception of any one individual, nation, political party, or any other grouping.
Of related interest, see Matt Sheedy’s account of attending a skeptics’ conference, and a recent post on the ethics of integrating social media into the classroom. Note too this fast-approaching deadline for a conference on liberal education and discernment:
Here is another conference with an extended deadline that relates to this topic:
And while this call for papers deadline is past, the conference itself remains interesting, and a testament to the increasing focus on studying questions of truth, authority, and expertise:
An infographic comparing the Bible and Wikipedia perhaps deserves a mention here.