The Validity of Expert Authority

September 22, 2017

Some of you might see this as concerning a horse that has bolted after being flogged to death. Zombie horses aside, let me return to the conversation about consensus and, in this case, an appeal to expert authority. I think this piece has a goodly amount of really meaty and interesting ideas to get our heads round. To give you the context, here are the precursors to this post:

The debate has been taking place with Spiritual Anthropologist (SA) and now Verbose Stoic (VS). From my point of view, it has been in good spirit, and although other commenters have been “robust” in their clashes, I hope to continue it in good faith. The key point of the debate is the contention over whether scientific consensus in any way reflects the robustness of a scientific theory. Without really defining either consensus or robustness, things can get difficult. I have countered that consensus reflects the pragmatic and (epistemologically) coherent aspects of the scientific theories that command it. SA is demanding empirical data to show the connection between consensus and robustness. This is fine as an ideal. My point was that the pragmatic and coherent aspects of consensus theories are implicitly, or even explicitly, built into those theories. That is why they have consensus.

One idea that has popped up has been about experts. As SA has said:

You mean expert knowledge. Yes. Appeal to authority can indeed be valid. And that’s fine. But the issue then is “is that enough?” Yes. If the person is indeed an expert.

As well as:

Until then, we have no idea as to how bias is or isn’t eliminated by larger and larger groups of experts.


Sure, but once we assume expertise on a topic, that’s it. It does not matter if you have one expert or 100.


Yes, I am not talking about whether or not an authority IS correct, just whether or not we are to assume that an authority is correct. If a person is reasonably assumed to be an expert, then appeal to authority is valid. Otherwise it is fallacious. Also, if an argument is made, then you can no longer just say “but this expert said otherwise.”

And when someone gets a second opinion, they do not just go to another GP. They go to someone with expertise on the specific issue. Remember, a GP is not actually an expert on a specific illness. In any case, I explained this all. There’s not much else I can do. If you want to take consensus on faith, then go for it.

There is a lot to unpick here. First of all, this is connected to the idea of groupthink. Yes, there is evidence to show that humans are biased by the opinions of those around them, such as in the moral dimension. The thing here is that the scientific method should, and in the long run does, mitigate such biases. This is the direction of the scientific process, as well as being one of the aims of the peer-review process, if it works correctly. Of course, peers may be more accepting, prima facie, of work that confirms their own positions. Eventually, though, the scientific method should weed these issues out. Data wins, right?

The More, the Merrier

An interesting piece of research into comparing expert knowledge (Encyclopedia Britannica) to crowd-based knowledge (Wikipedia) states:

We find that Wikipedia articles are more slanted towards Democratic views than are Britannica articles, as well as more biased. The difference in bias between a pair of articles decreases with more revisions.

So, we have some evidence to support the notion that expert knowledge is less biased, and that the more something is peer-reviewed and edited, the less biased it becomes. In other words, the more people working towards a “body of knowledge”, the less biased (more accurate?) it becomes. To refine it more:

Our study finds that crowd-based knowledge production does not result in articles with more bias than articles produced by experts when the crowd-based articles are substantially revised. This is consistent with a best-case scenario. Contributors with different ideologies engage in fruitful online conversations and do not segregate into communities with others who share similar views (e.g., Mullainathan and Shleifer 2005, Gentzkow and Shapiro 2011). We think this is an important and novel finding.

In looking at the science of predictions and their accuracy (which, in a sense, can be seen as a parallel to this issue), Philip Tetlock and colleagues have found:

Intelligence helps. The forecasters in Tetlock’s sample were a smart bunch, and even within that sample those who scored higher on various intelligence tests tended to make more accurate predictions. But intelligence mattered more early on than it did by the end of the tournament. It appears that when you’re entering a new domain and trying to make predictions, intelligence is a big advantage. Later, once everyone has settled in, being smart still helps but not quite as much.

Domain expertise helps, too. Forecasters who scored better on a test of political knowledge tended to make better predictions. If that sounds obvious, remember that Tetlock’s earlier research found little evidence that expertise matters. But while fancy appointments and credentials might not have correlated with good prediction in earlier research, genuine domain expertise does seem to.

Practice improves accuracy. The top-performing “super forecasters” were consistently more accurate, and only became more so over time. A big part of that seems to be that they practiced more, making more predictions and participating more in the tournament’s forums.

Teams consistently outperform individuals. The researchers split forecasters up randomly, so that some made their predictions on their own, while others did so as part of a group. Groups have their own problems and biases, as a recent HBR article explains, so the researchers gave the groups training on how to collaborate effectively. Ultimately, those who were part of a group made more accurate predictions.

Teamwork also helped the super forecasters, who after Year 1 were put on teams with each other. This only improved their accuracy. These super-teams were unique in one other way: as time passed, most teams became more divided in their opinions, as participants became entrenched in their beliefs. By contrast, the super forecaster teams agreed more and more over time.

More open-minded people make better predictions. This harkens back to Tetlock’s earlier distinction between foxes and hedgehogs. Though participants’ self-reported status as “fox” or “hedgehog” didn’t predict accuracy, a commonly used test of open-mindedness did. While some psychologists see open-mindedness as a personality trait that’s static within individuals over time, there is also some evidence that each of us can be more or less open-minded depending on the circumstances.

Training in probability can guard against bias. Some of the forecasters were given training in “probabilistic reasoning,” which basically means they were told to look for data on how similar cases had turned out in the past before trying to predict the future. Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different than the past. The forecasters who received this training performed better than those who did not. (Interestingly, a smaller group were trained in scenario planning, but this turned out not to be as useful as the training in probabilistic reasoning.)

Rushing produces bad predictions. The longer participants deliberated before making a forecast, the better they did. This was particularly true for those who were working in groups.

Revision leads to better results. This isn’t quite the same thing as open-mindedness, though it’s probably related. Forecasters had the option to go back later on and revise their predictions, in response to new information. Participants who revised their predictions frequently outperformed those who did so less often.
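The “probabilistic reasoning” training described above boils down to anchoring on the base rate of similar past cases and then adjusting for case-specific evidence, rather than judging each case in isolation. A minimal sketch of that idea using Bayes’ rule, with every number invented purely for illustration:

```python
# Toy illustration of base-rate anchoring: start from how often similar
# cases turned out a certain way in the past, then revise that estimate
# with new evidence via Bayes' rule. All figures are made up.

def bayes_update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior P(event | evidence), given P(evidence | event) and
    P(evidence | no event)."""
    numerator = likelihood * prior
    return numerator / (numerator + false_alarm * (1 - prior))

# Hypothetical base rate: incumbents won 7 of the last 10 similar elections.
base_rate = 0.7

# A favourable poll assumed twice as likely if the incumbent is on course
# to win (0.6) than if not (0.3).
forecast = bayes_update(prior=base_rate, likelihood=0.6, false_alarm=0.3)
print(round(forecast, 3))  # → 0.824
```

The point of the training is visible in the numbers: the forecaster starts at 70%, not at some gut feeling, and the poll nudges the estimate rather than replacing it.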

I think this is very interesting because it feeds back into the previous work I cited, showing that revisions are important. Revision is what the scientific method does: the whole process is, as Baba Brinkman puts it, “performance, feedback, revision”.

Man, I still love that.

The Consensus on What?

Steven Novella, back in 2008, wrote about this very topic:

There are, of course, examples of when the scientific consensus proved to be wrong. In fact, this happens all the time – whenever evidence points in one direction but later evidence reveals a different answer or (more likely) reveals a deeper reality. It is the nature of science that it constantly changes in response to new evidence, and so the consensus of opinion is a moving target.

But not all scientific consensus is created equal.

We therefore need another assessment in addition to what the consensus is on a given question – we also need to consider how solid the consensus is. There are some questions for which the scientific consensus is so solid (reflecting an overwhelming amount of evidence) that it would be perversely absurd to deny it. The earth is an oblate spheroid. DNA is the molecule that carries hereditary information. Life on earth is the result of common descent. Infectious illnesses are caused by microscopic organisms.

Other conclusions are solid but not beyond the possibility of revision. Still others are probable but preliminary. And some scientific questions are genuine controversies, without a clear consensus. The more solid a consensus is, the less likely it is to be overturned in the future. It’s not impossible for a consensus to be overturned – it’s just progressively unlikely as the consensus becomes more solid.

When is a consensus of scientific opinion not reliable? If the scientific process is working properly, then never. So the real question is, when does the scientific process break down? Here are what I think are the red flags for a supposed consensus of which you should be skeptical:

– The consensus seems premature.  If we have only been studying a problem for a short time, the overall amount of evidence is small, or there has not been time for proper replication of experiments to occur, then scientific opinions are likely premature.

– If the consensus emerges from a highly politically or ideologically charged atmosphere.

– If the consensus exists only within a subculture, such as a fringe group looking to promote a predetermined conclusion.

– If the consensus is dominated by industry self-interest.

As I said – these situations do not represent genuine scientific consensus, but rather a breakdown in the system or an ideologically or otherwise motivated subgroup looking to masquerade as a scientific consensus.

I think this deals with one of the problems in the ongoing debate I have been involved in, namely that there is wide variation in what constitutes a scientific theory about which scientists might have a consensus. That we theorise that drug X has effect Y, or that burning fuel in such a way in such an engine has such an effect, is theoretical in the sense that everything outside of cogito ergo sum is a theory of sorts. So much of what is theorised by scientists (and this includes the disciplines of technology, engineering, chemistry, pharmacology, etc.) is so taken for granted that we barely consider the fact that there is overwhelming consensus for the claims involved.

No one doubts the efficacy of paracetamol to the point that we epistemologically doubt the robustness of the theories involved in its usage because, you know, consensus doesn’t reflect robustness.

We all take scientific consensus for granted every day, all of the time. We do it when we turn on the TV, use the car, turn on the stove or take some well-established prescription drug. There are literally thousands, no, hundreds of thousands of scientific positions adhered to by consensus that we take utterly for granted, and that perhaps SA hasn’t even remotely considered: the theory that the sun will keep “rising” every day, that toothpaste works to inhibit dental decay, that mobile phone masts do X or Y. These, themselves, are based on other theories, and eventually on foundational scientific laws.

Statistically, if you were to collect all the claims that have scientific consensus and compare them to all the ones that don’t, I would bet you my mortgage that there would be a correlation with robustness.

I wouldn’t put my mortgage on one particular interpretation of quantum physics. Would you? If SA really dismisses consensus, then what is he saying about evolution and its robustness versus the De Broglie–Bohm interpretation of quantum mechanics? To me, consensus is so pragmatically useful that we forget it even exists as a thing in almost all cases. The concentration on climate science, in this manner, derails the idea of what consensus is and where we so often have it.

Many of these areas, claims and theories are very different to massive systems of theories such as the work involved in climate science or evolution.

Then there is the idea that a consensus on evolution (or, indeed, climate science) does not preclude smaller, fuzzier areas around its edges.

For example, with climate science there are so many sub-claims – the amount the climate will change, the amount the sea-level will rise, the amount of this and that and the exact effect of this and that. Changing data and views on these edges of the larger theory do not necessarily invalidate the larger theory.

Chris Hallquist, in his piece “Trusting Expert Consensus”, refers to a book that could be an interesting read:

I recently read Bryan Caplan’s The Myth Of The Rational Voter. It’s an excellent book in a lot of ways, and one of those ways is how it got me thinking about the issue of expert consensus. It’s pushed me more towards thinking that hard data on what the experts in a given field believe about their area of expertise is incredibly useful. Specifically, based on examples from a variety of fields (listed below the fold), I’ll conclude:

  • When the data show an overwhelming consensus in favor of one view (say, if the number of dissenters is less than the Lizardman’s Constant), this almost always ought to swamp any other evidence a non-expert might think they have regarding the issue.
  • When a strong but not overwhelming majority of experts favor one view, non-experts should take this as strong evidence in favor of that view, but there’s a greater chance that evidence could be overcome by other evidence (even from a non-expert’s point of view).
  • When there is only barely a majority view among experts, or no agreement at all, this is much less informative than the previous two conditions. It may indicate agnosticism is the appropriate attitude, but in many cases non-experts needn’t hesitate before having their own opinion.
  • Expert opinion should be discounted when their opinions could be predicted solely from information not relevant to the truth of the claims. This may be the only reliable, easy heuristic a non-expert can use to figure out when a particular group of experts should not be trusted.
Notable incidental conclusions relating to specific fields include:
  • Economics doesn’t have the kind of overwhelming consensus you find on some issues in the natural sciences, but there still seem to be a lot of things most economists agree on.
  • Atheists shouldn’t be timid about citing the majority opinion of philosophers as evidence for atheism.
  • Non-experts should default to thinking probably there was a historical Jesus.
  • You should really seriously accept the mainstream scientific consensus on global warming.
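Hallquist’s tiers can be read as a rough decision procedure for non-experts. A minimal sketch that encodes them, where the function name and the 75% cutoff for a “strong” majority are my own illustrative assumptions (only the roughly 4% Lizardman’s Constant comes from the list above):

```python
# Toy encoding of the tiered heuristic above. Thresholds other than the
# ~4% survey-noise floor are illustrative assumptions, not Hallquist's.

LIZARDMAN = 0.04  # rough share of survey respondents giving absurd answers


def weigh_consensus(share_agreeing: float,
                    predictable_from_irrelevant_factors: bool = False) -> str:
    """Map the share of experts agreeing to a rough evidential weight."""
    if predictable_from_irrelevant_factors:
        return "discount"          # opinions track ideology, funding, etc.
    if share_agreeing >= 1 - LIZARDMAN:
        return "overwhelming"      # dissent is within survey noise
    if share_agreeing >= 0.75:     # assumed cutoff for a "strong" majority
        return "strong"
    return "weak"                  # bare majority or no agreement


print(weigh_consensus(0.97))   # overwhelming
print(weigh_consensus(0.80))   # strong
print(weigh_consensus(0.55))   # weak
```

The last tier is the interesting one: a bare majority is much less informative than the other two, which is exactly why the fourth bullet’s “discount” check comes first.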

Defining an Expert, and Issues of Circularity

As ever with philosophical wranglings, defining the terms is key. And even when we define the terms, who gets to arbitrate who attains expert status, and when? SA’s original point is that it is the knowledge, not the knowledge-carrier, that is important. One issue is that no one has access to the ultimate “objective” truth of reality. We can’t know an awful lot of things, starting with whether or not we are in The Matrix. We can’t know things-in-themselves, and everything is subjectively interpreted. We have the scientific method and, arguably, consensus to help guide us through our epistemological journey, but how do we check this? Well, when we ourselves are not experts, by appealing to the very system we are trying to check. Verifying the quality of an expert is often about appealing to other experts, and to the scientific model and system.

If SA wants us to see how well aligned the consensus on X is to objective truth, we cannot do this, because we humans cannot access objective truth (if, indeed, such an idea properly exists), so we appeal to what science and experts can tell us. There is arguably an infinite regress, and no way of satisfying SA’s demands.

As a result, I favour the pragmatic and coherent approach, to which SA hasn’t really replied.

We use rules of thumb that may well have exceptions but that are probabilistically valuable. We may appeal to an expert in medicine who has all the qualifications but who may actually know less on a given medical subject than someone unqualified who has invested their whole life in studying that particular area. This may happen, but surely we can rely on experts usually having more knowledge than non-experts in a given area. Interesting discussion of experts and consensus can be found in David Harker’s book Creating Scientific Controversies: Uncertainty and Bias in Science and Society.

When considering experts, and our appeal to particular ones (such as those who agree with our predispositions), it might be worth checking out a paper by Dan Kahan – “Cultural cognition of scientific consensus” – which is nicely summed up here. This includes the idea that when we disagree with experts:

Two key results in support of the basic hypothesis that scientific opinion fails to resolve societal dispute because culturally diverse individuals form opposing perceptions of what experts believe, and that individuals systematically overestimate the degree of scientific support for positions they are culturally predisposed to accept:

  1. Strong correlation between individuals’ cultural values and their perceptions of scientific consensus on risks known to divide people of opposing worldviews – people who have hierarchical and individualistic worldviews disagreed substantially with those holding egalitarian and communitarian worldviews on the state of expert opinion on climate change, nuclear waste disposal, and handgun regulation

  2. When asked to evaluate whether an individual of elite academic credentials was a “knowledgeable and trustworthy expert”, people answered based on the fit between the position the expert was depicted as adopting and the position usually associated with the subject’s worldview.

In politically or ideologically contentious areas, then, we can often interpret claims about consensus in inaccurate ways.

Testing the Accuracy of Consensuses

I want to end on this point, as it is something that SA has really failed to answer. To me, consensuses are implicitly built upon pragmatism and coherence. If a position (let’s choose something neutral here) such as “paracetamol works well for headaches” didn’t cohere at all with a whole host of other theories, and had no pragmatic use (such that, in a sense, it didn’t really work or have use in the world), but nonetheless had a consensus, you would have a weird scenario that wouldn’t really happen. We don’t see consensuses in science where the consensus is not useful or coherent. Indeed, pragmatism is wrapped up with data and interpretations matching. That paracetamol works for headaches is pragmatically useful both for the sufferer and for people working on other treatments, who can build on the findings from paracetamol.

Poor science, data collections and interpretations are simply not useful. They fall down pragmatically as well as being unlikely to cohere with other systems and data collections, theories and interpretations.

Scientist Tom Solomon writes:

A scientist advocating a particular theory must propose an experiment and use her theory to predict the results of that experiment. If the experimental results are inconsistent with her predictions, then she must admit that her theory is wrong. To gain acceptance for a theory, a scientist must be willing to subject it to a falsifiable test.

If an experiment produces results that are consistent with a scientist’s predictions, then that’s good news for her theory. Just one successful test, though, is not usually enough. And the more controversial a theory is, the more experimental verification is required. As Carl Sagan said, “Extraordinary claims require extraordinary evidence.”

Wide acceptance comes from repeated, different experiments by different research groups. There is no threshold or tipping point at which a theory becomes “settled.” And there is never 100 percent certainty. However, near-unanimous acceptance by the scientific community simply doesn’t occur unless the evidence is overwhelming….

Scientific theories aren’t mere conjecture. They are subject to exhaustive, falsifiable tests. Some theories fail these tests and are jettisoned. But many theories are successful in the face of these tests. It is these theories – the ones that work – that achieve consensus in the scientific community.

We test the robustness of a theory to see how well it works in the world around us. If the data comes in that climate change is bunkum, then the consensus will change and adapt. But, as mentioned in a comment to SA and VS on a previous piece, consensus is also concerned with the best data available at a given time. If new data and knowledge turns up, the consensus may well shift.

Therefore, all scientific knowledge should be considered in the context of what we do and don’t know. I am comfortable thinking that a scientific consensus is broadly a reliable tool for understanding the world, while also knowing that there may well be “hidden variables” of which we are unaware. Knowledge we do not yet have does not invalidate the scientific consensus as a useful tool. The great thing is that when we do become aware, you can bet your bottom dollar the scientific community will get onto it, with relish.

In this way, scientific consensus is like good wine. It gets better with age.
