Leah Libresco is doing a series of posts on Daniel Dennett’s Breaking the Spell. Her two most recent posts (out of three) both seem to make the same complaint about Dennett: many of his claims are rather boring (to condense the critiques, imperfectly, into bumper-sticker form). The first of the two takes aim at this paragraph from Dennett:
In spite of the religious connotations of the term, even atheists and agnostics can have sacred values, values that are simply not up for reevaluation at all. I have sacred values–in the sense that I feel vaguely guilty even thinking about whether they are defensible and would never consider abandoning them (I like to think!) in the course of solving a moral dilemma. My sacred values are obvious and quite ecumenical: democracy, justice, life, love, and truth (in alphabetical order).
Leah correctly notes that “democracy excepted, almost everyone is in favor of the nouns he listed” (and in our society, democracy comes pretty close too). But she thinks this is a problem:
It’s cheating and unhelpful to say that your non-negotiable is The Good, which is essentially Dennett’s summation. It’s unfair to your readers, because you’re not giving them a fair shot at you, and it’s bad for you, because, when push comes to shove, you’ll have an easier time clinging to your non-negotiables if they’re a little less diffuse.
Similarly, the second post, titled “Dennett’s Thesis isn’t Evidence for Very Interesting Claims,” starts this way:
The main thrust of Daniel Dennett’s Breaking the Spell is that the history of religion is not incompatible with evolutionary theory. That sounds a lot less exciting than an attack on religion, but it’s what the book is actually about. Dennett’s book doesn’t mount up any direct evidence against the truth claims of religion, but it does make the argument that religion is something you might be reasonably likely to observe in a world where there was no god. That means the mere existence of religion is not strong evidence for the existence of god. Fine and dandy.
But that’s not really so big a claim…
I think the criticisms in the second post are partly based on a misunderstanding of Dennett’s intentions, but setting that aside, my reaction is “so what if Dennett’s claims are boring?” In the case of the first post, I’m unclear on the basis for the claim about ease of “clinging” to a value. The complaint about being unfair to readers, meanwhile, seems to imagine that discussion must be an oppositional activity (talk of “cheating” frames it as a game or sport), where players are required to stick their necks out so the other players have a fair shot at decapitating them.
Another good example of the value of boring claims is the paper “Intelligence Explosion: Evidence and Import” by Luke Muehlhauser and Anna Salamon. When I got my job working for the Singularity Institute, one of the things they had me do was read the paper, find something to disagree with, and explain why. This was actually a really hard task, because for the most part the claims are very carefully qualified.
I found a couple of nitpickable things, particularly the claims about the Gödel machine and AIXI, and the claim that good outcomes depend on solving problems in decision theory and value theory. Even then, though, the issue is mainly one of the claims not clearly being true rather than clearly being false, and the claim about the Gödel machine and AIXI is just one of several pieces of support for the more important claim that there’s a significant chance of human-level AI coming this century (which would be a big deal, as I argued in my last post).
I know for a fact that Luke (and probably Anna, though I’m less familiar with her writings) subscribes to important theses about the future of AI that didn’t make it into the paper (see here, for example, for somewhat stronger claims about when human-level AI is likely to happen). I suspect a lot of people–and not just Leah–would on those grounds criticize him for being somehow sneaky, holding back the things he really wants to say, and hiding them behind relatively uncontroversial claims.
But a paper like IE:EI has the potential to be incredibly valuable. By sticking to relatively uncontroversial claims, it can get everyone on the same page for further discussion and action. And it’s a good example of how seemingly boring claims (like “there’s at least a ten percent chance of human-level AI in the next century”) can actually have really important consequences (again, see my last post).
In fact, I suspect the academic philosophy world could benefit a lot from being more boring. Right now, it’s set up to reward finding innovative ways to be wrong. It would be nice if there were more rewards for philosophers saying things that are boring but clearly correct.