Could a sufficiently accurate brain simulation really change the world?

In The Mind’s I, Daniel Dennett writes:

But now suppose we made a computer simulation of a mathematician, and suppose it worked well. Would we complain that what we had hoped for was proofs, but alas, all we got instead was mere representations of proofs? But representations of proofs are proofs, aren’t they? It depends on how good the proofs represented are. When cartoonists represent scientists pondering blackboards, what they typically represent as proofs of formulae on the blackboard is pure gibberish, however “realistic” these figures appear to the layman. If the simulation of the mathematician produced phony proofs like those in the cartoons, it might still simulate something of theoretical interest about mathematicians – their verbal mannerisms, perhaps, or their absentmindedness. On the other hand, if the simulation were designed to produce representations of the proofs a good mathematician would produce, it would be as valuable a “colleague” – in the proof-producing department – as the mathematician. That is the difference, it seems, between abstract, formal products like proofs or songs (see the next selection, “The Princess Ineffabelle”) and concrete, material products like milk. On which side of this divide does the mind fall? Is mentality like milk or like a song?

If we think of the mind’s product as something like control of the body, it seems its product is quite abstract. If we think of the mind’s product as a sort of special substance or even a variety of substances – lots ’n lots of love, a smidgin or two of pain, some ecstasy, and a few ounces of that desire that all good ballplayers have in abundance – it seems its product is quite concrete (pp. 94–95).

Forget for a moment the question of what the product of the mind really is. Even if control of the body isn’t the only product of the mind, it is certainly an important one.

With that qualification, Dennett’s claim seems surprisingly uncontroversial. For example, John Searle would object that the simulated mathematician wouldn’t really understand mathematics, but he would agree that it could churn out proofs all the same. Or Roger Penrose might claim (though I am a little unsure of what Penrose would actually say here) that such a simulated mathematician would be impossible in principle to build. But I do not think Penrose would deny that if you could build such a simulated mathematician, it would be able to do what Dennett says it could.

If that’s correct, and practical issues like building sufficiently powerful computer hardware can be worked out, the implications are potentially revolutionary. A simulated mathematician with a traditional hardware/software distinction would be especially valuable, both because the software could be copied and because it could be moved to faster hardware.

Before you make any seemingly obvious objections to this, see here and here for corrections of some misunderstandings that have cropped up. And maybe the idea of a simulated mathematician isn’t feasible. But assuming it is, is everyone on board with Dennett’s point above? It seems like a very important point, perhaps one that seems too obvious to state explicitly; but without stating it explicitly, it is unclear whether everyone accepts it.

  • JohnH

    Proof-producing computers have been around for a long time and continue to be used; they are not that interesting. Sure, if P=NP then most problems become solvable in polynomial time, but barring that, actually producing a difficult, interesting proof takes so long as to be impractical in most cases.

    An interesting point that is usually overlooked: if fully simulating a human on a computer is possible, and the simulation is equivalent to a human, then there are statements which are true but which we cannot prove, because we would be subject to the halting problem. More accurately, since we would be finite state machines and not truly Turing machines, there would be problems outside the limits of what it is possible for us to compute: statements a true Turing machine could determine to be true, false, or unprovable, but which we could not.

    • Ray

      “If simulating a human fully on a computer is possible and equivalent to a human then there are statements which are true but unprovable as we would be subject to the halting problem.”

      I find this conclusion completely unsurprising. It would be utterly shocking if humans could solve the halting problem; for starters, it would render all cryptography useless. (Does the key search algorithm halt when we restrict the first bit of the key to zero? The second? Rinse and repeat.)
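      The bit-fixing trick in that parenthetical can be sketched concretely. Everything below is hypothetical: the key size is a toy 5 bits, and since no real halting oracle can exist, the “oracle” simply runs the finite search itself. The point is only the structure of the argument: one halting query per key bit recovers the whole key.

```python
# A sketch of the halting-oracle attack on key search. All names and
# parameters here are hypothetical illustrations, not a real attack.

SECRET_KEY = 0b10110  # toy 5-bit secret key
KEY_BITS = 5

def search_halts(prefix_bits):
    """A stand-in for a halting oracle: would brute-force key search
    halt if the key's leading bits were restricted to `prefix_bits`?
    (It halts iff some key with that prefix succeeds -- here, iff the
    secret key itself matches the prefix.)"""
    n = len(prefix_bits)
    secret_bits = [(SECRET_KEY >> (KEY_BITS - 1 - i)) & 1
                   for i in range(KEY_BITS)]
    return secret_bits[:n] == prefix_bits

def recover_key():
    """Rinse and repeat: fix the next bit to 0, ask the oracle whether
    the restricted search halts, record the answer, move on."""
    bits = []
    for _ in range(KEY_BITS):
        # If the search halts with this bit fixed to 0, the bit is 0;
        # otherwise it must be 1.
        bits.append(0 if search_halts(bits + [0]) else 1)
    return bits

print(recover_key())  # prints [1, 0, 1, 1, 0], the bits of SECRET_KEY
```

      With a real halting oracle the same loop would work on any key length, using only linearly many queries instead of an exponential search.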

  • Edward Clint

    I side with Dennett and find Searle’s objections unconvincing. Searle argues that the notion is counterintuitive (how can little transistors accomplish “understanding”?), but the strangeness of an idea does not make it incorrect. Ultimately the meaning of the term “understand” has to be couched in information-processing terms, whether we’re talking about people or an artificial system. I see no reason to think, even theoretically, that there is something transistors made of wetware can do that digital ones could not.