Alan Turing on the possibility of machines smarter than humans

When MIRI folks talk about the idea of intelligence explosion, they often cite I. J. Good’s 1965 paper, “Speculations Concerning the First Ultraintelligent Machine.” But I recently discovered that Alan Turing beat Good to the punch in raising the possibility of machines becoming smarter than humans in every way:

I have tried to explain what are the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe that it is for any of the reasons that I have given, or any other rational reason, but simply because they do not like the idea. One can see many features which make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat. This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety.

It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set.

This is from a lecture broadcast on the BBC, titled “Can Digital Computers Think?”, not to be confused with Turing’s much more famous essay “Computing Machinery and Intelligence,” in which he asked “Can machines think?” The talk is not well known, but it was published in the anthology The Essential Turing, edited by Jack Copeland (pp. 482-486).

Turing didn’t delve into the issue of super-human machines developing even smarter machines, though, so Good may still get priority on that idea.

  • smrnda

    I had to give this one a read, since reading Turing’s “Computing Machinery and Intelligence” was why I got into AI. Glad to see the lesser-known talk is still out there. Doing AI applications has made me think that the word ‘intelligence’ is pretty vague. People have intelligence in a way machines don’t, in that we have intentionality, but machines are often better than us in highly specific domains. I’m less worried that they’ll take over, and more that they’ll put lots of people out of work and force us to make some major changes to our economic structure. There’s lots of mental work that nobody really likes to do: I can imagine computers scanning my relevant tax documents and taking the accountants out of the picture. Computers will eventually drive our cars for us as well.

    In terms of human vs. computer conflict, I don’t know whether we and the machines are really fighting for the same resources or have the same notion of pleasure/pain. Will computers *mind* having to do the tasks we give them, and why?

    Speaking of being humbled, solutions found by genetic algorithms are often better than consciously designed ones, which can be an odd experience…
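
    As a minimal sketch of the kind of genetic algorithm the comment mentions, here is a toy example on the classic “OneMax” problem (evolve a bit string toward all ones). The problem choice, parameter values, and function names are illustrative assumptions, not anything from the talk or the comment.

    ```python
    import random

    # Toy genetic algorithm for the "OneMax" problem: evolve a bit string
    # whose fitness is the number of 1-bits. All parameters are arbitrary
    # choices for the demo.
    GENOME_LENGTH = 32
    POPULATION_SIZE = 50
    MUTATION_RATE = 0.02
    GENERATIONS = 100

    def fitness(genome):
        # Fitness is simply the count of 1-bits; the optimum is GENOME_LENGTH.
        return sum(genome)

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

    def crossover(parent_a, parent_b):
        # Single-point crossover: splice the two parents at a random cut point.
        point = random.randint(1, GENOME_LENGTH - 1)
        return parent_a[:point] + parent_b[point:]

    def mutate(genome):
        # Flip each bit independently with probability MUTATION_RATE.
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in genome]

    def evolve():
        population = [random_genome() for _ in range(POPULATION_SIZE)]
        for generation in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == GENOME_LENGTH:
                break
            # Keep the fitter half as parents; refill the rest with offspring.
            parents = population[: POPULATION_SIZE // 2]
            offspring = [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))
            ]
            population = parents + offspring
        return population[0], generation

    if __name__ == "__main__":
        best, used = evolve()
        print(f"Best fitness {fitness(best)}/{GENOME_LENGTH} after {used} generations")
    ```

    Nothing in the code is “consciously designed” toward the all-ones answer; selection, crossover, and mutation find it on their own, which is the small-scale version of the experience described above.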


