When MIRI folks talk about the idea of an intelligence explosion, they often cite I. J. Good’s 1965 paper, “Speculations Concerning the First Ultraintelligent Machine.” But I recently discovered that Alan Turing beat Good to the punch in raising the possibility of machines becoming smarter than humans in every way:
I have tried to explain what are the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe that it is for any of the reasons that I have given, or any other rational reason, but simply because they do not like the idea. One can see many features which make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat. This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety.
It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set.
This is from a 1951 lecture broadcast on the BBC, titled “Can Digital Computers Think?” — not to be confused with “Computing Machinery and Intelligence,” the much more famous essay in which Turing asked, “Can machines think?” The talk is not well known, but it was published in the anthology The Essential Turing, edited by Jack Copeland (pp. 482–486).
Turing didn’t go on to consider super-human machines designing still smarter machines, though, so Good may still have priority on that idea.