How intelligible is intelligence?

A while back, I stumbled across a really good paper put out by the Machine Intelligence Research Institute titled “How Intelligible Is Intelligence?” By “intelligibility,” the paper’s authors mean “the extent to which efficient algorithms for general intelligence follow from simple general principles, as in physics, as opposed to being necessarily a conglomerate of special case solutions.”

This is important, because if all we need to make human-level AI is a relatively small number of key insights, the development of human-level AI might come seemingly out of nowhere. On the other hand, if it’s going to take a slow grind of R&D, we’re likely to get more of a warning. There are also potential implications for the impact of AI and what we can do to manage it. So how intelligible is intelligence?

The authors sketch a number of potentially relevant considerations here. On the theoretical side, there are results like the fact that a universal Turing machine can simulate any other Turing machine, and the existence of general-purpose learning algorithms, which seem to provide some weak evidence that intelligence is relatively intelligible.
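
To make the universality point a bit more concrete, here is a minimal sketch (in Python, with a made-up bit-flipping machine as the example; none of this comes from the MIRI paper) of a single simulator that can run any Turing machine handed to it as data:

```python
from collections import defaultdict

def run_turing_machine(transitions, start_state, accept_states, tape_input, max_steps=10_000):
    """Simulate any single-tape Turing machine given as a transition table.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right); the blank symbol is "_".
    """
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in accept_states:
            return state, tape
        key = (state, tape[head])
        if key not in transitions:
            return state, tape  # halt: no applicable rule
        state, tape[head], move = transitions[key]
        head += move
    raise RuntimeError("step limit exceeded")

# A toy machine that flips every bit of its input and then halts.
flipper = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
state, tape = run_turing_machine(flipper, "scan", {"done"}, "10110")
print(state, "".join(tape[i] for i in sorted(tape)).strip("_"))  # done 01001
```

The simulator itself never changes; all the variety lives in the transition tables you feed it, which is the sense in which one simple mechanism can stand in for any machine whatsoever.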

When they turn to the empirical evidence, though, the authors seem to greatly understate some evidence that human intelligence isn’t very intelligible at all. They write:

Human brains excel at ancestral tasks such as visual processing and navigation, far outperforming current AI programs in most tasks, but often struggle with evolutionarily novel challenges such as extended mental arithmetic. This pattern of strengths and weaknesses can be understood as a result of using developed competencies to emulate absent ones; e.g., mechanisms for processing visual imagery can be applied to manipulate complex abstract concepts through analogy and visual symbols (Pinker 2010). This suggests both that specific cognitive abilities evolve more easily than does general-purpose thinking skill, and, given the convergent evolution of complex learning in many lineages, that many different specific cognitive abilities can be used to approximate general intelligence.

Sure, we can use mental abilities that evolved for the African savannah to do an incredible range of things, but it’s hard to overstate how bad we are at most of them. I admit to being confused here, because the authors mention that fact in the first sentence of the paragraph above.

The issue gets discussed in the paper by Steven Pinker cited above (under the heading “Emergence of Science and Other Abstract Endeavors”), but maybe Pinker wasn’t vivid enough in that particular paper. Here is Pinker explaining why members of a particular culture have trouble doing things like counting five objects, from his book The Stuff of Thought:

If the Mundurukú had numbers for three, four, and five, why didn’t they use them exactly? The investigators put their finger on the problem: the Mundurukú don’t have a counting routine. It’s tempting to equate a use of the number five with the ability to count five things, but they are very different accomplishments. Counting is an algorithm, like long division or the use of logarithmic tables—in this case an algorithm for assessing the exact numerosity of a set of objects. It consists of reciting a memorized stretch of blank verse (“one, two, three, four, five, . . .”) while uniquely pairing each foot in the poem with an object in the spotlight of attention, without skipping an object or landing on one twice. Then, when no object remains unnoticed, you announce the last foot you arrived at in the poem as the numerosity of the set. This is just one of many possible algorithms for ascertaining numerosity. In some societies, people pair up the objects with parts of their body, and I know several computer programmers who count like this: “Zero, one, two, three, four. There are five.” Now, the counting algorithm we teach preschoolers, like the more complex mental arithmetic we teach school-age children, co-opts words in the language. But it is not part of the language, like subject-verb agreement, nor does it come for free with the language. (pp. 140-141)

Imagine that! The ability to count isn’t native to the human mind; it’s something we install in children through a hack that recruits their capacity for memorizing blank verse. Humans depend on this horribly inefficient routine for something as simple as counting, rather than having a mental module that would let us glance at a set of objects and instantly know how many there are, even while we carry mental adaptations as intricate as the capacity for language. That contrast drives home just how specialized human intelligence really is.
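
Pinker’s description really is an algorithm, and it translates into code almost line for line. Here is a rough sketch of my own (not anything from the book): pair each memorized number word with exactly one object, and announce the last word you reach.

```python
# The memorized "blank verse": an ordered list of number words.
NUMBER_WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight"]

def count_objects(objects):
    """Pinker's counting routine: pair each successive number word with one
    object, skipping none and landing on none twice, then announce the last
    word reached as the numerosity of the set."""
    if len(objects) > len(NUMBER_WORDS):
        # You can't count higher than your memorized poem goes (roughly the
        # situation of the Munduruku with exact numbers above five).
        raise ValueError("ran out of number words")
    last_word = "zero"
    for word, _obj in zip(NUMBER_WORDS, objects):
        last_word = word
    return last_word

print(count_objects(["apple", "pear", "plum", "fig", "date"]))  # -> five
```

Nothing in this routine uses a dedicated number sense; it just borrows sequence memory and one-to-one pairing, which is exactly Pinker’s point about co-opting existing machinery.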

Or maybe the authors of the paper know all that stuff, and the issue is how they’re thinking about “intelligence.” If you think the challenge of human-level AI is building something that can match humans at everything (or almost everything) we do (a definition used in this paper, for example), then it looks like AI is a matter of replicating a bunch of specialized modules.

But you might think that an AI could have as much “general intelligence” as a human being without matching human abilities in the areas we’re best at (like language). It’s unclear, though, what exactly that would mean and whether it would be very significant. Given all the things computers already do better than humans, why don’t they already count as having more general intelligence than us?

Having said all this on the side of “intelligence isn’t very intelligible,” let me mention one reason it might be: maybe with the right handful of insights, you could make a general-purpose learning algorithm that wouldn’t initially be as good as humans at, say, language or social interaction, but with an appropriate source of data would be able to quickly get up to speed. I kind of doubt that will happen, but it’s a possibility worth mentioning.

  • hf

    What does it mean to “kind of doubt that will happen,” and why do you think this?

    I think that if we don’t destroy technological civilization first, we’ll make a general program that will surpass humans in speech etc. (Though the appropriate source of data to make this happen “quickly” might be unrealistically large and reliable, depending on what you mean.) Given how badly we think, I can imagine just two and a half alternatives:

    1. Intelligence has principles we can never grasp, even with effectively unlimited time. This claim lost its main justification (concerning self-reference) when MIRI came out with the recent math results. To believe it, you would need to know something I don’t.

    2. Intelligence has no principles, and what we consider ‘skill’ is wholly arbitrary. This would seem to require randomness not just in the visible tools and tactics, but in somewhat higher-level instrumental goals of speech and interaction. Bayesian software (e.g., spam filters) can already match words to purposes if we figure out how to ask the question (a toy sketch of what I mean follows this list). It seems something would have to prevent both us and every AI from asking the question. I’m having trouble picturing a realistic world that could make alternative #2 true.

    3. We’re too dumb to apply or approximate the principles, even with effectively unlimited time. I almost left this one out, because the problems go beyond Bayesian thought as such. The success of science in general seems suspicious on this view. While Newton himself might be smart enough to get results without knowing how, it sure looks like his example and/or his “rules for natural philosophy” had an effect.
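
    Roughly the kind of thing I mean, as a toy sketch (made-up training phrases, nothing like a real filter):

    ```python
    import math
    from collections import Counter

    # Tiny made-up training set of (words, purpose) pairs.
    TRAINING = [
        ("buy cheap pills now", "spam"),
        ("limited offer buy now", "spam"),
        ("meeting notes attached", "ham"),
        ("lunch tomorrow with the team", "ham"),
    ]

    label_counts = Counter()
    word_counts = {"spam": Counter(), "ham": Counter()}
    for text, label in TRAINING:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for counts in word_counts.values() for w in counts}

    def classify(text):
        """Naive Bayes: pick the purpose that makes the observed words most probable."""
        best_label, best_score = None, float("-inf")
        for label, counts in word_counts.items():
            score = math.log(label_counts[label] / len(TRAINING))
            total = sum(counts.values())
            for word in text.split():
                # add-one smoothing so unseen words don't zero out the probability
                score += math.log((counts[word] + 1) / (total + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    print(classify("buy pills now"))           # spam
    print(classify("notes from the meeting"))  # ham
    ```

    The math is trivial; the hard part, as I said, is figuring out how to ask the question.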

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      The thing I kinda doubt will happen is the multi-part scenario that includes:

      1) Just a handful of insights
      2) Leading to a general-purpose learning algorithm
      3) That quickly learns to match humans in the things we’re best at

      I agree that as long as we don’t destroy civilization first, we’ll see computers surpass humans in just about everything (for some definitions of “computer” and “human,” anyway), but the path to that is less certain.

      That said, can you link me to the math results you’re referring to? I know MIRI came out with some of that recently, but I’m not sure which ones you’re referring to.

      • hf

        On reflection (rimshot) I go back and forth on the relevance of the paper and its follow-up. But let’s say we try to steel-man the usual blather about Gödel disproving human-level AI. We could point to the unsolved problems of self-modification and the fact that we only have an example of opaque intelligence at human level, not transparently-written intelligence that could read and improve on itself. You would have hopefully found this unpersuasive. (When and how would the impossibility manifest?) But until now we had no formal reply showing the possibility of self-modification in a certain sense.

        • http://patheos.com/blogs/hallq/ Chris Hallquist

          Just to make sure I understand correctly: you’re not even talking about whether AI in some sense is possible, but rather whether transparently-written AI is possible?

          • hf

            Yes, exactly. Artificial intelligence is trivially possible for some value of “artificial” (as in ‘insemination’).

  • Andrew G.

    rather than having a mental module that would allow us to just glance at a set of objects and instantly know how many there are

    We actually do have one of these (there’s a cool visual illusion that demonstrates it). But at least without training it isn’t precise, just giving a “more” or “fewer” signal; and while I’ve seen informal claims that the facility can be trained, I’ve not seen any proper research on it and neither have I tried it myself.

  • eric

    Imagine that! The ability to count isn’t native to the human mind, it’s something we add to children through a hack that recruits their capacity for memorizing blank verse.

    I don’t have to imagine it, I have a two-year-old. It is not much of a stretch to say that humans spend hours a day for several weeks or months just to learn how to count to ten. The only reason we think this sort of capability is natural is that we don’t remember going through the learning process.

    In fact, I can pretty much guarantee to anyone without direct experience of little kids that probably 90% of the things you think are natural or innate human capabilities aren’t; they are learned behaviors. Here’s an even more extreme case: adults burp kids until they figure out how to do it for themselves. Consider what this really indicates: even a variation to breathing that simple has to be learned.

