How intelligible is intelligence?

A while back, I stumbled across a really good paper put out by the Machine Intelligence Research Institute titled “How Intelligible Is Intelligence?” By “intelligibility,” the paper’s authors mean “the extent to which efficient algorithms for general intelligence follow from simple general principles, as in physics, as opposed to being necessarily a conglomerate of special case solutions.”

This is important, because if all we need to make human-level AI is a relatively small number of key insights, the development of human-level AI might come seemingly out of nowhere. On the other hand, if it’s going to take a slow grind of R&D, we’re likely to get more of a warning. There are also potential implications for the impact of AI and what we can do to manage it. So how intelligible is intelligence?

The authors sketch a number of potentially relevant considerations here. On the theoretical side, there are results like the fact that a universal Turing machine can simulate any other Turing machine, and the existence of general-purpose learning algorithms, which seem to provide some weak evidence that intelligence is relatively intelligible.

When they turn to the empirical evidence, though, the authors seem to greatly understate some evidence that human intelligence isn’t very intelligible at all. They write:

Human brains excel at ancestral tasks such as visual processing and navigation, far outperforming current AI programs in most tasks, but often struggle with evolutionarily novel challenges such as extended mental arithmetic. This pattern of strengths and weaknesses can be understood as a result of using developed competencies to emulate absent ones; e.g., mechanisms for processing visual imagery can be applied to manipulate complex abstract concepts through analogy and visual symbols (Pinker 2010). This suggests both that specific cognitive abilities evolve more easily than does general-purpose thinking skill, and, given the convergent evolution of complex learning in many lineages, that many different specific cognitive abilities can be used to approximate general intelligence.

Sure, we can use mental abilities that evolved for the African savannah to do an incredible range of things, but it’s hard to overstate how bad we are at most of them. I admit to being confused here, because the authors mention that fact in the first sentence of the paragraph above.

The issue gets discussed in the paper by Steven Pinker cited above (under the heading “Emergence of Science and Other Abstract Endeavors”), but maybe Pinker wasn’t vivid enough in that particular paper. Here is Pinker explaining why members of a particular culture have trouble doing things like counting five objects, from his book The Stuff of Thought:

If the Mundurukú had numbers for three, four, and five, why didn’t they use them exactly? The investigators put their finger on the problem: the Mundurukú don’t have a counting routine. It’s tempting to equate a use of the number five with the ability to count five things, but they are very different accomplishments. Counting is an algorithm, like long division or the use of logarithmic tables—in this case an algorithm for assessing the exact numerosity of a set of objects. It consists of reciting a memorized stretch of blank verse (“one, two, three, four, five, . . .”) while uniquely pairing each foot in the poem with an object in the spotlight of attention, without skipping an object or landing on one twice. Then, when no object remains unnoticed, you announce the last foot you arrived at in the poem as the numerosity of the set. This is just one of many possible algorithms for ascertaining numerosity. In some societies, people pair up the objects with parts of their body, and I know several computer programmers who count like this: “Zero, one, two, three, four. There are five.” Now, the counting algorithm we teach preschoolers, like the more complex mental arithmetic we teach school-age children, co-opts words in the language. But it is not part of the language, like subject-verb agreement, nor does it come for free with the language. (pp. 140-141)

Imagine that! The ability to count isn’t native to the human mind; it’s something we instill in children through a hack that recruits their capacity for memorizing blank verse. The fact that humans depend on such a horribly inefficient method for doing something as simple as counting, rather than having a mental module that would let us just glance at a set of objects and instantly know how many there are, while at the same time having mental adaptations as complicated as the capacity for language, drives home just how specialized human intelligence really is.
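
Since Pinker’s point is precisely that counting is an algorithm in the ordinary sense, it’s easy to make that literal. Here’s a minimal sketch in Python of the routine he describes: walk through a memorized list of number words, pair each word with exactly one object, and announce the last word you reach. The word list and the example objects are placeholders I’ve chosen for illustration; they’re not from the paper or the book.

```python
# The memorized "blank verse" of number words. Both this list and the
# example objects below are illustrative placeholders, not anything
# from Pinker or the MIRI paper.
NUMBER_WORDS = ["one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]

def count_objects(objects):
    """Pair each number word with exactly one object, skipping none and
    landing on none twice, then report the last word reached."""
    last_word = "zero"  # what we say if the set turns out to be empty
    for word, _obj in zip(NUMBER_WORDS, objects):
        last_word = word
    return last_word

print(count_objects(["pebble"] * 5))  # -> "five"
```

Written out this way, it’s obvious how much machinery the routine borrows (a memorized sequence, one-to-one pairing, keeping track of which objects you’ve already visited) for something a dedicated numerosity module could presumably do at a glance.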

Or maybe the authors of the paper know all that stuff, and the issue is how they’re thinking about “intelligence.” If you think the challenge of human-level AI is building something that can match humans at everything (or almost everything) we do (a definition used in this paper, for example), then it looks like AI is a matter of replicating a bunch of specialized modules.

But you might think that an AI could have as much “general intelligence” as a human being without matching human abilities in the areas we’re best at (like language). It’s unclear, though, what exactly that would mean and whether it would be very significant. Given all the things computers already do better than humans, why don’t they already count as having more general intelligence than us?

Having said all this on the side of “intelligence isn’t very intelligible,” let me mention one reason it might be: maybe with the right handful of insights, you could make a general-purpose learning algorithm that wouldn’t initially be as good as humans at, say, language or social interaction, but with an appropriate source of data would be able to quickly get up to speed. I kind of doubt that will happen, but it’s a possibility worth mentioning.
