Robots and Humans

August 12, 2011

Damon Horowitz:

We are more than the sum of our parts.

How does someone become a technologist?

In my case, it happened in college. I was an undergraduate at Columbia University, reading and discussing what were once unrepentantly called “the classics.” I really wanted to understand what the great thinkers thought about the great questions of life, the human condition, the whole metaphysical stew. And the problem was: We didn’t seem to be making much progress….

So in my sophomore year I learned to program a computer. And that was an intoxicating experience.

When you learn to program a computer, you acquire a superpower: the ability to make an inanimate object follow your command. If you have a vision, and you can articulate it in code, you can make it real, summon it forth on your machine. And once you’ve built a few small systems that do clever tasks—like recognizing handwriting, or summarizing a news article—then you think perhaps you could build a system that could do any task. That is, of course, the holy grail of artificial intelligence, “AI.”…

But there was a problem. Over time, it became increasingly hard to ignore the fact that the artificial intelligence systems I was building were not actually that intelligent.

Thus, about a decade ago, I quit my technology job to get a Ph.D. in philosophy. And that was one of the best decisions I ever made.

When I started graduate school, I didn’t have a clue exactly how the humanities investigated the subjects I was interested in. I was not aware that there existed distinct branches of analytic and continental philosophy, which took radically different approaches to exploring thought and language; or that there was a discipline of rhetoric, or hermeneutics, or literary theory, where thinkers explore different aspects of how we create meaning and make sense of our world.

As I learned about those things, I realized just how limited my technologist view of thought and language was. I learned how the quantifiable, individualistic, ahistorical—that is, computational—view I had of cognition failed to account for whole expanses of cognitive experience (including, say, most of Shakespeare)….

Most striking, I learned that there were historical precedents for exactly the sort of logical oversimplifications that characterized my AI work. Indeed, there were even precedents for my motivation in embarking on such work in the first place. I found those precedents in episodes ranging from ancient times—Plato’s fascination with math-like forms as a source of timeless truth—to the 20th century—the Logical Positivists and their quest to create unambiguous language to express sure foundations for all knowledge. They, too, had an uncritical notion of progress; and they, too, struggled in their attempts to formally quantify human concepts that I now see as inextricably bound up with human concerns and practices.

In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.

In other words: I became a humanist.

