Hubert L. Dreyfus Interview: Conversations with History; Institute of International Studies, UC Berkeley

Meaning, Relevance, and the Limits of Technology: Conversations with Hubert L. Dreyfus, Professor of Philosophy, UCB; November 2, 2005, by Harry Kreisler


Artificial Intelligence

Now let's go pick up that first book that you wrote, which I will show our audience, which is called What Computers Still Can't Do.

Yeah, but wait, stop. Can I interrupt? Because the book was called What Computers Can't Do. There's a kind of joke in this third edition. It started out in '72 as What Computers Can't Do, but in '92 I put in the "still."

Okay. And that then helps us relate it to its recent mate, so to speak, which is a book called On the Internet, which I will now show our audience. One could say that the thrust of your thinking in looking at cybernetics was what the first book was about, and then the book on the Internet focuses on the World Wide Web.

The theme running through the other side, so to speak, the people that you're opposing, is the notion that machines or technology manifested in forms such as the World Wide Web could replace humans, basically, that we are heading toward a world in which there will be the HALs of Stanley Kubrick's movie 2001. You dismiss that, and you're drawing on all of these insights from philosophy that you just talked about.

So, for example, in critiquing things like telepresence, you're trying to describe what a machine or a technology cannot do, and it is this coping with reality that you've just described. You write,

Two human beings conversing face to face depend on a subtle combination of eye movements, head motion, gesture and posture and so interact in a much richer way than most roboticists realize ... studies suggest that a holistic sense of embodied interaction may well be crucial to everyday human encounters, and that this intercorporeality, as Merleau-Ponty calls it, cannot be captured by adding together 3D images, stereo sound, remote control, and so forth.

So, it's really the same theme that you're confronting in both of these books. Is that fair?

Sort of. The way we have come at it, I realize something that I hadn't realized before: that What Computers Can't Do is a pretty strictly Heideggerian book, and that means it's mainly criticizing what people call mental representations, the idea that people must have a model of the world in their mind in order to act intelligently, and that starts with Descartes. Then the computer people took it over, and now they talk about symbolic representations in the computer, and they say -- Herbert Simon is the one who said it -- that we are physical symbol systems, people and computers both, because they each use representations and rules to make inferences about what's going on in the world and what to do about it. And that's what struck me as wrong, from a Heideggerian perspective, where we respond to the unique situation and we don't use rules.

Is there a common theme in these books that you've written on the limits of technology that requires us to bring in Merleau-Ponty at this point in the discussion again?

I hadn't thought about it before, but the way the discussion has gone there are really just two components. When I was at MIT teaching Heidegger, I was interested in the fact that the MIT AI people said that we had mental representations of the world, that in our mind there had to be a whole model of the world on the basis of which we could then make inferences and act. The computer people said, "We've just learned how to do that. We have symbolic representations in our computers and inference rules that enable the computers to plan and to act."

Now, there are several things to say about that. Mostly, I thought that this just wasn't going to work, because Heidegger, and Wittgenstein too, had shown that the whole Cartesian mental-representation way of thinking about the mind's relation to the world is wrong. Merleau-Ponty says we are an open head turned toward the world. We are directly in a dynamic interaction with the world and other people; we don't have the intermediary of a representation in the mind. And so, I predicted that AI was going to fail.

AI being artificial intelligence.

Right. Artificial intelligence was going to fail, and it now pretty well has failed. I started at RAND in 1965 with a paper called "Alchemy and Artificial Intelligence," which is the basis of the book. Oh, I forgot an important piece. Herbert Simon said -- you asked are we machines or not, are we just like computers? Well, computers and human beings are simply "physical symbol systems."

That's what Simon said?

Yes. In our brain the neurons represent the world and in the computer, the transistors, the chips represent the world. Simon said in '65 that in twenty years we'd have computers that would do everything that people could do. And I said, "They're not going to be doing anything that people can do." And I won. Symbolic AI is out, nobody does symbolic AI, or only one or two people. So, Heidegger won that round.

But now here's where Merleau-Ponty comes in. The interesting feature of Being and Time is that Heidegger destroys the mental representation story and describes our interaction with the world without ever mentioning the body -- he mentions it once and says, "The body is an interesting problem but we're not going to deal with it here." Merleau-Ponty filled that gap. Merleau-Ponty only talks about the body. If you ask, "What takes the place of mental representation?" it's body-sets to cope with things and to move toward the optimal grip. Whereas Heidegger's critique of mental representation is the way to bring down the claim by the artificial intelligence people that they understand these issues better than we philosophers. There's a little irony here; when I started reading their stuff I realized that they had taken over philosophy.

Who had taken over philosophy?

The people in the AI lab, with their "mental representations," had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. And far from teaching us how we should think about the mind, AI researchers had taken over what we had just recently learned in philosophy was the wrong way to think about it. The irony is that the year that AI (artificial intelligence) was named by John McCarthy was the very year that Wittgenstein's Philosophical Investigations came out against mental representations. (Heidegger had already done so in 1927 with Being and Time.) So, the AI researchers had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like me, that it was a hopeless research program, but they took Cartesian philosophy and turned it into a research program. Anybody who knew enough recent philosophy could've predicted AI was going to fail. But nobody else paid any attention. That's why I got this prize.

You write -- I think it's in the Internet book -- that "in cyberspace ... without our embodied ability to grasp meaning, relevance slips through our non-existent fingers." And you go on to say, "The world is a field of significance organized by and for beings like us with our bodies, desires, interests and purposes."

Right. That's where Merleau-Ponty comes in. None of that would be said by Heidegger. Heidegger was just interested in the way we could disclose the world without mental representation. But Merleau-Ponty sees that there isn't anything mental about it. Our body and its skills for dealing with things and getting an optimal grip on things are what we need to understand, and then it becomes clear that computers just haven't got it. They haven't got bodies and they haven't got skills.

What you just read is important. The world is organized by embodied beings like us to be coped with by beings like us. The computer would be totally lost in our world. It would have to have in it a model of the world and a model of the body, which AI researchers have tried, but it's certainly hopeless. Without that, the world is just utterly un-graspable by computers.


© Copyright 2005, Regents of the University of California