Jaron Lanier Interview: Conversations with History; Institute of International Studies, UC Berkeley

Culture and Technology: Conversation with Jaron Lanier, computer scientist and artist, October 3, 2005, by Harry Kreisler

Page 4 of 6

Philosophical Reflections

What in your background leads you to be such a humanist in the context of your success as a technologist?

That's an interesting question. It might be that I flunked out of art school when I was seventeen, so that I never really got back on the academic track. I can imagine that, had I not done that, I would have earned my Ph.D., and instead of just being a kid in Marvin Minsky's basement (he's a prominent proponent of these types of ideas, based at MIT) and being mentored by him, I would formally have had him as an advisor and wouldn't have been able to be such a thorn in his side. I adore him but I just disagree with him. And similarly for other figures. I think maybe it made me more independent.

I'm not sure about that. I know that from a very young age -- when I was first around this body of thought, at sixteen or seventeen -- I immediately felt that it was wrong. It bothered me immensely on many levels. It seemed as though it created an absurdity where we were all trying to build this machine for its own sake, but the only measure of that machine was us, so we ended up doing nothing; we ended up just launching one absurdity on top of another. I felt it was a completely ungrounded program.

This group of technologists who embraced these ideas -- you coined the term "cybernetic totalists." Now interestingly enough, for somebody who doesn't study this, in reading some of your writing, you were at some point talking about communism and you said, "Communism did not acknowledge the existence of human experience beyond the scope of its own ideas." And then you go on to suggest that there was a vanity about it. So, this is parallel to what you're observing among your adversaries in the battle of ideas. Right?

Right. This is common to all totalisms. All absolutists are vain, and that's essentially what the game is. "I'm important, I'm the master of reality, my ideas are of such importance that they transcend even death." This is the oldest game in the book, and as long as the people who believe that are not too powerful, it can be charming. However, the nature of the power of technologists is a little different from previous forms of power.

Some of these figures, like a Marvin Minsky or a Ray Kurzweil, or any number of others, are unlikely to run for political office. They're unlikely to become a figure like Oprah, who is explicitly a cultural leader. But what they do is create the tools through which all of us organize our lives, like our computers and our various gadgets, and those tools contain a structure that embeds their ideas. Through that indirect route they have enormous influence, and they also have enormous influence in the academy. I believe they've had an influence on everything from medicine to economics that in some cases has been helpful, but in other cases has been confusing.

In your philosophical writings you are very sensitive to power, to issues of justice and equity. In one piece that I read on infrastructure, you talk about [software] architecture as politics, which is the idea that you just conveyed. Let me bring up a point you made earlier: that legacy in software sets us down a road from which we can't retreat. You use the example of "files," which at one time was a very good idea but, as it became embedded in the tradition, has created something that we have to work around.

Traditionally, the way one person influences another is through talk. You meet in the agora and you debate and you speak, or you write books. The way you can influence people in the computer age is different. For instance, if the architecture of the Internet is designed in such a way that a government agency has windows onto what everybody is saying, but the people do not have windows onto each other, that embeds a certain kind of power structure that isn't just an interpretive one, as it is with rules handed down or policy directives from a politburo; it becomes actually embedded in the way people connect. It actually becomes more mandatory.

Thus far, we've been pretty lucky with what's been laid down, but it's crucially important that the things that are laid down are well thought out. In terms of liberties we're actually doing fairly well, although there are perils. What concerns me more is the new humanistic concern of the definition of the human within the context of the information system.

An example I talk about a lot is in Microsoft Word, which is the almost universal means that people use to write -- what could be more important than that? There's an artificial intelligence design component in which the program tries to foresee what you want to do. It corrects your spelling and capitalization, and such things. But a great example of where it fails badly is when it believes you are starting an indented list. It'll throw you into a mode that you can never get out of, and you just essentially have to go along with it. So, you have this very strange thing where you can either bend over backwards to learn what cues it's looking for, so that you can create the illusion that it's smart, that it knows you, or you can capitulate and do what it wants you to.

The point is that you're forced to accept the computer as a creative partner and to delegate to yourself a computer-like role instead of a human role. That process and the fact that people are willing to accept it -- in this case it's not a big deal, it's a very minor point, but as a precedent of what might come it's deeply disturbing to me. I feel it's very, very important to point out this notion of computer/human equivalence and to try to oppose it at every turn.

The real danger here is that this might lead to an effort to make people [become] like computers, which has the consequences that you're talking [about].

Well, from the point of view of the people doing it, they don't believe that. They believe they're trying to make computers like people, but the problem is that ultimately there's no difference; it's a matter of perspective. The two activities are absolutely equivalent, you can't distinguish them, and that's the problem.

Is this kind of thinking behind your very great concern about access and the inequalities that will be created? What is access about when you're talking about access to the world of all this technology?

Here's the dilemma we face. On the one hand, there is truth to the neoconservative economists' claim that the "rising tide raises all boats." It's true that there has been a noticeable decrease in absolute poverty, and so forth. On the other hand, it's also true that if the wealth of those at the top of the economic spectrum grows exponentially, and that exponent is steeper than the one at the bottom, the gap between them widens. Even if the bottom is rising somewhat, the gap is widening at an even greater rate, and so you start to have a separation of humanity.
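To make the arithmetic behind that claim concrete, here is a minimal worked sketch; the starting values and growth rates below are invented for illustration and do not come from the interview:

    W_top(t) = W_top(0) * e^(a*t),   W_bot(t) = W_bot(0) * e^(b*t),   with a > b > 0

    Both quantities rise, yet the gap G(t) = W_top(t) - W_bot(t) grows without bound.
    Example: equal starting wealth of 1, with a = 0.07 and b = 0.02. After 50 years the
    bottom has grown to e^1.0 ≈ 2.7 (the rising tide), while the gap has grown from 0 to
    e^3.5 - e^1.0 ≈ 33.1 - 2.7 ≈ 30.4, and it keeps accelerating.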

Here we go back to another one of the early pieces of speculative fiction from over a century ago, H. G. Wells's The Time Machine, in which he saw the human species breaking in two over this very issue. I think his concern was absolutely valid. With this widening gap taking place at the same time that we have an enormous expansion of medical technology, and so forth, I'm profoundly concerned that the species could break into two, with one part having designer genes and living longer, and this and that. This question of information access is not life or death right now, but it's the precedent question that will set the terms of what medicine and health are like, say, twenty or thirty years from now.

You say in one of your writings that your goal is to find the mysteriousness in each other, and that we can do that if we treat computers as conduits between imaginations rather than as things that are real in their own right, and then new art will be born.

Yeah. [laughs] I believe that. I'm a complete idealist.

We're going to talk about your music in a moment, but one other point that emerged in your philosophical writings is this notion of a circle of empathy. In fact, that was a point that you touched on in your most recent blog. The more we imagine ourselves becoming machines, the more we risk losing our humanity. You talk about defining a circle of empathy which seems to be a guide for understanding our humanness. The people you're arguing with, the technologists, want to put the computer inside that circle. Talk a little about that.

By the way, so far as I know, I've been using the term and the idea of the circle of empathy for longer, but it's also used in exactly the same way by the animal-rights activist and philosopher at Princeton, Peter Singer. I don't know which of us has priority on it, but at any rate, he uses it in the same way.

The question is, if you draw a circle around yourself and you say things within this circle you empathize with, things outside you don't, what's in the circle? Is a fetus in the circle so that you would prohibit abortion? Is an animal in the circle? Is a computer in the circle? A great deal of moral struggle comes down to what belongs in the circle. And there's a pragmatic sensibility to this because you can't extend the circle to infinity or you become incompetent. This is the tragedy of life. This is the tragedy that every idealistic kid has to face.

The problem with putting a computer in the circle is that in empathizing with it you make yourself like it. And no matter how big the computer is, it's not as big as the wilds of reality. Reality is a mysterious sea that you make contact with tentatively, as either an artist or a scientist. You're touching this mystery. If you're looking into the computer, you're looking into a reduced mystery, a maze of ideas that have been represented; but reality itself is not represented -- it's larger. And so you confine yourself to this smaller thing.

The way out of that is, instead of treating this little, smaller world as having value in its own right, to treat it as a conduit, a filter between people that allows people to see and explore each other in new ways, so that you rediscover the fundamental mysteriousness of reality within that other person on the other side. That is the way out. That's the way to have meaning and the benefits of digital technology at the same time.

Next page: Music

© Copyright 2006, Regents of the University of California