Guest Editorial
Origins & Design 18:2

Rethinking Deep Blue:
Why a Computer Can't Reproduce a Mind



Erik Larson

The recent hysteria over the defeat of world chess champion Garry Kasparov by the IBM computer Deep Blue has provided fresh fuel for the debate over whether computers can be intelligent and, yes, even exhibit the other qualities of mind -- consciousness, sensation, emotion and the like. Researchers in artificial intelligence (AI) are no doubt pointing to the victory as a crucial step in what they already see as inevitable -- that, there being no essential difference between mind and machine, machines are, and will continue to become, more mind-like. The rest of us, less schooled in the technicalities of computer programming, are no doubt confused about the meaning of Deep Blue's victory and what it says about our humanity. We have long believed (and for good reason) in the uniqueness of our minds and in their qualitative distinctness from purely material things such as computers. Has Deep Blue threatened these beliefs? What, in light of Deep Blue's victory, should the rational person believe? Now, perhaps more than ever, is the time to re-examine the idea that computers can simulate our own minds.

No one can beat Deep Blue at chess. Garry Kasparov could not beat Deep Blue, and Kasparov is as good as any chess player has ever been and perhaps ever will be. Kasparov is the Michael Jordan of chess. Deep Blue is better (rematches notwithstanding). Deep Blue not only managed the impossible -- intimidating Kasparov at his own game -- but left him aghast at the apparent cunning and creativity of the manner in which it played. After his defeat in Game Two, Kasparov was so unnerved by the strategic maneuvering of Deep Blue that he insinuated that IBM might have tinkered with Blue's program during the match. Kasparov was almost certainly wrong about this, but he was right to be concerned -- Deep Blue is beginning to outdistance human chess playing in almost every respect. Deep Blue -- calculating 200 million positions per second -- has become brilliant, strategic, and, yes, essentially unbeatable.

Really, though, is anyone surprised that the folks at IBM could develop an invincible chess machine? What is incredible about the Kasparov/Deep Blue match is not that a computer, evaluating 200 million positions per second, could beat a man, evaluating maybe three per second, but that Kasparov could give Deep Blue a match. Kasparov does not evaluate millions of positions before making a move -- Deep Blue does. How can Kasparov compete? That is the question left unasked in the hysteria over Deep Blue's victory. For those of us asking the really hard questions about the human mind, the debate begins here.

The Turing Test

One of the claims made about Deep Blue's performance is that it has passed the "Turing Test." Alan Turing, a British mathematician, proposed in 1950 that any machine whose output is indistinguishable from a human's could reasonably be said to be intelligent. That is, there are no grounds for denying intelligence to a machine if one cannot distinguish its output from a human's in the same situation. Thus, Deep Blue has passed the Turing Test in chess situations, because an independent observer could not tell which moves were Kasparov's and which were Deep Blue's (or, to put it another way, whether Kasparov was playing a great human or a computer). The proviso here is that Deep Blue can pass the Turing Test only while playing chess. Outside of this restricted arena, it would be extremely easy to distinguish Deep Blue from a human interlocutor (suppose that you type questions to Deep Blue and to a human and then read their typed responses from a monitor -- any normal human would make appropriate replies, while Deep Blue would sit mindlessly, waiting for a chess position to evaluate). It is therefore obvious that Deep Blue cannot pass the Turing Test when the test is correctly construed as a general and unrestricted test of intelligence. Deep Blue is not even close; it cannot even answer questions about chess. It can only play. That's it.
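The logic of the restricted test can be put in a few lines. Here is a minimal sketch (in Python; the player and judge functions are hypothetical placeholders, not anything IBM built) in which a judge repeatedly tries to spot the machine from paired responses. A hit rate near chance means the machine is indistinguishable in that domain -- and for Deep Blue the domain, fixed here by make_prompt, can only ever be a chess position.

    import random

    # A restricted "imitation game": over many trials, a judge sees one
    # response from a human and one from a machine, in random order, and
    # guesses which is the machine. All four functions are placeholders.

    def machine_spotting_rate(trials, make_prompt, human, machine, judge):
        correct = 0
        for _ in range(trials):
            prompt = make_prompt()
            pair = [("human", human(prompt)), ("machine", machine(prompt))]
            random.shuffle(pair)
            guess = judge(prompt, pair[0][1], pair[1][1])  # returns 0 or 1
            if pair[guess][0] == "machine":
                correct += 1
        # Near 0.5: indistinguishable in this domain (the test is "passed").
        # Near 1.0: the machine is exposed.
        return correct / trials

Swap in a prompt generator that asks ordinary questions instead of presenting chess positions, and the judge's hit rate leaps toward 1.0 -- which is the whole point of construing the test generally.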

Granted, one cannot tell a human's game from Deep Blue's. In that sense, it passed. But then, by that standard, a medical program that takes lists of symptoms and medications and makes correct prescriptions for patients has passed the Turing Test in the domain of medicine as well. Such so-called expert systems are commonplace in many technical fields, yet no one is pronouncing them intelligent. That's because they aren't. In situations that require speed and efficiency in performing operations on discrete, finite lists of information, computers perform splendidly. In the real world, where the information cannot be given in discrete, finite lists (without being meaningless or effectively infinite in length), computers are imbecilic. Most six-year-olds can easily outperform the best computers in basic conversation. In the excitement over Deep Blue's victory, people have failed to see that the evidence of Kasparov's mental superiority is found not in the chess match but outside that artificial domain. Where it really counts, Deep Blue doesn't have a chance. The question is: Can Deep Blue's (and other supercomputers') performance in an artificially restricted domain be expanded into the sort of general intelligence characteristic of humans relying on experience and intuition? Bridging this gap -- the gap from chess to human thought -- may take far longer and prove far more difficult than the fervor over Deep Blue suggests.
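To see how little "intelligence" such a system needs, consider a toy sketch (in Python; the rules and drug names are invented purely for illustration, not medical advice). The entire system is a fixed lookup over discrete, finite lists: inside those lists it is flawless, and one step outside them it is mute.

    # A toy "expert system": a fixed mapping from symptom sets to actions.
    # Every rule must be anticipated and written down in advance.

    RULES = {
        frozenset({"fever", "sore throat"}): "prescribe antibiotic A",
        frozenset({"headache", "nausea"}): "prescribe analgesic B",
    }

    def prescribe(symptoms):
        # In-domain: fast, correct, seemingly "expert".
        # Out-of-domain: the system has literally nothing to say.
        return RULES.get(frozenset(symptoms), "no rule applies")

    print(prescribe(["fever", "sore throat"]))  # -> prescribe antibiotic A
    print(prescribe(["fever", "dizziness"]))    # -> no rule applies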

The Frame Problem

Philosopher and AI researcher Daniel C. Dennett describes the Frame Problem as the problem of getting a computer to look before it leaps -- or, better, to think before it leaps. Ask a computer to perform a task outside of a clearly defined domain (like chess), and one will soon be stopped cold by the Frame Problem. Dennett tells an amusing anecdote about a robot, R1D1, whose task is to recover its spare battery from a room where a time bomb is set to detonate soon. R1D1, designed by experts in AI to be an intelligent system, always considers the implications of its actions. This is a great improvement over R1, who, unfortunately, did not consider all the implications of pulling out the battery with the time bomb strapped to it, and met an untimely demise. R1D1 is much improved -- a crowning achievement for AI. So (as the story goes) R1D1 must rescue its battery from the time bomb, and, like R1, hits on the command PULLOUT (WAGON,ROOM). Only this time R1D1's superior program begins to consider the implications of such an action. Dennett tells the story:

It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon -- when the bomb exploded.

R1D1, like all computers, is a victim of the Frame Problem. The Frame Problem arises because most tasks deemed intelligent require intuitive, contextually based knowledge of a situation that cannot be pre-programmed, because the possible scenarios arising from it are effectively infinite. A computer programmed to, say, order a sandwich at a restaurant the way a person would performs flawlessly until the waiter asks a question that presupposes knowledge of things on a broader scale. Suppose the waiter asks the program designed to order food, "How is the weather today?" Since the computer relies on a specified list of responses, any program lacking explicit responses about the weather will fail. (The computer, of course, could just respond "fine," but it would have to know to say "fine" and not "I don't know.") Any other interactions that are not straightforward consequences of ordering food (or playing chess, analyzing stock market trends, and so on) will fail as well. Because computers are programmed, they cannot fill in what is not explicitly given in their programs. Yet it is impossible to pre-program all the information that might become necessary in intelligent interactions. Managing even brief conversational exchanges requires an effectively infinite store of facts which, even if they could all be stored, could not be consulted in real time.
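A minimal sketch (in Python; the script entries are invented) shows why such a program is brittle by construction: anything the programmer did not anticipate is simply absent from its table of responses, and there is no understanding to fall back on.

    # A scripted "conversation": canned responses keyed on expected input.

    SCRIPT = {
        "what would you like?": "A turkey sandwich, please.",
        "anything to drink?": "Just water, thanks.",
    }

    def reply(utterance):
        # If the waiter's question is not in the script, there is no answer.
        return SCRIPT.get(utterance.lower().strip())

    print(reply("What would you like?"))       # -> A turkey sandwich, please.
    print(reply("How is the weather today?"))  # -> None: outside the script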

In the case of R1D1, a further difficulty involving the Frame Problem arises. It is equally impossible to pre-program all the information about what is not relevant to completing an intelligent task. There is an effectively infinite list of implications connected to any action, and only a few are relevant. The ratio of the revolutions of the wheels to their number on the wagon is not relevant to rescuing R1D1's battery. Nor is the paint on the walls. Why not program a computer to just ignore irrelevant implications? This sounds fine until one realizes that, to ignore an implication, the machine must first derive it and test it for relevance -- and a computer busy deriving and dismissing endless irrelevant implications is not likely to solve a problem in real time, the time it takes to get the battery before the bomb explodes. Of course, by simply programming R1D1 to, say, locate and remove the bomb from the battery, one can get the desired result. What is left, however, is not intelligent behavior but a programmed list of instructions for a mindless machine.
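A crude sketch makes the point (in Python; the deadline, the action name, and the implication generator are stand-ins for illustration). Since the stream of derivable implications never ends, and each one costs time to derive and dismiss, the clock always wins.

    import itertools
    import time

    def implications_of(action):
        # Stand-in for a theorem prover: an endless stream of deducible
        # consequences, almost all of them irrelevant to the goal.
        for n in itertools.count(1):
            yield f"{action}: derived consequence #{n}"

    def act_before(deadline, action, is_relevant):
        examined = 0
        for implication in implications_of(action):
            if time.time() > deadline:
                return f"BOOM -- bomb went off; {examined} implications examined"
            examined += 1
            if is_relevant(implication):
                pass  # reason here about the rare relevant implication
            # Irrelevant ones are "ignored" -- but deriving and testing
            # each one has already cost time, and the stream is endless.

    print(act_before(time.time() + 0.1,        # the bomb's timer
                     "PULLOUT(WAGON, ROOM)",
                     lambda imp: False))       # toy relevance test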

And that is exactly what Deep Blue is. Deep Blue avoids the Frame Problem because chess is a closed logical system: the rules specify every legal move in advance, so the machine can simply generate and evaluate millions of positions per second by fixed, pre-programmed procedures. Mindlessly computing positions, Deep Blue plays magnificent chess. But life, unlike chess, is not a closed logical system. Intelligent behavior in the broader context of life requires experience and judgment -- an ability to learn as one goes. Deep Blue does not need this because, in the domain of chess, all that is required can be specified beforehand, deterministically. Kasparov, of course, brings human qualities to chess. Kasparov uses his mind. But he is at a grave disadvantage, because he plays a game whose essence is logical steps and not intuitive feel. Sooner or later, a machine that can evaluate millions and millions of these logical steps will surpass even Kasparov's great feel for the game -- much as a calculator surpasses a human arithmetician. Deep Blue wins. Big deal. Deep Blue is a mindless calculator, and for this reason it is largely irrelevant to solving the problem of real intelligence.
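At bottom, that mindless calculation is game-tree search. The sketch below (in Python, over an abstract game; Deep Blue's real program added specialized hardware and an elaborately hand-tuned evaluation function) shows the core recipe: generate every legal move, recurse, and take the best score. Everything the machine "knows" lives in two pre-specified rules -- a move generator and an evaluation function.

    # Minimax search: the skeleton of brute-force game playing.

    def minimax(position, depth, maximizing, moves, apply_move, evaluate):
        """Best achievable score, looking `depth` moves ahead."""
        legal = moves(position)
        if depth == 0 or not legal:
            return evaluate(position)  # fixed scoring rule -- no judgment
        scores = [
            minimax(apply_move(position, m), depth - 1, not maximizing,
                    moves, apply_move, evaluate)
            for m in legal
        ]
        return max(scores) if maximizing else min(scores)

Speed, not understanding, does the rest: run a search of this shape over 200 million positions per second and even Kasparov's intuitive feel is eventually outcalculated.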

Consciousness and the Limits of AI

With the almost universal acceptance of materialism in the cognitive sciences (AI, neuroscience, and other cognate fields), there seem to be no grounds for believing that the mind, although intuitive and intelligent, could be anything but the product of a material thing. The idea that the mind is a computer is particularly compelling. After all, what else could the mind be, if not a biological calculator, computing complex yet ultimately discrete and tangible operations? Won't scientists eventually decipher how the neurons in our brains fire to create the programs that are our minds?

But now we have reached an impasse. AI scientists know -- though they are strangely loath to admit it -- that they are dealing in a sort of alchemy, because not one of them knows how a mind with consciousness -- the particular subjective feel of emotions and sensations -- could arise from blind computations. It is one thing to debate whether computers could ever pass the Turing Test -- whether they could simulate a mind by displaying the outward signs of general intelligence. But it is quite another issue whether computers could actually reproduce a mind -- have real, subjective experiences within. What, after all, do you program into a computer to generate anger or taste or the experience of, say, the color red? What sort of instructions do you give a computer that lacks such experience in order to produce it? AI and cognitive science are hot new fields because the challenges they present are deep and theoretically mind-boggling. For intelligence, we have the Frame Problem. For mind itself, we have the Consciousness Problem. Can computers be programmed to actually come alive? The perplexities of the Frame Problem pale by comparison. Asking someone to program a computer to be conscious seems rather like asking someone to explain the atmospheric conditions of Mars by reference to the financial markets in capitalist countries. No common ground exists. Feeling is just not the sort of thing that a discrete list of rules can explain. Recognition of this has prompted a growing number of researchers to begin speculating not only about the limits of AI but about the limits of material science itself. How, exactly, could a conscious mind arise out of material stuff -- whether a computer, a brain, or anything else?

The traditional idea is that stuff and conscious minds are different things -- different substances, in the vernacular of philosophy. That idea dates back to Plato and is rooted in our Judeo-Christian heritage and our conviction that persons have souls which can survive the death of their bodies. Twentieth-century science has almost universally rejected this view, largely on the grounds that no materialistic account of a soul can be given. The idea, its critics say, is tantamount to believing in magic, because no conceivable explanation within science can account for immaterial substances.

One could ask, however, what is less magical about the idea that a computer's specified list of rules could suddenly begin feeling and perceiving? AI researchers who recognize that their programs cannot possibly explain such subjective phenomena nonetheless suggest that someday some super-fast and complex computer might just come alive anyway. There is of course no logical reason for rejecting this claim. Yet, in accepting it, we might just as well have accepted the traditional view: minds are immaterial substances which somehow, though we know not how, are connected with our material brains and are the basis for our subjective lives, experiences, moral judgments, and intelligence. That this view is so unpopular speaks less to its scientific merits (what, after all, are the scientific merits of a computer just coming alive?) than to its connection with traditional beliefs and ideas considered outdated.

Deep Blue has given us a lot to think about. Yet it is not its prowess at chess that illuminates the debate over what our minds are. The day will come when Deep Blue beats Kasparov (or his successor) in all six games. When that happens, a new wave of technological euphoria (or dread) will sweep over us. Or perhaps we will be wiser and more critical, and realize that the hard questions -- the ones worth asking -- lie outside the chess matches between man and machine. They lie in the deep and perplexing tangles of the Frame Problem -- in the task of designing computers with broad, general intelligence. They lie in the mind-boggling questions of conscious experience -- how we have it, and why. In these great philosophical and theological questions lie the real answers we seek. We may never find them. Yet, regardless, we can be sure they aren't answered -- aren't even touched -- by Kasparov's loss to Deep Blue.

Copyright © 1997 Erik Larson. All rights reserved. International copyright secured.