A HOMILY ON A SIMILE: ARTIFICIAL INTELLIGENCE AND THE HUMAN MIND

Daniel N. Robinson
Georgetown University

The Editor and Directors of TRUTH deserve thanks for recording the spirited dialogue staged at Yale under the rubric, "Artificial Intelligence and the Human Mind". Celebrated figures from the domains of science, philosophy and engineering have attempted to set limits or to erase the alleged limits on the extent to which human mental prowess might be simulated or duplicated artificially. In this the contributors have joined that Long Debate on the nature of human nature and, in light of the burden, are not to be faulted if they have advanced the debate only negligibly. In these few introductory remarks, however, I will pay less attention to their specific contributions than to the need for a coherent framework within which to evaluate essays and commentaries of the sort appearing in these pages. The assignment I have accepted creates a primary duty to the reader, and only the duty of fairness to the contributors.

Perhaps it is best to begin by considering the otherwise eccentric claim that human mentation, or an indistinguishably good simulation of it, can be achieved by entirely physical means; in this case, by means of current or readily conceivable computational devices and associated programs. The arguments adduced in support of this claim are various, are grounded in quite different assumptions and are not easily merged. I dare to confine the welter of them to the following genres:

  1. Ontological Monism: How many kinds of "stuff" occupy the universe? At least since the time of Democritus and the ancient school of Greek Atomism, arguments have been advanced to the effect that only one kind of entity exists, and that kind is a physical kind. On this assumption, all seemingly non-physical aspects of reality are ascribable to superstition or ignorance. Today's ontological monist is likely to insist that, although this was merely a metaphysical thesis in the past, it is now a requirement of science, thanks to the Conservation Laws and to Thermodynamics.
  2. Nomological Monism: Quite apart from the question of how many kinds of "stuff" abound in the cosmos, only one set of laws regulates natural phenomena, including that part of nature called mental life. Thus, even if our mental life is granted ontological standing, it is to be understood as no more than and no different from the "life" of any complex information-processing system. Accordingly, the realm of the mental does not bring about the need for a non-scientific language or a shrugging concession to the mind's anarchy. Rather, it requires some number of bridging laws, not unlike those that are needed to explain how a file is compiled in a computer.
  3. Turing's Test and Leibniz's Law: There is no basis on which to rest the claim that two realms are different if, in fact, the events occurring in what is alleged to be one of the realms are indistinguishable from those occurring in what is alleged to be the other and different realm. In what may be taken to be a species of Leibniz's Law of the Identity of Indiscernibles, A. M. Turing proposed his now famous test: Problem-solving behavior (and, by extension, any public manifestation of internal processes) productive of indistinguishable outcomes betokens the functional identity of those systems that produce it. Thus, a physical system so programmed as to yield rulings practically identical to those reached by experienced jurists is "just", or may be said to have "the concept of justice" in the only sense that makes any sense when applied to persons.
  4. Epiphenomenalism (Updated): Granting that there really are mental events, it is now clear to a moral certainty that these events are causally brought about by occurrences in the body and specifically in the central nervous system. How the cause produces the effect remains the quaestio vexata, but that it does has been demonstrated with monotonous repetition in the laboratory and in the neurology clinic. But occurrences in the brain are physical (electrochemical) occurrences, and they arise within a known anatomical context. The task of duplicating the anatomy is daunting -- though probably unnecessary -- but not logically or metaphysically proscribed. In principle, then, an entirely physical system might be constructed to simulate and even duplicate the pattern of occurrences taking place within the brain. If this pattern can be said to produce psychological 'epiphenomena' in the latter context, there is no reason to deny the possibility in the former context.
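The Leibnizian principle invoked in the third argument can be given its standard formal statement. As a sketch in second-order notation: Leibniz's Law proper (the Indiscernibility of Identicals) runs from identity to shared properties, whereas the Identity of Indiscernibles, the direction on which Turing's test trades, runs the other way:

```latex
% Indiscernibility of Identicals (the uncontroversial direction):
x = y \;\rightarrow\; \forall F \,(F x \leftrightarrow F y)

% Identity of Indiscernibles (the direction Turing's test trades on):
\forall F \,(F x \leftrightarrow F y) \;\rightarrow\; x = y
```

Turing's proposal, in effect, restricts the range of the property variable F to publicly observable performances, and it is just this restriction that the sequel contests.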
There may be additional arguments favoring the overall AI-thesis, but these four seem to be at the bottom of the influential writings in this field and are clearly central to all of the 'pro-AI' essays in the present volume. There are, of course, any number of uselessly ad hominem flourishes that some AI-enthusiasts add to these basic arguments; e.g., that their opponents are infecting science with religion, or are defenders of a bygone age or are "greedy" to be more than machines or to live beyond the demise of their bodies. This sort of talk comes under the heading not of argument but of impertinence and warrants less a comment than a scolding. Science can survive its mistakes, but not its ideologues.

Ideology, of course, is a two-way street, at least in free societies, and the AI-enthusiasts are not alone therefore in the epithet business. The cause of Mind is not aided by dismissing skeptics as "Godless materialists" or as mere machines themselves. There is, alas, all the difference between what I have called "The Long Debate" -- which is nothing less than the history of ideas itself -- and that counterfeit version that plays to the gallery. Far too much of this now occupies us. Until the general readership displays its impatience, specialists in science and technology are likely to persist. The habits here are not, I think, endemic to the 'personalities' of scientists and technologists, but to the perspectives of those who are not really at home in the venerable, subtle and pluralistic world of deep reflection. Persons drawn from and steeped in the hard discipline of scientific and technical work expect real problems to have real answers. There is an impatience with ambiguity and a suspicion that dark and personal motives are behind it. Whence the name-calling. But I digress. Let me turn to the four main arguments favoring the strong AI-Thesis, the thesis to the effect that human mentation may be or has already been duplicated by machinery.

Although it is common for defenders of ontological monism to insist that every species of 'mentalism' is either defeated by or is incompatible with the principles of thermodynamics or the conservation laws, such arguments are at once wrong and question-begging. They are question-begging because they take for granted the ultimate authority of Physics in settling ontological questions and thus assume the truth of the very proposition being contested. It is, after all, the final dispositive power of physical science and physical phenomena that is denied by mentalists (dualists, idealists, et al.), and it is scarcely helpful to reject their assertions on the grounds that they thereby violate the laws of Physics!

But the gambit fails for still another reason. As it happens, correct formulations of the laws in question do not rule out garden-variety dualisms. Dualists take the position that mental events influence the activity of the brain. Being mental, these events are external to the brain -- which is a physical system -- and thus qualify as external forces in the sense required by the laws of conservation. The general form of these same laws does not specify just what type of force must be considered in relation to changes in, for example, the linear momentum of the system. Similarly, the First Law of Thermodynamics asserts only that any change in the total energy of a system (delta U), added to the work (L) done by the system during this interval, exactly equals the heat (Q) delivered to the system during this same interval. If the law is expressed as Delta U = Q - L, it is clear that the work (L) is not limited by the law as to its source. Might it be mental work? Why not? To the extent, then, that ontological monism seeks its vindication in modern Physics, its claims are without merit. Monistic materialism may be appealing to some on aesthetic grounds or even on metaphysical grounds, but it cannot be said to be required by the laws of science.
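The bookkeeping invoked here can be set out explicitly. In the sign convention used in the text (Q the heat delivered to the system, L the work done by the system), the First Law asserts only that the balance holds; it says nothing about the provenance of the work term:

```latex
% First Law of Thermodynamics, sign convention as in the text:
% \Delta U : change in the total energy of the system
% Q        : heat delivered to the system
% L        : work done by the system
\Delta U = Q - L
\qquad\Longleftrightarrow\qquad
\Delta U + L = Q
```

The point of the argument is that the law constrains the arithmetic of the exchange, not the ontology of whatever does the work.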

Nomological monism fares no better. The extent to which psychological processes and events are lawful is an empirical question not to be settled deductively by appeals to doubtful or question-begging premisses. Basic perceptual processes have long been tamed nomologically; consider only the reciprocity laws of energy-integration in vision or the psychophysical laws of sensory magnitudes. But the subject of Psychology is vast, and most of it continues to resist attempts at nomological reduction. Significant human events typically arise from complex patterns of motivation, rational planning, interpersonal influence and potent if uncertain contextual influences. Attempts to explain such events require examinations of the reasons behind the actions, not the causes operating on physical bodies. There are good reasons for believing that reasons are not just causes by another name. This, too, is a vast and vexing issue. I note here only that claims to the effect that scientific laws cover such events are hopeful and glib.

Turing's test is grounded in assumptions that either beg the mind/brain question or are entirely and defectively divorced from it. The question, "Does Smith have a mind?", calls for qualitatively different evidence depending on whether Smith or anyone other than Smith must reach a just verdict. Turing's test is at most a test of the evidence others might bring to bear on the question, but it has nothing to do with Smith's own position. If it is true, then it is only trivially true that a device whose performance is indistinguishable from a person's could not be distinguished from a person, at least on the task in question. Indeed, we have the hint of a tautologous truth in such examples. At still another level of analysis, Turing's test seems to record the truism, if only in other words, that a person's acquaintance with his own mental life is direct, but with the mental life of others by inference. In this regard, we apply Turing's test (at least implicitly) whenever we judge the verbal, cognitive or moral "outputs" of others as proceeding from the inaccessible reaches of their minds. But we do not analogize in the dark. Each person is directly aware of his own reasons and motives, and thus knows what he is imputing to others whose actions are similar to his own in similar circumstances. Note that such attributions are possible not because of something in the behavior of others, but because of the witness's direct and incorrigible knowledge of his own thoughts. Thus, it would only make sense to ascribe reasons and motives to an entity on the plausible assumption that the entity in question is sufficiently like us to warrant the ascription. It is not the performance of the entity -- which, we know, could come out of a robot -- but the (perhaps misjudged) nature of the entity that supports such attributions. Smith is observed holding his cheek and claiming to be in pain. Robot-Smith is observed doing the same and claiming the same.
Jones, the designer of Robot-Smith, has established the precise conditions under which Robot-Smith performs these actions. Jones, therefore, can determine whether Robot-Smith is functioning properly when it does perform these actions. Jones can decide, for example, that the programmed conditions do not obtain and that the robot is providing false information regarding its internal states. But the case of Smith is entirely different, for there is no one who can claim more valid information regarding Smith's toothache than can Smith himself.

To criticize this line of analysis on the grounds that it settles the matter by appeals to "private" or "introspective" evidence otherwise unavailable to others -- on the grounds, then, that it is "unscientific" -- fails as criticism. To require only public or "scientific" evidence in such instances is, alas, to beg the question and to decide in favor of Turing's test as the ultimate test of the strong AI-thesis. It is also to depreciate the nature of introspective evidence. Such evidence is not "private" in the sense of being secretive or chimerical, but in the sense of being privately owned. Everyone has his or her own sensations, ideas, motives. My toothache is mine in this respect, but not in that there cannot be other toothaches. Each percipient has the last word on what he or she is now perceiving, and no one can claim that another is not having such sensations. But Jones can claim that Robot-Smith is misreporting, is broken, etc. To argue otherwise is to give the robot an epistemologically authoritative standing in relation to its claims and this is not explicable in terms of its design and its principles of operation. But to deny such standing to Smith is absurd; and to deny it on the grounds of Smith's physical design and physical principles of operation is to beg the very question at issue.

Epiphenomenalism has had a spotty metaphysical past and survives now as a grudging dualism, no less dualistic for all of that. How anything genuinely mental might be "immanent" in the physical, or arise out of it, or somehow sit cloud-like above it, stands as a challenge to imagination, if not a threat to reason. But it is not a threat to mentalistic psychologies or to a dualistic ontology, for it grants existential status to mental phenomena, even if causal powers are vested in the brain. As I have already noted, at least a version of dualism is compatible with thermodynamics, but it is less clear that epiphenomenalism is. What would be required is physical work that yields mental effects in such a way as not to violate the relationships expressed by the First Law. It seems that epiphenomenalism calls for a mental species of Q in a way that two-way dualistic interactionism might not, even if the latter assumes a non-physical species of L. This is consequential to epiphenomenalism because it makes its ultimate appeal to the physical sciences, whereas mentalism does not. To confront a dualist with the laws of Physics is question-begging. To do likewise to the epiphenomenalist is not.

In any case, it remains unclear just how we are to respond to the claim that the psychological side of our nature is causally determined by the functions of the nervous system. This is not a radical materialism, for it is not a monistic materialism. Rather, it is a kind of determinism, the sort that John Stuart Mill might have dubbed, "Asiatic fatalism". If we are to assume that everything about human psychology is determined by the brain, then this will include our willingness to believe the thesis, not to mention the willingness of others to advance it. Moreover, it will leave in doubt the purpose behind those appeals to reason and to scientific evidence made by those who defend the thesis and who would prevail upon others to adopt this new and radical point of view. The effort becomes all the more dubious when it must be further granted that reason and evidence and science itself are also and only functions of certain nervous systems, and can therefore claim no validity or epistemological office higher than that enjoyed by any other function of the nervous system or, for that matter, the digestive system!

Together, these comments on the foundations of the strong AI-thesis tend toward the conclusion that we are not obliged scientifically or philosophically to accept this thesis. There is no accepted canon of science requiring a rejection of dualism, nor does a careful conceptual analysis compel adherence to nomological monism, Turing's test or epiphenomenalism. However, the suggestion lingers that there still might be grounds on which to impute psychological attributes to computers, even if no compelling reason exists for denying them to human beings. Might it not be argued that computing devices have as much claim to an "inner life" as does each individual person, thus deserving the same presumptive inferences we make when we regard our fellows as having minds?

It would seem that this question, too, rests on the same mistaken assumptions that pervade strong versions of the AI-thesis and the entire AI program, at least when the latter aspires to mimic or "create" the psychological life of human beings. The mistaken assumptions are often revealed innocently; e.g., in such expressions as "discovering the symbols in the brain", or "deciphering the brain's codes for qualia". What is assumed is that what we take to be the contents of consciousness -- the thoughts, percepts, sensations, etc. -- are the result of some sort of information-processing, initially neuroelectric but finally (through the operation of a set of algorithms) "realistic". On this account, the external world presents, say, a grove of oaks half concealing young sheep in a distant meadow. The patterns of radiating quanta strike the cornea, some of them getting to the retinal mosaic and there triggering the decomposition of photopigments. Soon there arises a complex stream of pulses in the fibers of the optic nerve; next, the major centers of the visual pathways are activated and then, through the operation of some sort of translational or algorithmic mechanism, a scene occurs where once there were only pulses, pauses and graded synaptic potentials.

This story, or kindred versions of it, can be found in nearly all of the polemical treatises devoted to a defense of the AI perspective. Critical appraisals have not been in short supply, some of them seeming to be nearly if not totally fatal in their effects on the plausibility and even possibility of this perspective being true. To wit:

  1. In John Searle's now famous "Chinese Room", the monolinguist dutifully arranges cards bearing Chinese characters, his compilations governed by various rules set down in English. The correct application of the rules leads to meaningful statements in Chinese, of which the compiler is totally ignorant. Thus does the computer yield meaningful outputs for our benefit, but with no cognitive or conscious element entering into the performance.
  2. Let us assume that the nervous system can have inputs only from itself and from sensory systems feeding it. In both cases the information supplied will be in a neuroelectric format. For any system or mechanism in the brain to "translate" such information into, for example, a scene -- or anything we perceive, sense or in any way consciously apprehend -- this construction would have to be in the system's own vocabulary. But, as noted, only neuroelectric terms are entered in this vocabulary. Thus, the system itself is monolingual. This is but a hi-tech way of repeating George Berkeley's old chestnut, according to which "... an idea can only be like another idea".
  3. Gödel's incompleteness theorem proves that any consistent formal system rich enough to contain number theory will include at least one true formula that cannot be proved from its own axioms (e.g., Peano's axioms for arithmetic). Computers are formal systems and thus suffer this fate. We, however, do not, and we, therefore, are formally distinguishable from all possible computers. (See Professor Lucas's essay in the present volume.)
The net effect of these formal arguments and many others derived from them is that Artificial Intelligence is destined to remain artificial because it must, and not merely because of technical limitations.
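The Gödelian point in item 3 can be stated a little more carefully (a sketch; the consistency and effective-axiomatizability hypotheses are essential, and are precisely what Lucas-style arguments must also carry):

```latex
% Goedel's first incompleteness theorem (schematic statement):
% For any consistent, effectively axiomatized theory T that
% interprets Peano Arithmetic, there is a sentence G_T with
T \nvdash G_T
\qquad\text{and}\qquad
T \nvdash \lnot G_T
```

That is, the Gödel sentence G_T is undecidable within T, though (given T's consistency) it is true; the anti-mechanist inference is that we can see its truth while no machine instantiating T can prove it.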

With this in the background, we can test the contributions to this volume for the light and clarity they bring to bear on the broad questions concerning artificial intelligence vis-à-vis the human mind. The contributors are all well known and established in their special disciplines and certainly do not need me to explain what they have said in their essays and commentaries. Furthermore, as they have spared each other little in their criticisms, I would not serve the interests of the readers by summarizing the thrusts and parries or by scoring them. I shall content myself with offering a few vagrant remarks on those essays that really do seem to have something new to say and that elicited what I would regard as insufficient replies; and still other remarks on essays that strike me as perpetuating certain confusions.

In the recent writings of Professor Margenau and in the essay reprinted here by my respected friend, Sir John Eccles, principles of modern Physics have been imaginatively applied to the Mind/Brain problem. They have raised the prospect of identifying the physical analogue of mental spontaneity and freedom with the inherently probabilistic nature of quantum fields. Sir John more specifically searches for these relations at the level of synaptic function. Any number of implications can be drawn from such hypotheses, but one that would seem to be both inescapable and defective is that every mental production, when finally analyzed in detail, must therefore bear the stamp of those quantum probability functions amassed within the synaptic fields. Yet, perhaps the most defining feature of human mentation is the capacity to arrive at necessary and certain conclusions, such as those governed by formal logic and by other deductive sciences. The Kantian "apodictic" cannot be generated or analogized by probability functions. There would seem, then, to be something of a modal mismatch between the microstructure of indeterminacy and the logic of necessities.

Professors Lewis and Flew consider versions of the AI-thesis philosophically, the former adopting something of a common sense psychological realism. Both find usual versions of the thesis (such as Professor Minsky's) uncompelling, though Anthony Flew worries that Hywel Lewis's criticisms leave the door open to a species of dread-Cartesianism. I should say that this ism does not fare well even among those we might expect to be its friends. There is a strong tendency in a number of the essays in this volume to treat this venerable version of dualism as something merely historical, to be dredged up only for the purpose of applauding our own progress and the soundness of Ryle-type refutations.

Descartes was a subtle and often elusive thinker; the father of analytic geometry, a master of optics, an expert in the physiological sciences of his time. The corpus of his philosophical and scientific work fails entirely to support characterizations commonly offered in secondary sources, abridged anthologies or the exegeses provided by devout friends and enemies of dualism. He did not, for example, subscribe to a theory of "innate ideas" (of the type routinely attributed to him), and he went so far as to deny the charge in print. He did not exclude psychological functions from the causal influences of the body, nor did he defend the one species of dualism that may be said to have been defeated by Professor Ryle in The Concept of Mind. If his version of dualism is to be fairly represented, one must begin with what Descartes would take to be the ultimate mark of the mental. His clearest writing on this specific point is to be found within the context of his own critique of "artificial intelligence"; i.e., where Descartes seeks the basis on which we might correctly distinguish between a human being and a nearly perfect physical simulation of the same. This is all discussed in his posthumously published Treatise of Man (1662) which offers the example of a "machineman" having a body like a statue and equipped internally with various hydraulic and mechanical devices for recording and acting upon external stimuli. He concludes this part of the discussion thus:

"Wherefor it is not necessary...to conceive of any vegetative or sensitive soul or any other principle of movement and life than its blood and spirits, agitated by...those fires that occur in inanimate bodies".[1]
What is exempted from this simulation is the capacity of abstract rationality. The device might have all of the sensitive and locomotor abilities of human beings, and even behave as if in response to internal conditions kindred to our passions. But it would never engage creatively in language, or traffic in mathematical abstractions or, for that matter, attain the idea of God!

Without going into the details of Descartes's analyses of such possibilities and limitations -- analyses distributed across the full range of his writings -- it is possible nonetheless to reconstruct his arguments rather faithfully:

  1. Any device that is itself physical requires physical modes of activation. Thus, only physical stimuli can be effective stimuli.
  2. The device's internal representations must also be physical and must, therefore, be confined to what can be coded (without loss of fidelity) physically.
  3. Abstract cogitations are about what is non-physical in principle; e.g., universals, deductive certainties, axiomatic sciences, matters of divinity and sublimity. Such entities are not physical and have no physical analogues in nature. Thus they cannot enter the device and cannot be physically implanted within it.
  4. That aspect of our own nature that is merely physical is similarly constrained and, therefore, cannot account for our own abstract rationality which must, instead, be understood as endemic to mind as such, and must be immaterial.
None of this is at all demoniacal and so Professor Ryle's exorcism was successful only because he eliminated all traces of a demon that was never there in the first instance. I do not presume to speak for Descartes, of course, but it is obvious from his writings that he would find nothing remarkable in the predictions or promises contained in Hans Moravec's essay in this volume, but would note that something has been overlooked. All of the genuinely mental functions (the abstract rational functions) Dr. Moravec's devices engage in are entirely supplied and are only "rational" from the perspective of a rational being. This is fully understood by John Beloff, fully misunderstood by Marvin Minsky and fully unnoticed by the more ardent defenders of modest mentalism and what might be called "hardware store" materialism. As noted earlier, an authentic "Cartesian" mind is as aloof to quantum effects at the synapse as to Vitamin B6. Such a mind knows the difference between what it uncovers as it reflects on its own nature and what might be uncovered among the toys in the attic, including the ones with tape-recorded messages inside.

Before concluding, I should say something about reductionism which is hinted at in most of the essays and discussed explicitly in some of them. I have treated this at length elsewhere, most recently in my Philosophy of Psychology (Columbia University Press, 1985), so I shall not say too much here. It is important to recognize, however, that most alleged "reductions", not to mention those promised by patrons of the so-called brain sciences, do not achieve anything properly called a reduction. To establish, for example, that every mental event is reliably preceded by an event in the brain is to leave the size and the population of the ontological domain exactly the same. The universe still has the same two kinds of "stuff", and minds remain secure in their (spaceless) locations. If, in fact, the relationship turned out to be lawful, so that every mental event was functionally tied to specific events in the brain, it might be permissible to speak of a nomological reduction; i.e., the reduction of once very complex explanations to the more economical language of science. In such a case, our explanations would have fewer terms, but the mental events would be no less mental for all of that. Similarly, were the strong AI-thesis somehow realized in fact (ignoring for the nonce that it may be logically impossible), nothing of a reductive nature would thereby be achieved vis-à-vis the human mind. From the fact that something non-human does what I do, it does not follow either (a) that I am any less human or (b) that it is any more human or (c) that our respective achievements are explicable in the same terms or arise from the same principles. To argue or even to hint otherwise is to display a very great confusion. Moreover, the fact that something non-human not only does what I do, but does so in the same way and owing to precisely the same principles, also may fail to be reductive.
It may, instead, only increase the total number of entities with a mental life that is inexplicable in physical terms.

This is all by now part of a very old argument. It has little of the freshness that surrounded disputes between Descartes and Gassendi, or even T. H. Huxley and his various interlocutors. One can say, with an impatience that does not rule out respect, that our contemporary teachers are beginning to repeat themselves far beyond the point required by our imperfect understanding. There will, no doubt, be many more conferences devoted to artificial intelligence and the human mind. My wager, however, is that the proceedings will offer nothing of consequence beyond what can be found in the essays in the present volume, or beyond what had already been concluded by the better minds of an earlier century.

Notes

  1. René Descartes, Treatise of Man, French text with translation and commentary by Thomas Steele Hall. Cambridge: Harvard University Press, 1972.