MINDS OR MACHINES

John Beloff

Abstract:

In this paper we take a look at the "strong claim" of the artificial intelligence movement, which is here taken to mean that there is no essential distinction to be drawn between a living mind and some possible mind-like machine, with the corollary that there is no upper limit to the intellectual achievements of which a machine might be capable. This claim is then shown to rest on a particular theory of mind that has come to be known as Functionalism. This theory is discussed but found to be based on a fallacy. The fallacy in question consists of ignoring the fact that the mind-brain problem is, primarily, a problem about the ontological status of minds and not, in the first instance, about their functional properties. The conclusion is drawn that the essential distinction between minds and machines remains and cannot be eliminated; hence the "strong claim" must be dismissed, as must its corollary. How far automation can take us in practice is a matter that can only be settled empirically. The paper concludes by suggesting that the main contribution which A.I. can make to our understanding of the mind is the negative one of demonstrating what mind is not. In other words, its value lies in the clearer appreciation it affords of the difference between the mechanical aspects of mental activity and the intrinsic properties and powers of the mind as such.
The attempt to understand the workings of the mind using mechanical models and analogies is as old as psychology itself. An entirely new chapter began, however, around the middle of this century with the advent of the computer. For the first time we now had to reckon with the fact that machines could perform tasks which had hitherto depended on human ingenuity. From then on the temptation to take the computer as the model for our own cognitive processes became ever harder to resist. Yet the time has come to ask ourselves whether this computational metaphor, as it has been called, might not be more misleading than illuminating. Much will depend on whether we believe that a living mind has properties that distinguish it from any conceivable artifact that might be designed to simulate its output. What in this paper I shall refer to as the "strong claim" is the claim put forward by some proponents of artificial intelligence that nothing essential differentiates the human mind from a possible mind-like machine, so that a perfect simulation of human mental activity using the hardware of a computer would in fact amount to an exemplification of such activity. This strong claim carries with it the corollary that there is no theoretical limit to what a machine could be expected to do.

It is clear that the claim we are to consider is a bold one, and yet it has already received widespread endorsement from eminent contemporary philosophers and psychologists. Nor should this surprise us. Behavioristic and physicalistic theories of mind have been a recurrent feature of twentieth-century philosophy and psychology, at any rate in the English-speaking world. Such theories purport to cut the ground from under the obvious objection which the claim might otherwise encounter, namely that while minds are conscious, machines are not. For, if mental processes are nothing over and above the workings of the brain, conceived as a self-regulating physical system, there seems to be no reason, other than practical limitations in our engineering techniques, why mental processes should not be exemplified in an artifact. On the traditional dualistic account, of course, mental events were taken to indicate the interaction between an immaterial mind and a material brain and, so long as this view prevailed, the scope of A.I. was clearly restricted. Thus, the strong claim is closely bound up with the perennial mind-body problem, and our assessment of it will depend on the position we take on that issue.

The current theory of mind that is most congenial to the strong claim is that which goes by the name of Functionalism. It has been expounded and defended by various philosophers and psychologists, among whom one could include Dennett, Fodor and Boden, to mention only some of the best-known names of those not present on this occasion. Briefly, it is the contention of a Functionalist that a mental event is to be understood in terms of the function that it performs with reference to a given system or organism. For example, for a Functionalist, a given sensation is not something that is to be defined ostensively in terms of its peculiar quality but rather by reference to the sort of behavioral discriminations that it makes possible. Thus a pain should not, in the first instance, be thought of as some special kind of private event but as that which brings about avoidance or escape reactions. Likewise, thinking should not be regarded as a process going on in some private arena to which the thinker alone has access but as the process that brings about problem-solving or other such objective achievements and, in the same vein, a Functionalist analysis can be provided for desiring, intending, believing, expecting and so on, across the board, for each instance of what traditionally comes under the heading of the mental. The novelty of the Functionalist position is that it is neutral with respect to the composition of the system with whose operations one is concerned. That the system in question should be made of neurons rather than wires and transistors, that it should be a living organism rather than an artifact, is, for Functionalism, a matter of indifference, since the concept of the mental is to be defined in terms of function, not in terms of the nature of the system in which it originates. In this respect it differs from earlier physicalistic theories of mind. Thus, according to "Central State Materialism," mental events had to be identified with events in the brain or central nervous system. This implied that brain tissues had some special property that could not necessarily be attributed to inanimate systems. It follows that Functionalism is easier to reconcile with our "strong claim" than any version of mind-brain identity theory.

The crucial question we have next to ask is whether Functionalism is, in fact, a tenable position with respect to the mind-body problem. Given its illustrious credentials one hesitates to say no. Nevertheless this is the answer that I propose to give and which I shall endeavour to justify. It transpires that Functionalism falls at the first hurdle inasmuch as it disregards certain facts that are universally acknowledged, at least by everyone who does not happen to be committed to some bizarre philosophical thesis. Consider those objects of introspection which philosophers sometimes refer to as "qualia" or which, in ordinary parlance, we might call conscious sensations. The fact is that these entities can be recognized and described by those who experience them independently of the function, if any, that they may perform in our behavior or mental life. Indeed, in certain types of passive experience, say in contemplation, it is arguable whether they have any observable consequences, but they are none the less real and definite for all that. In short, Functionalism cannot be the whole truth about the nature of mental events. Nevertheless, the Functionalist might well plead that the sacrifice of the qualia is a small price to pay for the advantages which his philosophy offers in the wider context of the continuing debate about the compatibility of artificial and natural intelligence. So, rather than digging in our heels over the issue of qualia, let us move on to something much closer to the heart of this controversy, namely the nature of thinking itself.

Such plausibility as attaches to the strong claim derives from the fact that computers solve complex problems often more successfully, and always more rapidly, than the unaided human intellect. This is a remarkable fact which familiarity tends to make us take too much for granted. There was a time when the very idea that a machine might be able to compete on the intellectual plane with a rational human being would have been regarded as too fantastic or absurd to merit serious consideration. During the early 19th century a certain showman toured Europe and the United States with what was billed as a chess-playing automaton. People would be invited to challenge the "Turk", as the robot figure was nicknamed, and usually they were beaten. Naturally the shrewder onlookers suspected that a human operator was concealed somewhere inside the contraption, but the concealment was done with such cunning that the audience would go away thoroughly bemused. The fame of this invention gave rise to a fair amount of controversy, which found its way into print, over the question of whether, in principle, it was possible to build a machine that was capable of thinking and reasoning. One of those who was fascinated by his encounter with the "Turk" was the American writer Edgar Allan Poe.[1]

Nowadays, of course, we would not be very surprised if we were to read in our newspaper one day that the new world chess champion was not Kasparov, nor yet Karpov, but some chess-playing program with a fancy name conceived in some laboratory of A.I. For the fact is that we have learned to accept the principle that any rule-governed activity that can be precisely formalized can be simulated with a computer program. And even though in chess, thanks to the vast number of possible combinations, one cannot compute an infallible minimax strategy as one can for simpler board games, the use of appropriate heuristics has made possible increasingly powerful chess-playing programs. If, therefore, what we mean by thinking is finding solutions to problems or answers to questions, then we already have all the evidence we need to say that machines can think. But is that all there is to thinking? No one would be tempted to say of a chess manual, for example, that it thinks, or of a textbook that gives the answers to the exercises it sets. Yet, when all is said and done, a computer program is no more than a glorified book, even if the manner in which we interact with the computer is more reminiscent of the way we might interrogate a fellow human. But, if there is more to thinking than just information-processing in the widest sense, what is that something extra?
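The contrast between an infallible minimax strategy and a merely heuristic one can be made concrete. The following is a minimal sketch in Python; the toy game-tree representation is my own illustration, not the data structure of any actual chess program. For a game small enough to search exhaustively, the leaf values are true outcomes and the resulting strategy is infallible; for chess, the search must be cut off at some depth and the leaf values supplied instead by a heuristic evaluation of the position.

```python
# A minimal, self-contained sketch of minimax over an abstract game tree.
# An internal node is a list of child nodes, with the player to move
# alternating by level; a leaf is a number giving the value of a position
# (a true outcome for small games, a heuristic estimate for chess).

def minimax(node, maximizing=True):
    """Return the minimax value of a game-tree node."""
    if isinstance(node, (int, float)):  # leaf: evaluation of a position
        return node
    values = (minimax(child, not maximizing) for child in node)
    return max(values) if maximizing else min(values)

# Usage: a two-ply toy tree. The maximizer picks the branch whose
# worst-case reply (the minimizer's choice) is best: the first, value 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3
```

Nothing in this procedure, at whatever scale it is carried out, answers the question with which the preceding paragraph ends.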

The answer, I submit, is plain enough when we turn to our own experience of being engaged in thinking. The least that we would require before we were ready to say of something that it was thinking is that it should be aware of what it is doing, that it should know what it was thinking about, that it should recognize when it has reached a conclusion and that, in general, it should have insight into what is going on. So the question is, can a machine have experiences of the sort which this implies? Admittedly one cannot prove that this cannot be the case but, at the same time, there is not the slightest reason for supposing that it is. Since the information-processing that is performed by a machine is wholly explicable in physicalistic terms, it would be entirely gratuitous and fanciful to credit it with the consciousness that this would imply. One could argue that, if we understood exactly how our own brains were constructed and knew the complete sensory input to which we have been exposed, then we could explain our own thinking along physicalistic lines without needing to invoke conscious awareness. Be that as it may. The point, however, is that, in the human case, we do have one instance where it is no longer a question of whether thinking is reflected in consciousness, because we know, in a way that leaves no opening to doubt, that this is so, namely when we ourselves are the thinker. For it makes no sense to doubt whether we are conscious, since doubting, as Descartes would have said, implies that one is conscious. It is true we cannot prove that other human beings are conscious when they think; logically speaking, they might be insentient automata. That is the classic problem of "other minds", the inescapable possibility that the solipsistic nightmare might be true. But neither is there the slightest reason to suppose that we might be unique in this very fundamental way when, in most superficial respects, we are so similar to our fellow beings. In the case of the artifact, on the contrary, there is no reason to think, and every reason to doubt, that consciousness supervenes when information-processing is going on. Thus the situation as between minds and machines is in this respect by no means symmetrical. Defenders of the strong claim have tried to restore a degree of symmetry by proposing a law of emergence such that, once a given system attains a certain level of complexity and sophistication, it is deemed to be conscious. But it is difficult to see what evidence could be adduced to support such a law when the only known instance where consciousness supervenes is the human one. Moreover, a study of brain activity suggests that there can be no simple relationship between brain processes and conscious experience, since an important segment of our thinking, including perhaps the critical stages of problem solving, proceeds at an unconscious level.

One of the current clichés about mind and brain is that the brain corresponds to the hardware of a computer while the mind corresponds to its software, that is, to the way in which it is programmed. It is an analogy that is very much in accord with the Functionalist standpoint. A closer examination of the analogy, however, reveals certain confusions. We may compare the description of a computer as an information-processing device with the description of a chair as an item of furniture. The designation in both cases is appropriate and unexceptionable, but we must not forget that it specifies the use to which the object is put. Hence, if we were to banish from the scene all potential users, there could be neither information-processors nor items of furniture although, of course, computers and chairs would continue to exist as physical objects. Indeed one could imagine a computer, after the ultimate holocaust, continuing to print out the information it was programmed to supply for as long as the electricity supply held out. And yet all that would be happening in this instance, from an objective or ontological standpoint, is that electrical impulses would be passing through its circuitry. The point of this illustration is to bring out the distinction between the functional account of some entity (how it is used, what it is designed to do, etc.) and an ontological account (what it consists of, what goes on inside it, etc.). The basic deficiency of the Functionalist account of mind, which allows for the comparison of mind with a computer program, is that a program is a program only so long as there is a potential user; a mind, on the other hand, exists in its own right: my subjective experiences exist for me whether or not they have any implications for anyone else. We may conclude that Functionalism can at best account for the functional aspect of mental life. It can provide no warrant for the strong claim that there is no essential difference between a machine and a mind.

But where does that leave what we called the corollary of the strong claim, to the effect that, in terms of achievement at any rate, nothing is in principle beyond the capacity of an artificial intelligence? Having reconciled ourselves to the advent of an artificial world chess champion, have we any reason to doubt the future advent of artificial geniuses of all kinds, or of the ultra-intelligent robot such as the science fiction writers have taught us to envisage? Stated thus, I do not think there can be a definitive answer to this question. For the question here is no longer whether such machines will resemble minds but rather whether they will be able to do everything, and more, that the human mind can do. But what a priori principle could we invoke to set an upper limit to the advances of technology in any direction, be it in transport, in manufacture or in information technology? Whether one is optimistic or skeptical about technological progress will depend on which of the various relevant factors one takes into account. An optimist about progress in the field of A.I. will no doubt take heart from the rapid advances that have already been achieved. A skeptic, on the other hand, is likely to dwell on the unevenness of the progress that has so far been made. He may point out that A.I. has scored its most spectacular successes in just such tasks as chess-playing, which lend themselves most effectively to formal analysis. Where it has been less successful is in connection with those skills that are so central in human life: the use of natural language, pattern recognition, the exercise of creative imagination and so on. What lesson can we draw from this comparison?

There is a question which at first sounds odd because we so rarely perceive it as a problem. It is the stark question as to why we should have minds at all. If thinking is an activity of the brain alone, why should we not have evolved with brains such as we now possess but without minds? We might even speculate that we would have been even more efficient from the biological point of view if our brain processes were free from any risk of interference from consciousness. However, we know that somehow, and at some point in time, consciousness emerged, and it is a reasonable presumption that it did so because a conscious organism was more effective than one devoid of consciousness. In that case it is meaningful to inquire what specifically consciousness might contribute to thinking which brain activity alone could not have done. It is a question which William James asked himself, in opposition to the scientific materialists of his day who took an epiphenomenal view of the nature of consciousness, and he speculated about a possible answer.[2] It now seems more plausible than ever that mind is responsible for just those aspects of thinking that are lacking in computer simulations: notably the intuitive insights on which we constantly rely but which cannot be reduced to any set of explicit rules, and the voluntary aspects of thinking, such as attending to the task in hand and striving to attain its fulfillment.

Thus we come back again to the mind-body problem. We have already noted that any solution that rests upon a denial of consciousness is a non-starter. This leaves us with two viable contenders. Either consciousness is an entirely superfluous feature of the world, which might just as well have run its course as a closed physical system without ever becoming the object of awareness, since there would then have been no conscious observers, only anthropoid robots going through the motions of observing; or, alternatively, consciousness could be taken to represent the incidence of mind when it intervenes in the physical world at the juncture that we call the brain. This is the dualist interactionist view of the mind-brain relationship and, if I am right in believing that it is the correct alternative, then it follows that an artificial brain necessarily lacks one vital ingredient of a natural brain, namely its link with the non-physical mind or psyche that activates or animates it. How far this imposes a brake on the potential achievements of artificial brains may be a matter for trial and error, but it does weaken the assumptions on which the strong claim is based.

There is a further consideration that leads one to doubt the strong claim. There exists a body of evidence that suggests that the mind has certain transcendental powers that cannot be explained in physicalistic terms and are independent of the constraints of time and space. I allude here to what parapsychologists have called "psi phenomena", those transactions between the individual and the external world that do not appear to be mediated by any of the known sensorimotor channels. Of course there is at present no obligation to accept such evidence at face value. The phenomena in question are so unstable and so marginal that, even after a hundred years of psychical research, no conclusive demonstration or unequivocal experiment can be cited to prove that they exist. Nevertheless, my impression, as one who has made a special study of the field, is that it would be unwise and short-sighted to ignore such evidence as of no account. At the very least there remains a real possibility that the evidence may be valid and it is, after all, the only empirical evidence which, if valid, would decisively sway opinion in favor of the dualist position. And, if we do take such evidence into account then it represents another barrier to the pretensions of the A.I. enthusiast as indeed Turing himself recognized when, in his 1950 paper, he introduced the concept of an imitation game.[3]

In my talk I have dwelt on the limitations of A.I. because it was this negative aspect that struck me as being philosophically the most interesting. I hope that, in doing so, I have not given the impression that I do not care about or value the positive achievements of this youthful science. I would like to add, therefore, that psychology is always liable to stagnate unless it can draw sustenance from developments in allied sciences and, in my lifetime, no other developments have made a profounder impact on psychology, more especially cognitive psychology, than those of A.I. What may eventually come of this we can only surmise but it can hardly fail to be of importance. It is my belief that the most important lesson we are likely to learn from A.I. is precisely what mind is not. It should enable us to see more clearly the distinction between the purely mechanical aspects of thinking which are presumably mediated for us by the brain, and what is intrinsically and irreducibly mental. Perhaps we shall even be able to lay down a general principle to the effect that whatever can be fully automated in a machine does not pertain to the mind.

  1. This episode is discussed at length by David Fryer in his Wheels within Wheels (unpublished Ph.D. thesis, Edinburgh, 1978); see Chap. 6, "Opinions on Pinions: An Eighteenth-Century Turing Game". The inventor of the machine was von Kempelen, a brilliant mechanician and councillor at the court of Empress Maria Theresa of Austria, where the machine was first demonstrated in 1769. It was later taken on tour by the showman Maelzel. Poe published his reflections on it in the Southern Literary Messenger for April 1836. Maelzel had a chess-player among his entourage, which gave rise to speculations as to who might have been the "ghost" in his machine.
  2. The school of psychology associated with James came to be known as the functionalists on account of their insistence on the efficacy of mind as a force in nature. However, unlike the present-day Functionalists, who are closer to the behaviorists, the original functionalists were dualists like James.
  3. See A. M. Turing, "Computing Machinery and Intelligence", Mind 59 (1950). This classic paper has been reprinted many times in various compilations, most recently by D. R. Hofstadter and D. C. Dennett (Eds.) in their The Mind's I: Fantasies and Reflections on Self and Soul, with a commentary by the editors.