J. R. Lucas
Fellow of Merton College, Oxford

In 1959 I read a paper to the Oxford Philosophical Society entitled "Minds, Machines and Godel". It represented the culmination of a long search. While I was at school, I had heard an essay by a contemporary of mine which had put forward a position of extreme materialism. I felt sure it was wrong, and I argued against him that the very fact that he put forward his position as having been reached rationally, and commended it to us as a rational one to adopt, belied his claim that he and we were mere collocations of atoms whose behavior was entirely determined by physical laws. He was not impressed. Nor was I too sure of my ground. It was a slippery topic, in which it was very difficult to say exactly what was being talked about; and every counter-example to his thesis that I could think up he could account for as the effect of some disturbance on counter-suggestible human subjects. But the argument I put forward then did not leave me, and in my gradual evolution from a schoolboy chemist, through an undergraduate reading first mathematics, and subsequently Literae Humaniores, at Oxford, to a Junior Research Fellow in Philosophy, I kept on trying to reformulate it in a satisfactory fashion. A number of comparable arguments occurred to me, for example that the Verification Principle, being itself neither a tautology nor verified by empirical observation, would be, if true, meaningless, and must therefore be rejected. My tutors were not impressed, and talked darkly about the impropriety of self-reference, and insisted on distinguishing the meta- from the object-language. I countered by asking in which language this distinction was formulated, and they went up to the meta-meta-language, and however far I chased them they were always one meta ahead of me. They got bored sooner than I, and laid down a general rule that all statements about languages had to be in a higher-level language. I thereupon asked in what level of language that rule was formulated.
I got a very bad report.

Many years earlier, in 1931, Godel had found a way round this problem. He devised a scheme for coding logical and mathematical formulae into numbers, and relations between formulae into arithmetical relations between, or functions of, numbers. He was thus able to circumvent the ban on self-reference, and find an arithmetical formula which ascribed a certain arithmetical property to a certain number, which turned out to be the coded expression of that self-same formula's being unprovable from Peano's axioms for arithmetic, or Elementary Number Theory as it is called. In this way he was able to construct a formula which, in effect, says of itself that it is unprovable from Peano's axioms: but in that case it must be true, for if it were not, it would not be unprovable, and so would be both provable and false. Granted that no false formulae can be proved in Elementary Number Theory, it follows that the Godelian formula is both true and unprovable from Peano's axioms. I thought I could apply this to the mechanist hypothesis that the human mind was, or could at least be represented by, a Turing machine. If that were so, I argued, it would be comparable to a formal system, and its output comparable to the theorems, that is to say the provable formulae, of a formal system. And since we evidently are able to do elementary arithmetic, the formal system must include Elementary Number Theory, in which case there would be a Godelian formula which could not be proved in the formal system, but which was none the less true, and could be seen to be true by a competent mathematician who understood Godel's proof. Hence no representation of his mind by a Turing machine could be correct, since for any such representation there would be a Godelian formula which the Turing machine could not prove, and so could not produce as true, but which the mathematician could both see, and show, to be true.
In this way a sufficiently competent mathematician could refute the claim that he was represented by some particular Turing machine and since this way was available to him whatever Turing machine was claimed to represent him, he could be confident of not being adequately represented by a Turing machine, and mechanism -- the thesis that the mind could be represented by a Turing machine -- was false as far as he was concerned, and therefore false generally.
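The coding device at the heart of Godel's construction can be illustrated in miniature. In the sketch below - a toy scheme of my own for illustration, not Godel's actual 1931 assignment - each symbol of a small made-up alphabet gets a code number, and a formula is coded by raising the successive primes to the corresponding exponents, so that unique factorization lets the formula be recovered from its number:

```python
# Toy Godel numbering via prime-power coding. The alphabet and the
# assignment of code numbers are illustrative choices, not Godel's own.

SYMBOLS = "0S=+*()~&>Axy"  # symbol number k is coded by the exponent k + 1

def primes():
    """Yield 2, 3, 5, ... by trial division (slow but simple)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula: str) -> int:
    """Code the i-th symbol of the formula as an exponent on the i-th prime."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** (SYMBOLS.index(ch) + 1)
    return g

def decode(g: int) -> str:
    """Recover the formula by reading off the prime exponents in order."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return "".join(out)

print(godel_number("0=0"))          # 2**1 * 3**3 * 5**1 = 270
print(decode(godel_number("0=0")))  # 0=0
```

The point of the device is that statements about formulae - such as "this formula is unprovable" - become statements about numbers, which arithmetic itself can express.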

Mathematical logic was not much done in Oxford after the war, and although I had heard of Godel's theorem as an undergraduate, it was difficult to get to grips with it. I was very glad to have the opportunity of going to Princeton in 1957, thanks to the generosity of the Jane Eliza Procter fellowship, and attending courses on mathematical logic by Alonzo Church, and trying out my half-formed thoughts on a number of faculty members and graduates, most notably on Hilary Putnam who was then in his materialist phase. He was not persuaded. "If I thought there was anything at all in your arguments, I should have to be not only a theist, but an Episcopalian to boot" he said, after one interchange, reckoning that since Episcopalianism was, in his book, that than which nothing could be worse, this was an effective reductio ad absurdum. His objections were forceful[1] and it took me some time to think through them and meet them. I tried the argument out on my Cambridge colleagues, trying also, but unsuccessfully, to write an exposition of Godel's theorem that should be intelligible to non-mathematicians. I finally read the paper to a skeptical and puzzled audience in Oxford in October 1959 and published it in Philosophy in 1961. Even as I wrote it up for publication, I was thinking of further counters to Putnam's objections, and also of setting it in the general context of the debate on free will. Unfortunately, I was elected to a tutorial fellowship at Merton, my old college in Oxford, and was submerged in the pressures of Oxford tutorial life, so that my Freedom of the Will was not published until 1970. It is not very different from the original article, but does meet some of the criticisms first leveled against it, and sets the Godelian argument in a more general context. Perhaps two points I make in the book are worth reiterating. Professor Minsky spent some time yesterday criticizing Professor Margenau saying that he had misunderstood the import of quantum mechanics. 
I think actually the misunderstanding was on Professor Minsky's part, and that he had failed to understand the structure of Professor Margenau's argument. Quantum mechanics is relevant to the problem of the will because it has replaced classical Newtonian physics, which seemed to rule free will out. It does not of itself prove free will - lots of quantum mechanical systems have no free will - but it disproves a disproof of it. In my book I referred to speculations on whether quantum mechanics might be replaced by a more determinist theory - a "Hidden Variable Theory" as it is called - and to Von Neumann's argument against it. Since then the argument has progressed much further. Von Neumann's proof has been probed and criticized, and a whole series of results have been obtained - Bell's inequalities, Gleason's Theorem, the Kochen-Specker Theorem, the two-color theorem: and four years ago Aspect concluded some experiments in Paris, which seemed to rule out any prospect of a hidden variable theory. So it seems to me that Professor Margenau was quite right to see quantum mechanics as bearing on the free-will problem, not as proving that free will exists but as, in Plantinga's terminology, defeating a defeater.

The concept of randomness was also mentioned yesterday, and can give rise to confusion. To be random is to be inexplicable, and if there are many different sorts of explanation, there are different sorts of randomness. Whenever the word 'random' is used, it is always worth asking what it is being contrasted with, what it is not. Yesterday there was discussion of two choices, one to pick up a bit of chalk, the other to buy one house rather than another. Clearly the latter choice is not 'random' in the sense of there being no reasons for the choice ultimately made, whereas the former might be. But whether or not there are reasons for making a choice is a completely different question from whether there was some antecedent sufficient causal condition. It would be perfectly possible for there to be no causal explanation in either case, although in the one case there was an explanation in terms of reasons for and in the other case not.

The arguments I put forward in "Minds, Machines and Godel" and then in Freedom of the Will have been much attacked. Although I put them forward with what I hope was becoming modesty and a certain degree of tentativeness, many of the replies have been lacking in either courtesy or caution. I must have touched a raw nerve. That, of course, does not prove that I was right. Indeed, I would at once concede that I am very likely not to be entirely right, and that others will be able to articulate the arguments more clearly, and thus more cogently, than I did. But I am increasingly persuaded that I was not entirely wrong, by reason of the very wide disagreement among my critics about where exactly my arguments fail. Each picks on a different point, allowing that the points objected to by other critics are in fact all right, but hoping that his one point will prove fatal. None has, so far as I can see. I used to try and answer each point fairly and fully, but the flesh has grown weak. Often I was simply pointing out that the critic was not criticizing any argument I had put forward, but one which he would have liked me to put forward even though I had been at pains to discount it. In recent years I have been less zealous to defend myself, and often miss articles altogether. There may be some new decisive objection I have altogether overlooked. But the objections I have come across so far seem far from decisive.

Many philosophers have objected to the Godelian argument not so much because it is invalid as because it is not needed. Godel himself rejects mechanism, but on other grounds - our ability to think up fresh definitions for transfinite ordinals and ever stronger axioms for set theory - rather than on the Godelian argument, and Wang is inclined to do so too.[2] And I fully concede that there are many other arguments against mechanism. The virtue of the Godelian argument, I claim, is that it concentrates the critique of mechanism into a form that is peculiarly effective against the mechanist; non-mechanists may find other considerations more cogent. Nevertheless, if the Godelian argument succeeds in bringing out the bearing of such premises on the question of mechanism, it is serving a useful purpose.

The idealized machines - Turing machines or something of that sort - are idealized from the point of view of reductionism, not technological research, and my argument is directed against reductionism, not against artificial intelligence being evolved whose behavior we cannot in principle predict or explain in detail. Whether that can be done is a good question that will be discussed on other occasions - I was tempted yesterday to ask whether a prosecution for cruelty could be launched against those who gave the precipice-avoiding machine the fright of its life by putting it near the stair well. But that is not the question I am concerned with. I am concerned with the reductionist thesis that we could in principle give a mechanist deterministic account of human behavior which was complete and left no room for free will, moral responsibility or individual creativity. It is that thesis that the Godelian argument is intended to refute. If other arguments do too, and non-mechanists find them more convincing, I am perfectly content. Transfinite arithmetic also underlies the objections of Good and Hofstadter. The problem arises from the way the contest between the mind and the machine is set up. The object of the contest is not to prove the mind better than the machine, but only different from it, and this is done by the mind's Godelizing the machine. It is very natural for the mechanist to respond by including the Godelian sentence in the machine, but of course that makes it a different machine, with a different Godelian sentence all of its own, which it cannot produce as true but the mind can. So then the mechanist tries adding a Godelizing operator, which gives, in effect, a whole denumerable infinity of Godelian sentences. But this, too, can be trumped by the mind, who produces the Godelian sentence of the new machine incorporating the Godelizing operator, and out-Godelizes the lot.
Essentially this is the move from ω, the infinite sequence of Godelian sentences produced by the Godelizing operator, to ω + 1, the next transfinite ordinal. And so it goes on. Every now and again the mechanist loses patience, and incorporates in his machine a further operator, designed to produce in one fell swoop all the Godelian sentences the mentalist is trumping him with: this is in effect to produce a new limit ordinal. But such ordinals, although they have no predecessors, have successors just like any other ordinal, and the mind can out-Godel them by producing the Godelian sentence of the new version of the machine, and seeing it to be true, which the machine cannot.
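The dialectic can be caricatured in a few lines of code. In this toy model - entirely my own illustration, in which a "machine" is just a description of what it affirms and the "Godel sentence" merely labels the machine it diagonalizes out of, with none of the real proof theory behind it - every attempt to co-opt the mind's sentence produces a new machine with a fresh sentence of its own:

```python
# A caricature of the out-Godelizing contest: each time the mechanist
# adds the mind's Godelian sentence to his machine, he has thereby built
# a different machine, with a different Godelian sentence of its own.

def godel_sentence(machine: str) -> str:
    # Stand-in for the real construction: the "sentence" just records
    # which machine it diagonalizes out of.
    return f"G({machine})"

machine = "M0"
history = []
for _ in range(3):
    g = godel_sentence(machine)   # the mind produces the machine's G
    history.append(g)
    machine = machine + "+" + g   # the mechanist adds it: a new machine

for g in history:
    print(g)
# G(M0)
# G(M0+G(M0))
# G(M0+G(M0)+G(M0+G(M0)))
```

Every sentence in the history is new, which is the whole point: the contest never closes in the mechanist's favor, and the move to a Godelizing operator corresponds to jumping to the limit ordinal ω, where the same trick applies again.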

Hofstadter thinks there is a problem for the mentalist in view of a theorem of Church and Kleene on Formal Definitions of Transfinite Ordinals.[3] They showed that we cannot program a machine to produce names for all the ordinal numbers. Every now and again some new, creative step is called for, when we consider all the ordinal numbers hitherto named, and we need to encompass them all in a single set, which we can use to define a new sort of ordinal, transcending all previous ones. Hofstadter thinks that the mind might run out of steam, and fail to think up new ordinals as required, and so fail in the last resort to establish the mind's difference from some machine. But this is wrong on two counts. It begs the question in assuming that the mind is subject to the same limitations as the machine is. And it misconstrues the nature of the contest. All the difficulties are on the side of the mechanist trying to devise a machine that cannot be out-Godelized. It is the mechanist who resorts to limit ordinals, and who may have problems in devising new notations for them. The mind needs only to go on to the next one, which is always an easy, unproblematic step, and out-Godelize whatever is the mechanist's latest offering. Hofstadter's argument, as often, tells against the position he is angling for, and shows up a weakness of machines which there is no reason to suppose is shared by minds.

Hofstadter's assumption that the mind must be subject to the same limitations as a machine is shared by many mechanists, and is made plausible by a rhetorical question: "How does Lucas know that the mind can do this, that, or the other?" It is no good, they hold, that I should opine it or simply assert it: I must prove it. And if I prove it, then since the steps of my proof can be programmed into a machine, the machine can do it too.

What he must prove is that he personally can always make the improvement: it is not sufficient to believe it since belief is a matter of probability and Turing machines are not supposed to be capable of probability judgments. But no such proof is possible since, if it were given, it could be used for the design of a machine that could always do the improving.[4]
It is only because Godel gives an effective way of constructing the Godelian sentence that Lucas can feel confident that he can find the Achilles' heel of any machine. But then if Lucas can effectively stump any machine, then there must be a machine which does this too.[5] This is the basic dilemma confronting anti-mechanism: just when the constructions used in its arguments become effective enough to be sure of, (T) (viz. every humanly effective computation procedure can be simulated by a Turing machine) then implies that a machine can simulate them. In particular it implies that our very behavior of applying Godel's argument to arbitrary machines - in order to conclude that we cannot be modeled by a machine - can indeed be modeled by a machine. Hence any such conclusion must fail, or else we will have to conclude that certain machines cannot be modeled by any machine. In short, anti-mechanist arguments must either be ineffective, or else unable to show that their executor is not a machine.[6]
The core of this argument is an assumption that every informal argument must either be formalisable or else invalid. I had drawn a distinction between two senses of Godelian argument: one an argument according to an exact specification, which a machine could be programmed to carry out: the other a certain style of arguing, similar to Godel's original argument in inspiration, but not completely or precisely specified, and therefore not capable of being programmed into a machine. No doubt, we cannot prove to a hide-bound mechanist that we can go on. But we may come to a well-grounded confidence that we can, which will give us, and the erstwhile mechanist if he is reasonable and not hide-bound, good reason for rejecting mechanism.

Against this claim of the mentalist that he has got the hang of doing something which cannot be described in terms of a mechanical program: the mechanist says "Sez you" and will not believe him unless he produces a program showing how he would do it. It is like the argument between the realist and the phenomenalist. The realist claims that there exist entities not observed by anyone: the phenomenalist demands empirical evidence: if it is not forthcoming, he remains skeptical of the realist's claim: if it is, then the entity is not unobserved. In like manner the mechanist is skeptical of the mentalist's claim unless he produces a specification of how he would do what a machine cannot: if such a specification is not forthcoming, he remains skeptical: if it is, it serves as a basis for programming a machine to do it after all.

The mechanist position, like the phenomenalist, is invulnerable but unconvincing. I cannot prove to the mechanist that anything can be done other than what a machine can do, because he has restricted what he will accept as a proof to such an extent that only "machine-doable" deeds will be accounted doable at all. But not all mechanists are so limited. Many mechanists and many mentalists are rational agents wondering whether in the light of modern science and cybernetics mechanism is, or is not, true. They have not closed their minds by so redefining proof that none but mechanist conclusions can be established. They can recognize in themselves their having "got the hang" of something, even though no program can be written for giving a machine the hang of it. The parallel with the Sorites argument is helpful. Arguing against a finitist, who does not accept the principle of mathematical induction, I may see at the meta-level that if he has conceded F(0) and (Ax)(F(x) -> F(x+1)), then I can claim without fear of contradiction (Ax)F(x). I can be quite confident of this, although I have no finitist proof of it. All I can do, vis-a-vis the finitist, is to point out that if he were to deny my claim in any specific instance, I could refute him. True, a finitist could refute him too. But I have generalized in a way a finitist could not, so that although each particular refuting argument is finite, the claim is infinite. In a similar fashion each Godelian argument is effective and will convince even the mechanist that he is wrong: but the generalization from individual tactical refutations to a strategic claim does not have to be effective in the same sense, although it may be entirely rational for the mind to make the claim.

Godel's theorem is paradoxical: it purports to show that the Godelian sentence is unprovable but true. But if it shows that the Godelian sentence is true, surely it has proved it, so that it is provable after all. The paradox is resolved by distinguishing provability-in-the-formal-system from the informal provability given by Godel's reasoning. But informal reasoning can be formalized. We can go over Godel's reasoning step by step, and formalize it. If we do so we find that an essential assumption for his argument that the Godelian sentence is unprovable is that the formal system should be consistent. Else every sentence would be provable, and the Godelian sentence, instead of being unprovable and therefore true, could be provable and false. So what we obtain, if we formalize Godel's informal argumentation, is not a formal proof within Elementary Number Theory that the Godelian sentence, G, is true, but a formal proof within Elementary Number Theory

⊢ Cons(ENT) -> G

where Cons(ENT) is a sentence expressing the consistency of Elementary Number Theory. Only if we also had a proof in Elementary Number Theory yielding

⊢ Cons(ENT)

would we be able to infer by Modus Ponens

⊢ G

Since we know that

⊬ G

we infer also that

⊬ Cons(ENT)

This is Godel's second theorem. Many critics have appealed to it in order to fault the Godelian argument. Only if the machine's formal system is consistent, and we are in a position to assert its consistency, are we really able to maintain that the Godelian sentence is true. But we have no warrant for this. For all we know, the machine we are dealing with may be inconsistent, and even if it is consistent we are not entitled to claim that it is. And in default of such entitlement, all we have succeeded in proving is

⊢ Cons(ENT) -> G

and the machine can do that too.
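Set out schematically (with ⊢ abbreviating provability in Elementary Number Theory), the reasoning of the last few paragraphs runs:

```latex
\begin{align*}
&\vdash \mathrm{Cons}(\mathrm{ENT}) \rightarrow G
  && \text{formalized version of G\"odel's informal argument}\\
&\text{if } \vdash \mathrm{Cons}(\mathrm{ENT}), \text{ then } \vdash G
  && \text{by modus ponens}\\
&\nvdash G
  && \text{first incompleteness theorem, granted consistency}\\
&\text{hence } \nvdash \mathrm{Cons}(\mathrm{ENT})
  && \text{second incompleteness theorem}
\end{align*}
```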

These criticisms rest upon two substantial points: the consistency of the machine's system is assumed by the Godelian argument, and cannot always be established by a standard decision-procedure. The question "By what right does the mind assume that the machine is consistent?" is therefore pertinent. But the moves made by mechanists to deny the mind that knowledge are unconvincing. Paul Benacerraf suggests that the mechanist can escape the Godelian argument by not setting out his claim in detail. The mechanist offers a "Black Box", without specifying its program, and refusing to give away further details beyond the claim that the black box represents a mind. But such a position is both vacuous and untenable: vacuous because there is no content to mechanism unless some specification is given - if I am presented with a black box but told not to peek inside, then why should I think it contains a machine and not, say, a little black man? The mechanist's position is also untenable: for although the mechanist has refused to specify what machine it is that he claims to represent the mind, it is evident that the Godelian argument would work for any consistent machine, and that an inconsistent machine would be an implausible representation. The stratagem of playing with his cards very close to his chest in order to deny the mind the premises it needs is a confession of defeat.

Putnam contends that there is an illegitimate inference from the true premise

I can see that (Cons(ENT) -> G)

to the false conclusion

Cons(ENT) -> I can see that (G)

It is the latter that is needed to differentiate the mind from the machine, for what Godel's theorem shows is

Cons(ENT) -> the ENT machine cannot see that (G)

but it is only the former, according to Putnam, that I am entitled to assert.

Putnam's objection fails on account of the dialectical nature of the Godelian argument. The mind does not go round uttering theorems in the hope of tripping up any machines that may be around. Rather, there is a claim being seriously maintained by the mechanist that the mind can be represented by some machine. Before wasting time on the mechanist's claim, it is reasonable to ask him some questions about his machine to see whether his seriously maintained claim has serious backing. It is reasonable to ask him not only what the specification of the machine is, but whether it is consistent. Unless it is consistent, the claim will not get off the ground. If it is warranted to be consistent, then that gives the mind the premise it needs. The consistency of the machine is established not by the mathematical ability of the mind but on the word of the mechanist who has claimed that his machine is consistent. If so, it cannot prove the Godelian sentence, which the mind can none the less see to be true: if not, it is out of court anyhow.

Wang concedes that it is reasonable to contend that only consistent machines are serious candidates for representing the mind, but then objects that it is too stringent a requirement for the mechanist to meet, because there is no decision-procedure that will always tell us whether a formal system strong enough to include Elementary Number Theory is consistent or not. So either the mechanist must be superhuman, or we beg the very question whether the mind can solve an "unsolvable" problem.[7]

But the fact that there is no decision-procedure means only that we cannot always tell, not that we can never tell. Often we can tell that a formal system is not consistent - e.g. it proves as a theorem

p & ¬p

Also, we may be able to tell that a system is consistent. We have finitary consistency proofs for propositional calculus and first-order predicate calculus, and Gentzen's proof, involving transfinite induction, for Elementary Number Theory. So even if the mind were supposed to take on all challenges, even from inconsistent machines, it would often be able to discriminate between those that were to be ploughed for inconsistency and those that were to be failed for not being able to assert the Godelian sentence.

Still, it might be hoped that the mind could discriminate in all cases. All machines are entitled to enter for the mind-representation examination, and it is up to the mind to sort out the inconsistent sheep who fail their finals. This, however, is to demand more of the mind than the nature of the contest requires. There is no need to consider all possible machines. Only relatively few machines are plausible candidates for representing the mind, and there is no need to take a candidate seriously just because it is a machine. If the mechanist's claim is to be taken seriously, some recommendation will be required, and at the very least a warranty of consistency would be essential. Wang protests that this is to expect superhuman powers of him, and in a response to Benacerraf's "God, the Devil, and Godel", I picked up his suggestion that the mechanist might be no mere man but the Prince of Darkness himself, to whom the question of whether the machine was consistent or not could be addressed in expectation of an answer.[8] Rather than ask high-flown questions about the mind, we can ask the mechanist the single question whether or not the machine that is proposed as a representation of the mind would affirm the Godelian sentence of its system. If the mechanist says that his machine will affirm the Godelian sentence, the mind then will know that it is inconsistent and will affirm anything, quite unlike the mind, which is characteristically selective in its intellectual output.

If the mechanist says that his machine will not affirm the Godelian sentence, the mind then will know that, since there was at least one sentence the machine could not prove in its system, it must be consistent: and knowing that, the mind will know that the machine's Godelian sentence is true, and thus will differ from the machine in its intellectual output. If the mechanist does not know what answer the machine would give to the Godelian question, he has not done his homework properly, and should go away and try to find out before expecting us to take him seriously.

In asking the mechanist rather than the machine, we are making use of the fact that the issue is one of principle, not of practice. The mechanist is not putting forward actual machines which actually represent some human being's intellectual output, but is claiming instead that there could in principle be such a machine. He is inviting us to make an intellectual leap, extrapolating from various scientific theories and skating over many difficulties. He is quite entitled to do this. But having done this he is not entitled to be coy about his in-principle machine's intellectual capabilities or to refuse to answer embarrassing questions. The thought-experiment, once undertaken, must be thought through. And when it is thought through it is impaled on the horns of a dilemma. Either the machine can prove in its system the Godelian sentence or it cannot: if it can, it is inconsistent, and not equivalent to a mind: if it cannot, it is consistent, and the mind can therefore assert the Godelian sentence to be true. Either way the machine is not equivalent to the mind, and the mechanist thesis fails.

A number of thinkers have chosen to impale themselves on the inconsistency horn of the dilemma. We are machines, they say, but inconsistent ones. In view of our many contradictions, changes of mind and failures of logic, we have no warrant for supposing the mind to be consistent, and therefore no ground for disqualifying a machine for inconsistency as a candidate for being a representation of the mind. Hofstadter thinks it would be perfectly possible to have an artificial intelligence in which propositional reasoning emerges as consequences rather than as being preprogrammed. "And there is no particular reason to assume that the strict Propositional Calculus, with its rigid rules and the rather silly definition of consistency they entail, would emerge from such a program."[9]

None of these arguments goes any way to making an inconsistent machine a plausible representation of a mind. Admittedly the word 'consistent' is used in different senses, and the claim that a mind is consistent is likely to involve a different sense of consistency, and to be established by different sorts of arguments, from those in issue when a machine is said to be consistent. If this is enough to establish the difference between minds and machines, well and good. But many mechanists will not be so quickly persuaded, and will maintain that a machine can be programmed, in some such way as Hofstadter supposes, to emit mind-like behavior. In that case it is machine-like consistency rather than mind-like consistency that is in issue. Any machine, if it is to begin to represent the output of a mind, must be able to operate with symbols that can be plausibly interpreted as negation, conjunction, implication, etc., and so must be subject to the rules of some variant of the propositional calculus. Unless something rather like the propositional calculus, with some comparable requirement of consistency, emerges from the program of a machine, it will not be a plausible representation of a mind, no matter how good it is as a specimen of Artificial Intelligence. Of course, any plausible representation of a mind would have to manifest the behavior instanced by Wang, constantly checking whether a contradiction had been reached and attempting to revise its basic axioms when that happened. But this would have to be in accordance with certain rules. There would have to be a program giving precise instructions how the checking was to be undertaken, and in what order axioms were to be revised. Some axioms would need to be fairly immune to revision. Although some thinkers are prepared to envisage a logistic calculus in which the basic inferences of propositional calculus do not hold (e.g. from p & q to p) or the axioms of Elementary Number Theory have been rejected,[10] any machine which resorted to such a stratagem to avoid contradiction would also lose all credence as a representation of a mind. Although we sometimes contradict ourselves and change our minds, some parts of our conceptual structure are very stable, and immune to revision. Of course it is not an absolute immunity. One can allow the Cartesian possibility of conceptual revision without being guilty, as Hutton supposes, of inconsistency in claiming knowledge of one's own consistency.[11] To claim to know something is not to claim infallibility, but only to have adequate backing for what is asserted. Else all knowledge of contingent truths would be impossible. Although one cannot say 'I know it, although I may be wrong', it is perfectly permissible to say 'I know it, although I might conceivably be wrong.' So long as a man has good reasons, he can responsibly issue a warranty in the form of a statement that he knows, even though we can conceive of circumstances in which his claim would prove false and would have to be withdrawn. So it is with our claim to know the basic parts of our conceptual structure, such as the principles of reasoning embodied in the propositional calculus or the truths of ordinary informal arithmetic. We have adequate, more than adequate, reason for affirming our own consistency and the truth, and hence also the consistency, of informal arithmetic, and so can properly say that we know, and that any machine representation of the mind must manifest an output expressed by a formal (since it is a machine) system which is consistent and includes Elementary Number Theory (since it is supposed to represent the mind). But there remains the Cartesian possibility of our being wrong, and that we need now to discuss.

Some mechanists have conceded that a consistent machine could be out-Godeled by a mind, but have maintained that the machine representation of the mind is an inconsistent machine, but one whose inconsistency is so deep that it would take a long time ever to come to light. It would therefore avoid the quick death of non-selectivity. Although in principle it could be brought to affirm anything, in practice it will be selective, affirming some things and denying others. Only in the long run will it age, or mellow, as we kindly term it, and then "crash" and cease to deny anything; and in the long run we die, usually before suffering senile dementia. Such a suggestion chimes in with a line of reasoning which has been noticeable in Western Thought since the Eighteenth Century. Reason, it is held, suffers from certain antinomies, and by its own dialectic gives rise to internal contradictions which it is quite powerless to reconcile, and which must in the end bring the whole edifice crashing down in ruins. If the mind is really an inconsistent machine, then the philosophers in the Hegelian tradition who have spoken of the self-destructiveness of reason are simply those in whom the inconsistency has surfaced relatively rapidly. They are the ones who have understood the inherent inconsistency of reason, and who, negating negation, have abandoned hope of rational discourse, and having brought mind to the end of its tether, have had on offer only counsels of despair.
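The claim that an inconsistent machine "could in principle be brought to affirm anything" rests on the classical principle of explosion (ex falso quodlibet): from a single contradiction, any formula whatever can be derived by disjunction introduction followed by disjunctive syllogism. A minimal sketch (the function name and formatting are illustrative only) that mechanically lays out that derivation:

```python
def derive_anything(p, target):
    """Lay out the classical explosion derivation: from p and ~p,
    derive an arbitrary target formula."""
    return [
        f"1. {p}              (affirmed)",
        f"2. ~{p}             (also affirmed: the contradiction)",
        f"3. {p} v {target}   (from 1, by disjunction introduction)",
        f"4. {target}         (from 2 and 3, by disjunctive syllogism)",
    ]

for line in derive_anything("p", "q"):
    print(line)
```

This is why, as the text notes, an inconsistent machine escapes triviality only in practice, by the contradiction lying too deep to be exploited, never in principle.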

Against this position the Godelian argument can avail us nothing. Quite other arguments and other attitudes are required as antidotes to nihilism, and the Godelian argument can be seen as a reductio ad absurdum of the mechanist position. And a reductio it is. For mechanism claims to be a rational position. It rests its case on the advances of science, the underlying assumptions of scientific thinking and the actual achievements of scientific research. Although other people may be led to nihilism by feelings of angst or other intimations of nothingness, the mechanist must advance arguments or abandon his advocacy altogether. On the face of it we are not machines. Arguments may be adduced to show that appearances are deceptive, and that really we are machines, but arguments presuppose rationality, and if, thanks to the Godelian argument, the only tenable form of mechanism is that we are inconsistent machines, with all minds being ultimately inconsistent, then mechanism itself is committed to the irrationality of argument, and no rational case for it can be sustained.

  1. They can be found in Hilary Putnam, "Minds and Machines", in Sidney Hook, ed., Dimensions of Mind: A Symposium, New York, 1960; reprinted in A. R. Anderson, ed., Minds and Machines, Prentice-Hall, 1964, pp. 72-97.
  2. Hao Wang, From Mathematics to Philosophy, London, 1974, pp. 324-326.
  3. Douglas R. Hofstadter, Godel, Escher, Bach, New York, 1979, p. 475.
  4. I. J. Good, "Godel's Theorem is a Red Herring", British Journal for the Philosophy of Science, 1968, pp. 357-8.
  5. Judson C. Webb, Mechanism, Mentalism and Metamathematics: An Essay on Finitism, Dordrecht, 1980, p. 230.
  6. Webb, 1980, p. 232. Webb's italics.
  7. Wang, 1974, p. 317.
  8. Benacerraf, 1967, pp. 22-23; J. R. Lucas, "Satan Stultified", pp. 152-3.
  9. Hofstadter, 1979, p. 578; cf. Chihara, 1972.