Interestingly, we face a very similar situation in the scientific community today with the advent and maturing of the body of knowledge called artificial intelligence (AI). There is currently unprecedented interest in AI; researchers are making wonderful claims for this newfound technology while government agencies and business and industrial organizations are shelling out millions of dollars to acquire a piece of the action. Don't misunderstand: there are many, many potential benefits to be gained from AI in medical diagnostics, in manufacturing, in mineral exploration, in communications, in space exploration--in practically every arena of human endeavor.
However, there is a dimension of AI research that has potentially far-reaching implications for the lives of every one of us. The area to which I refer is, in fact, the theme of the papers in this volume--whether computer intelligence can, in principle, do all that human intelligence does. Is the human mind more than a complex computer? Consider, in this context, just a few of the statements that have been made by AI researchers:
In assessing the impact of such comments, Sowa [6, p. 358] states that they "... may have a dramatic effect, but they lead to confusion especially for novices and people outside the AI field." He goes on to suggest that "they have a mind-numbing effect on experts within the field." To illustrate how true this is, one of the participants at the Yale conference disclosed during his talk that he had developed a robot that acts as though it experiences fear. Before the sessions ended for the day, the halls were buzzing with graduate students marveling at Prof. So-and-So's robot which experiences the emotion of fear. There is a vast difference between a human experiencing fear and a machine responding to external stimuli in a manner that duplicates the human response--a small detail that was overlooked by the students in their enthusiasm.
It requires no great intelligence to realize the profound implications that statements such as these hold for the value of human life. In his interesting book Into the Heart of the Mind, Frank Rose has a chapter entitled "Should Robots Have Civil Rights?" [4]. The narrative takes an unusual twist about halfway through the chapter when the question "Should humans have civil rights?" is raised. The basic theme of the chapter is the influence of artificial intelligence upon society, but much of the discussion concerns problems which will be created by humanlike machines.
What can be done to ensure that reason and truth prevail in the unfolding inquiry into the relationship between human intelligence and machine intelligence? How can we assure that our society and our world will be shaped by true truth and not by speculation? The Yale conference was an excellent first step. In fact, Letovsky, in describing the conference for the AI Magazine, remarked that it "reminded me of the debates in England after Darwin's theory first came out" [3, p. 66]. Let the debate continue!