Darwinism: Science or Philosophy

Chapter 7
The Incompleteness of Scientific Naturalism

William A. Dembski


FIRST LET ME EXPRESS my thanks to the organizers of this symposium for the opportunity to present certain ideas that for some time now have exercised me. The occasion for this symposium is Phillip Johnson's book Darwin on Trial. The title would suggest that Johnson's main concern is with Darwinism and neo-Darwinism proper. Nevertheless, I would claim that Johnson's book is as much about a philosophical world view used to prop up Darwinism as it is about Darwinism. Atheism, materialism, scientism, and secular humanism are a few of the names attached to this world view. Yet the name I like best and find most descriptive is scientific naturalism.

I want here to examine scientific naturalism. I am going to argue that this view has a serious defect: it is incomplete. As a consequence of this defect I shall argue that it is legitimate within scientific discourse to entertain questions about supernatural design. The backdrop for this discussion will comprise two areas in mathematics: computational complexity theory and probability theory.

First let's be clear what we mean by scientific naturalism. The key ingredient in scientific naturalism is, let me say it, naturalism. Naturalism as a world view has two components: (1) It is a metaphysical doctrine about what things exist in the world. These include material objects and sometimes (as for the philosopher Willard Quine) mathematical objects such as sets. Excluded are supernatural beings, nonmaterial interventions, divine meddlings, etc. (2) Naturalism includes an epistemological doctrine about how the things permitted under this metaphysical doctrine are to be explained; i.e., they are to be explained naturalistically. I am not sure that naturalistic explanation is a perfectly clear notion, but what is clear is that naturalistic explanation excludes any sort of appeal to nonmaterial intervention, divine meddling, etc.

Where does the 'scientific' in scientific naturalism come in? As a world view, scientific naturalism regards itself as continuous with science. It therefore looks to our scientific understanding of the world for its justification. This last point distinguishes scientific naturalism from naturalism simpliciter. It is also this last point that is responsible for scientific naturalism being incomplete.

To see what is at stake let me quote the last line of Edwin Hubble's The Realm of the Nebulae: "Not until the empirical resources are exhausted need we pass on to the dreamy realms of speculation." When Hubble wrote that line in the 1930s, he clearly believed that the empirical resources would not be exhausted and that our entrance into the dreamy realms of speculation could be postponed indefinitely.

Against this I would argue that empirical resources come in limited supplies and do get exhausted. Moreover, as soon as empirical resources are exhausted, naturalism can no longer fund its justification in science. This then is the incompleteness of scientific naturalism, namely, the incapacity of science to justify naturalism once the empirical resources wherewith science limits itself get exhausted.

Next I want to focus on two empirical resources, one computational, the other probabilistic. I want to show how even the possibility of these resources being exhausted undermines the completeness of scientific naturalism: the pretension, as far as I'm concerned, that a complete understanding of the world is possible apart from God. Since this talk is addressed primarily to non-mathematicians, I'll begin by considering the words of a well-known American philosopher, Woody Allen.

Woody Allen probably didn't think that God would take him seriously when he quipped,

If only God would give me some clear sign! Like making a large deposit in my name at a Swiss bank.{1}
But what if God had taken Allen seriously? Would an unexpected $7,000,000, say, in Allen's Swiss bank account have convinced him that God was real? Suppose that a thorough examination of the bank records failed to explain how the money appeared in Allen's account. Should Allen have inferred that God had given him a sign?

Since I can't answer for Allen, let me answer for myself. If I were a famous personality having uttered Allen's remark and subsequently found an additional $7,000,000 in my Swiss bank account, I would certainly not have attributed my unexpected good fortune to the largesse of an eccentric deity. It's not that I don't believe in God. I do. But my theology constrains me to think it unworthy of God to grant flippant requests like Allen's and then apparently ignore the urgent requests of so many suffering people in the world.

I would refuse to acknowledge a miracle for theological reasons. Barring theological reasons, however, I would still refuse to acknowledge a miracle. Why? Well, other explanations readily come to mind. If I had uttered the remark and were as famous as Allen, and if $7,000,000 had appeared in my account, I would probably have concluded that some eccentric billionaire with a religious agenda was trying to convert me to his cause. The strange appearance of the $7,000,000 would have been fiendishly designed to make me believe in God. But alas, I was too clever for them.

There is a point to these musings. Allen's remark is clearly funny; however, if taken seriously it is self-defeating. If God were in fact to do what Allen requested, Allen and just about anyone else would remain unconvinced. The question therefore arises whether God can do anything, either in response to a request like Allen's or otherwise, which would provide convincing proof that he and no one else had acted.

Let's put it this way: is there anything that has, could, or might happen in the world from which it would be reasonable to conclude that God had acted? Are there or could there be any facts in the world for which an appeal to God is the best explanation? Or to reverse the question, is God always an easy way out, a lame excuse, a prescientific device that invariably misses the best explanation?{2}

We are asking a transcendental question in the Kantian sense: What are the conditions for the possibility of discovering design (i.e., supernatural intervention, nonmaterial interference, divine meddling, call it what you will) in the actual world? This question must be answered at the outset, for if this world is the type of place where anything that happens can, even in principle, be adequately explained apart from teleology and design, then it makes no sense to look for design in what actually happens. Might the world do something, however quirky, that would convince us of design?

An illustration might help. Imagine a peculiar art studio comprised of ten-inch by ten-inch canvases, a full range of oil paints, and a robot that paints the canvases with the paints. In painting the canvases, the robot divides each canvas into a ten by ten grid of one-inch squares, and paints each square with precisely one color. Imagine that this robot also has visual sensors and thus can paint scenes presented to its visual field, though only crudely, given the coarse-grained approach it adopts to painting.

Imagine next that Elvis and an Elvis impersonator come to have their portraits painted by this robot. Will the portraits distinguish Elvis from his impersonator? Because the representations on canvas are so crude, if the impersonator is worth his salt, the two portraits will be indistinguishable. Our imaginary art studio cannot distinguish the real Elvis from the fake Elvis.

This example indicates what is at stake in determining whether design has at least the possibility of being detected and empirically grounded. Putative instances of design abound. But is it possible within this world to distinguish authentic from spurious design should instances of authentic design even exist? Or is this world like the art studio? Just as the portraits painted at the studio cannot distinguish the real from the fake Elvis, so too is it impossible for our empirical investigations of the world to distinguish authentic from spurious design?

Scientific naturalism prefers to think just this, namely, that the world is the kind of place where all objective phenomena can be explained by purely naturalistic factors. Non-naturalistic factors therefore become not only redundant but also illegitimate in explanation. As George Gaylord Simpson put it,

There is neither need nor excuse for postulation of nonmaterial intervention in the origin of life, the rise of man, or any other part of the long history of the material cosmos.{3}
Simpson claims that the world is the kind of place where no objective, empirical finding can ever legitimately lead us to postulate design (what he calls "nonmaterial intervention").

That is a bold claim. The question remains whether it is true. In the case of the art studio, it is true that robot portraits of Elvis and his impersonator will fail to distinguish the two. The paintings produced by the studio are simply too coarse grained to do any better. From these paintings there is, to use Simpson's phrase, "neither need nor excuse for postulation of" two Elvises, the real and the fake. From the portraits alone we might legitimately infer only one sitter. But is the world so coarse grained that it cannot even in principle produce events that would evidence design? That is what Simpson seems to be affirming. A little reflection, however, indicates that this claim cannot be right.

We consider a thought experiment, one I call "The Incredible Talking Pulsar." Imagine that astronomers have discovered a pulsar some three billion light years from the earth. The pulsar is, say, a rotating neutron star that emits regular pulses of electromagnetic radiation in the radio frequency range. The astronomers who found the star are at first unimpressed by their discovery. It's only another star to catalogue. One of the astronomers, however, is a ham radio operator. Looking over the pattern of pulses one day, he finds that they are in Morse code. Still more surprisingly, he finds that the pattern of pulses signals English messages in Morse code.{4}

Word quickly spreads within the scientific community, and from there to the world at large. Radio observatories around the globe start monitoring the "talking" pulsar. The pulsar isn't just transmitting random English messages, but is instead intelligently communicating with the inhabitants of earth. In fact, once the pulsar has gained our attention, it identifies itself. The pulsar informs us that it is the mouthpiece of Yahweh, the God of both the Old and the New Testaments, the creator of the universe, the final judge of humankind.

Pretty heady stuff you say. But to confirm this otherwise extravagant claim, the pulsar agrees to answer any questions we might put to it. The pulsar specifies the following method of posing and answering questions. The descendants of Levi are to make an ark like the one originally constructed under Moses (see Exodus 25). This ark is to be placed on Mount Zion in Israel. Every hour on the hour a question written in English is to be placed inside the ark. Ten minutes later the pattern of pulses reaching earth from the pulsar will answer that question, the answer being framed as an English message in Morse code.{5}

The information transmitted through the pulsar proves to be nothing short of fantastic. Medical doctors learn how to cure AIDS, cancer, and a host of other diseases. Archaeologists learn where to dig for lost civilizations and how to make sense out of them. Physicists get their long-sought-after unification of the forces of nature. Meteorologists are forewarned of natural disasters and weather patterns years before they occur. Ecologists learn effective methods for cleansing and preserving the earth. Mathematicians obtain proofs to many long-standing open problems: in some cases proofs they can check, but proofs they could never have produced on their own. The list of credits could be continued, but let us stop here.

What shall we make of the pulsar? Whether or not the pulsar is in fact the mouthpiece of Yahweh, it creates serious difficulties for scientific naturalism. Not only is there no way to square the pulsar's behavior with our current scientific understanding of the world, but it is hard to conceive how any naturalistic explanation will ever account for the pulsar's behavior. For instance, our current scientific understanding based on Einsteinian special relativity tells us that messages cannot be relayed at superluminal speeds. Since the pulsar is three billion light years from the earth, any signal we receive from the pulsar was sent billions of years ago. Yet the pulsar is "responding" to our questions within ten minutes of the written questions being placed inside the ark. The pulsar's answers therefore seem to precede our questions by billions of years.

To get around this, scientific naturalists might want to postulate reverse causality or superluminal signaling. Naturalists might find this idea more congenial than postulating "nonmaterial intervention," but reverse causality and superluminal signaling do not even begin to address the questions raised by the pulsar. It is inescapable that in dealing with the pulsar we are dealing not just with an intelligence, but with a super-intelligence. Now by a super-intelligence I don't mean an intelligence that at this time surpasses human capability, but which in time humans can hope to attain. Nor do I mean a super-human intelligence that might nevertheless be realized in some finite rational material agent embedded in the world (say an extraterrestrial intelligence or a conscious super-computer). By a super-intelligence I mean a supernatural intelligence, i.e., an intelligence surpassing anything that physical processes are capable of offering. This intelligence exceeds anything that humans or finite rational agents in the universe are capable of even in principle.

How can we see that the pulsar instantiates a super-intelligence? The place to look is computer science. There are problems in computer science that can be proven mathematically to require more computational resources for their solution than are available in the universe. Think of it this way. There are estimated to be no more than 10^80 elementary particles in the universe. The properties of matter are such that circuits cannot be switched faster than 10^45 times per second.{6} The universe itself is about a billion times younger than 10^25 seconds (assuming that the universe is at least a billion years old). Given these upper bounds we can confidently assert that no computation exceeding 10^80 x 10^45 x 10^25 = 10^150 elementary steps is possible within the universe. By an elementary step I mean the switching of a two-state device, conceived abstractly as the switching of a binary integer (= bit). For a computation of this complexity therefore to be carried out in the universe, every available elementary particle in the universe would have to serve as an elementary storage device (= memory bit) capable of switching at 10^45 hertz over a period of a billion billion years.
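The arithmetic behind this bound is elementary and can be checked directly. Here is a minimal sketch in Python; the three figures are the upper bounds just stated in the text, not measured values:

```python
# Back-of-the-envelope bound on the universe's computational capacity,
# using the three upper bounds stated in the text.
particles = 10**80          # upper bound on elementary particles in the universe
switches_per_sec = 10**45   # fastest possible switching rate of any circuit
seconds = 10**25            # generous upper bound on available time

max_elementary_steps = particles * switches_per_sec * seconds
print(max_elementary_steps == 10**150)   # True: the bound is 10^150
```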

10^150 is incredibly generous as an upper bound on the complexity of computations possible in the universe. Here are a few reasons why a much smaller bound will do: (1) Quantum mechanical considerations indicate that reliable memory storage is unworkable below the atomic level,{7} since at this level quantum indeterminacy will make not only storage, but also reading and writing of information impossible. Hence each elementary storage device will have to consist of more than one elementary particle. (2) The preceding calculation treats the universe as a giant piece of random access memory that is controlled by a processor outside the universe operating at 10^45 hertz with instant access to any memory location in RAM. In fact, the processor will itself have to take up part of the universe. Moreover, its access to memory locations will in some cases have to be measured in light years and not in 10^-45 second chunks. Even with massively parallel processing, computation speeds will fall far below the 10^45 hertz upper bound. (3) Finally, the bound of 10^25 seconds for the maximum running time of a computation is excessive, since the heat death of the universe will probably have occurred by then. Suffice it to say, even with the entire universe functioning as a computer, no computation requiring 10^150 elementary steps, much less 10^150 floating point operations, is feasible.

Now it is possible to pose problems in computer science for which the quickest solution requires well beyond this number of steps, yet for which, with a solution in hand, it is possible even for humans using ordinary electronic computers to check whether the solution is correct. Factoring integers into primes is thought to be one such problem. Since the factorization problem is easy to understand, let me treat it as though it were one of the "provably hard problems." If at some time in the future a "quick" algorithm is found for factoring numbers, we shall need to modify this example; nevertheless, our contention that there are problems whose solution is beyond the computational resources of the universe, yet verifiable by humans, will still hold.{8}

What is the factorization into primes of 1961? Solving this requires a bit of work. But if you are given the prime numbers 37 and 53, it is a simple matter to check whether these are prime factors of 1961. In fact 37 x 53 = 1961. Factoring is hard, multiplication is easy. We can therefore go to our pulsar with numbers thousands of digits long and ask it to factor them. Factoring numbers that long is totally beyond our present capabilities and in all likelihood exceeds the computational limits inherent in the universe by many, many orders of magnitude. (When I was following the literature on factoring a few years back, numbers beyond two hundred digits in length could not be factored unless they had either small or special prime factors.) Nevertheless, it is easy enough to check whether the pulsar is getting the factorizations right, even for numbers thousands of digits in length.{9}
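The asymmetry between factoring and multiplying can be made concrete. The sketch below uses naive trial division, which is adequate for a small number like 1961 but hopeless for numbers thousands of digits long, and then verifies a proposed factorization with a single multiplication:

```python
def trial_factor(n):
    """Return the prime factorization of n by naive trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

# Finding the factors takes a search...
print(trial_factor(1961))       # [37, 53]

# ...but checking a proposed factorization is a single multiplication:
print(37 * 53 == 1961)          # True
```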

What lesson can we learn from the pulsar? I claim we should infer that a designer in the full sense of the word is communicating through the pulsar, i.e., a designer who is both intelligent and transcendent. Intelligence is certainly not a problem here. Alan Turing's famous test for intelligence pitted computer against human in a contest where a human judge was to decide which was the computer and which was the human.{10} If the human judge could not distinguish the computer from the human, Turing wanted intelligence attributed to the computer.

This operationalist approach to intelligence has since been questioned, by theists on one end and hard-core physicalists on the other. But the basic idea that there is no better test for intelligence than coherent natural language communication remains intact. If we cannot legitimately attribute intelligence to the pulsar, then no attribution of intelligence should count as legitimate. Transcendence is clear as well, given our discussion of intractable computational problems. Suffice it to say, a being that solves problems beyond the computational resources of the material world is not material. When we can confirm that such problems have in fact been solved for us, we cannot avoid postulating "nonmaterial intervention."

The pulsar demonstrates that ours is the type of world where design has at least the possibility of becoming perfectly evident: with the pulsar, empirical validation for design can be made as good as we like. In the actual world, design is therefore not only possible but also empirically knowable. I have belabored this point because it is a point scientific naturalism would rather not grant. Once, however, it is granted that the occurrence of certain events might require us to postulate "nonmaterial intervention," we need to consider whether any events that have actually occurred require us to postulate such intervention. It is obvious that the pulsar is an exercise in overkill. No instance of design so crushingly obvious is known. Science fiction has therefore done its work for us. It is time to put science fiction to rest, and look at what solid evidence there is for design in the actual world. We therefore leave computational resources and turn to probabilistic resources.

I use the term probabilistic resources to describe what I call replicational resources on the one hand, and specificational resources on the other. To appreciate what is at stake with these resources let us consider two examples. The first illustrates replicational resources, the second specificational resources.

Here is the first example. Imagine that a massive revision of the criminal justice system has taken place. Henceforward a convicted criminal is sentenced to serve time in prison until he flips n heads in a row, where n is selected according to the severity of the offense (we assume that all coin flips are fair and are duly recorded; no cheating is possible). Thus for a ten-year prison sentence, if we assume the prisoner can flip a coin once every five seconds (this seems reasonable), eight hours a day, six days a week, and given that an attempt at getting a streak of heads before tails lasts on average 2 flips (= the sum over i >= 1 of i x 2^-i), then he will on average attempt to get a string of n heads once every 10 seconds, or 6 attempts a minute, or 360 attempts an hour, or 2,880 attempts in an eight-hour work day, or 901,440 attempts a year (assuming a six-day work week), or approximately 9 million attempts in ten years. Nine million is approximately 2^23. Thus if we required of a prisoner that he flip 23 heads in a row before being released, we could expect to see him out in approximately ten years. Of course specific instances will vary: some prisoners will be released after only a short stay, others will never record the elusive 23 heads!

Now consider the average prisoner's reaction after about ten years when he finally flips 23 heads in a row. Is he shocked? Does he think a miracle has occurred? Absolutely not. Given his replicational resources, i.e., the number of opportunities he had for observing 23 heads in a row, he could expect to get out of prison in about ten years. There is in fact nothing improbable about his getting out of prison in this span of time. It is improbable that on any given occasion he will flip 23 heads in a row. But when all these occasions are considered jointly, it becomes quite probable that he will be out of prison within the ten years' time. The prisoner's replicational resources comprise the number of occasions he has to produce 23 heads in a row. If his life expectancy is better than ten years, he has a good chance of getting out of prison. In short, replicational resources are adequate for getting out of prison.
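For readers who want to verify the prisoner arithmetic, here is the calculation spelled out. The assumptions are the ones stated above: one flip every five seconds, two flips per attempt on average, eight-hour days, and a six-day work week (taken here as 313 working days per year, which reproduces the text's figure of 901,440 attempts a year):

```python
# One attempt at a streak averages 2 flips at 5 seconds each = 10 seconds.
attempts_per_minute = 60 // 10
attempts_per_hour = attempts_per_minute * 60
attempts_per_day = attempts_per_hour * 8       # eight-hour work day
attempts_per_year = attempts_per_day * 313     # six-day work week
attempts_per_decade = attempts_per_year * 10

print(attempts_per_year)     # 901440
print(attempts_per_decade)   # 9014400, roughly 9 million
print(2**23)                 # 8388608 -- so 23 heads in a row is expected
                             # within roughly a ten-year sentence
```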

If, however, the number of heads a prisoner must flip in a row is exorbitant, then his replicational resources will be inadequate for getting out of prison. Consider a prisoner sentenced to flip 100 heads in a row. The probability of getting 100 heads in a row on a given trial is so small that he has no practical hope of getting out of prison, even if his life expectancy were dramatically increased. If he could, for instance, make 10 billion attempts each year to obtain 100 heads in a row, he would stand only an even chance of getting out of jail in 10^20 years. His replicational resources are so inadequate for obtaining the desired 100 heads that it's pointless to entertain hopes of freedom.{11}
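The 10^20-year figure can likewise be checked. The sketch below solves for the number of attempts needed for an even chance of at least one success, then converts to years under the stated assumption of 10 billion attempts per year:

```python
import math

# Probability of 100 heads in a row on a single attempt.
p = 2.0**-100                                   # about 7.9e-31

# Even chance of at least one success after n attempts: (1 - p)^n = 1/2.
# Solving for n (log1p keeps precision for tiny p):
attempts_for_even_chance = math.log(0.5) / math.log1p(-p)

years = attempts_for_even_chance / 1e10         # 10 billion attempts per year
print(f"{years:.1e} years")                     # on the order of 10^20 years
```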

With replicational resources the question is how many opportunities exist for observing a specific event (in the preceding example the event was flipping n heads in a row). With specificational resources the question is how many opportunities there are for specifying an as yet undetermined event. Lotteries provide the perfect vehicle for illustrating specificational resources. Indeed, each lottery ticket is a specification. To illustrate specificational resources, consider now the following lottery to end all lotteries: In the interest of eliminating the national deficit, the federal government agrees to hold a national lottery in which the grand prize is to be dictator of the United States for a single day; i.e., for twenty-four hours the winner will have full power over every aspect of government. If a white supremacist wins, he can order the wholesale execution of nonwhites. If a porn king wins, he can order this country turned into a giant debauch. If a pacifist wins, he can order the destruction of all our weapons.... The more moderate elements of the society will clearly want to prevent the loony fringe from winning, and will therefore be inclined to invest heavily in this lottery.

This natural inclination, however, is tempered by the following consideration: the probability of any one ticket winning is 1 in 2^100, or approximately 1 in 10^30. To buy a ticket, the lottery player pays a fixed price and then records a 0-1 string of length 100, whichever string he chooses. He is permitted to purchase as many tickets as he wishes, subject only to his financial resources and the time it takes to record the 0-1 strings of length 100. The lottery is to be drawn at a special meeting of the United States Senate: by alphabetical order each senator is to flip a coin once and record the resulting coin toss.

Suppose now that the fateful day has arrived. A trillion tickets have been sold at ten dollars apiece. To prevent cheating, Congress has enlisted the services of the National Academy of Sciences. Following the NAS's recommendation, each ticket holder's name is duly entered onto a secure data base, together with the tickets purchased and the ticket numbers (i.e., the bit strings relevant to deciding the winner). All this information is now in place. After much fanfare the senators start flipping their coins. As soon as Senator Zygmund has announced his toss, the data base is consulted to determine whether the lottery had a winner. Lo and behold, the lottery did indeed have a winner: Joe "Killdozer" Skinhead, leader of the White Trash Nation. Joe's first act as dictator is to raise a swastika over the Capitol.

From a probabilist's perspective there is one overriding implausibility to this example. The implausibility rests not with the federal government's sponsoring a lottery to eliminate the national debt, nor with the fascistic prize of being dictator for a day, nor with the way the lottery is decided at a special meeting of the Senate, nor even with the fantastically poor odds of winning the lottery. The implausibility rests with the lottery's having a winner. Indeed, as a probabilist myself, I would encourage the federal government to institute such a lottery if it could redress the national debt, for I am convinced that if the lottery is run fairly, there will be no winner. The odds are simply too much against it.

Suppose, for instance, that a trillion tickets are sold at ten dollars apiece (this would cover the deficit as it stands in 1992). What is the probability that one of those tickets (= specifications) will match the winning string of 0's and 1's drawn by the Senate? An elementary calculation shows that this probability can be no greater than 1 in 10^18. This is a tiny probability. Even if we increase the number of lottery tickets sold by several orders of magnitude, there still won't be enough sold for the lottery to stand a reasonable chance of having a winner. Since lottery tickets are specifications, this is equivalent to saying there aren't enough specifications to specify the event in question (i.e., the winning of the lottery).

Often it is necessary to consider replicational and specificational resources in tandem. Suppose for instance in the preceding lottery that the Senate will hold up to a thousand drawings to determine a winner. Assume as before that a trillion tickets have been sold. It follows that for this probabilistic setup the specificational resources include a trillion specifications and the replicational resources include a thousand possible repetitions. An elementary calculation now shows that the probability of this modified lottery having a winner is no greater than 1 in 10^15. That too is a tiny probability. The joint replicational and specificational resources are so inadequate that it remains exceedingly unlikely this lottery will have a winner.
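Both lottery bounds follow from the union bound: the probability that some ticket wins is at most the number of tickets (times, in the modified lottery, the number of drawings) multiplied by the probability that a single ticket matches a single drawing. A short check:

```python
from fractions import Fraction

# Probability that one 100-bit ticket matches one drawn 100-bit string.
p_match = Fraction(1, 2**100)

tickets = 10**12        # a trillion tickets sold
drawings = 1000         # repetitions in the modified lottery

p_single = tickets * p_match              # union bound, one drawing
p_multi = drawings * tickets * p_match    # union bound, a thousand drawings

print(float(p_single))   # about 7.9e-19, i.e. less than 1 in 10^18
print(float(p_multi))    # about 7.9e-16, i.e. less than 1 in 10^15
```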

In times past it used to be much easier to "inflate" probabilistic resources than it is now. The question whether the universe is finite or infinite used to be a philosophical, not an empirical question. Thomas Aquinas claimed it was only by revelation that we could know that the universe was finite. Reason, according to him, left open the possibility of an infinite universe. Spinoza's philosophical system required an infinite universe, but again on metaphysical, not empirical, grounds. Hume himself appreciated the benefits that accrue to scientific naturalism when a universe of infinite duration is presupposed:

A finite number of particles is only susceptible of finite transpositions: and it must happen, in an eternal duration, that every possible order or position must be tried an infinite number of times. This world, therefore, with all its events, even the most minute, has before been produced and destroyed, and will again be produced and destroyed without any bounds and limitations. No one, who has a conception of the power of the infinite, in comparison of the finite, will ever scruple this determination.{12}
In his younger days Einstein had been committed to Spinoza's God. Spinoza had identified God with Nature and assumed that this God was infinite in extent and duration. Consistent with Spinoza's conception, Einstein formulated his field equations to model such an infinite universe. Now "when in 1927 the Abbé Lemaître derived from Einstein's cosmological equations the expansion of the universe and correlated that rate with data on galactic red-shifts already available,"{13} the spatio-temporal extent of the universe became an empirical question. The "data on galactic red-shifts already available" was that of Hubble and Humason. When in the early 1930s Einstein visited Hubble in California and inspected this data, Einstein came away convinced that the universe was indeed finite.{14} The inflationary universe of Alan Guth and his successors, much like the steady state theory of the 1950s, attempts to recapture Spinoza's lost infinity. In my view, these theories arise solely out of a need to preserve scientific naturalism, in this case by increasing probabilistic resources and thereby rendering appeals to chance plausible.

What event exhausts the probabilistic resources inherent in the universe? The origin of life does so quite nicely. Anyone who grapples with the improbabilities inherent in life's origin is quickly confounded. Indeed, the improbabilities are truly staggering. Fred Hoyle, for instance, computes that a single cell might on the basis of chance be expected every 10^40,000 years if the entire universe were filled with a prebiotic liquid (an assumption that is incredibly generous).{15} Bernd-Olaf Küppers, a pupil of Manfred Eigen, commenting on merely a certain subunit of a virus, writes:

The RNA sequence that codes for the virus-specific subunit of the replicase complex consists of approximately a thousand nucleotides, . . . so that it already possesses n = 4^1000 (approximately 10^600) alternative sequences .... The spontaneous synthesis [of this system] . . . is therefore extremely improbable.{16}

He concludes that probability theory "does not bring us a single step further as regards the statistical aspect of the origin of life."{17} Lecomte du Noüy found similarly wild improbabilities back in the 1940s.{18} Hubert Yockey and Michael Behe continue to compute them today.{19}
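Küppers's figure is easy to confirm: a sequence of a thousand positions over the four nucleotide bases admits 4^1000 alternatives, and counting the digits of that number shows it is about 10^602, consistent with the quoted "approximately 10^600":

```python
# Number of alternative sequences for a 1000-nucleotide RNA strand
# over the 4-letter alphabet {A, C, G, U}.
n_sequences = 4**1000

# The decimal exponent is one less than the number of digits.
exponent = len(str(n_sequences)) - 1
print(exponent)   # 602, so 4^1000 is about 10^602
```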

Is this exhausting of probabilistic resources any reason to postulate nonmaterial intervention, to invoke a supernatural designer, or to believe in God? I have tried throughout this discussion to be cautious. My sights have ever been set on scientific naturalism. My aim has been to show that scientific naturalism is incomplete. I have sketched the beginnings of such an argument, that science cannot adequately support naturalism and that nature does things to exhaust the empirical resources determined by science. One can now try to retain naturalism by introducing a metaphysical hypothesis that postulates a lot more naturalistic stuff than science can sanction.

On the other hand, one can dispense with naturalism and introduce an entirely different sort of metaphysical hypothesis-God. These two choices do not exhaust all possibilities, but they are by far the most common.

Which is to be preferred? Since my aim has not been to pitch metaphysical hypotheses, but to show that one of these metaphysical hypotheses, naturalism, cannot be redeemed in the coin of science, I shall not argue this question here. Nevertheless, it must be emphasized that science regularly has its empirical resources exhausted. Moreover, when its empirical resources are exhausted, science cannot plead momentary ignorance which it hopes some day to redress. When its empirical resources are exhausted, science is in no position to distribute promissory notes. When its empirical resources are exhausted, science itself closes the door to naturalistic explanation.

The door therefore remains wide open to a scientifically defensible account of intelligent design.{20}


{1} Quoted in Peter's Quotations, s.v. "Doubt."

{2} Richard Dawkins certainly thinks so. Consider his comment on the origin of the DNA/protein machine: '[To invoke] a supernatural Designer is to explain precisely nothing, for it leaves unexplained the origin of the Designer. You have to say something like 'God was always there', and if you allow yourself that kind of lazy way out, you might as well just say 'DNA was always there', or 'Life was always there', and be done with it" [Dawkins 1987:141].

{3} Quoted in Johnson [1991:114].

{4} I owe the idea of a talking pulsar to Charles Chastain. The pulsar is an oracle. Here I am using oracles to investigate the possibility of design. Oracles, however, illuminate a host of philosophical questions. I have, for instance, used oracles to investigate the mind-body problem. See Dembski [1990:203-205].

{5} Perhaps to make this story more convincing, both the questions and the answers should be in Hebrew. I'm not sure, however, what Hebrew looks like in Morse code, so I'll stick with English.

{6} This universal bound on computational speed is based on the Planck time, currently the smallest physically meaningful unit of time. Universal time bounds for electronic computers involve clock speeds between ten and twenty orders of magnitude slower. See Wegener [1987:2].

{7} Even at the atomic level quantum effects make reliable storage unworkable. Indeed, the smallest scale at which vast, reliable storage is known to be possible is at the next level up-the molecular level. We can thank molecular biologists for this insight.

{8} See Balcázar [1990: chapter 11] for the underlying theory.

{9} I've chosen factoring because factoring is easy to understand. There are problems that are not just thought to be hard, but are provably hard.

{10} See Turing [1950].

{11} This example appeared first in Dembski [1991: 104, note 6].

{12} Hume [1779:67].

{13} Jaki [1989:28].

{14} See Jastrow [1980].

{15} See Hoyle and Wickramasinghe [1981:1-33, 130-141], Hoyle [1982:1-65], and the appendix by Herman Eckelmann in Montgomery [1991].

{16} Küppers [1990:68]. Küppers is a pupil of Manfred Eigen.

{17} Küppers [1990:68].

{18} See chapter 3 of du Noüy [1947].

{19} See Yockey [1977] and Behe's article in this volume.

{20} Look for the upcoming book on intelligent design by William Dembski, Stephen Meyer, and Paul Nelson.


Balcázar, José L., Josep Díaz, and Joaquim Gabarró.
1990 Structural Complexity II. Berlin: Springer-Verlag.

Dawkins, Richard.
1987 The Blind Watchmaker. New York: W. W. Norton.

Dembski, William A.
1990 "Converting Matter into Mind: Alchemy and the Philosopher's Stone in Cognitive Science." Perspectives on Science and Christian Faith 42(4):202-226.
1991 "Randomness by Design." Noûs 25(1):75-106.

du Noüy, Lecomte.
1947 Human Destiny. New York: Longmans, Green and Company.

Hoyle, Fred.
1982 Cosmology and Astrophysics. Ithaca: Cornell University Press.

Hoyle, Fred and Chandra Wickramasinghe.
1981 Evolution from Space. New York: Simon & Schuster.

Hubble, Edwin P.
1936 The Realm of the Nebulae. New Haven: Yale University Press.

Hume, David.
1779 Dialogues Concerning Natural Religion. Reprint, Buffalo: Prometheus Books, 1989.

Jaki, Stanley L.
1989 God and the Cosmologists. Washington, D.C.: Regnery Gateway.

Jastrow, Robert.
1980 God and the Astronomers. New York: Warner Books.

Johnson, Phillip E.
1991 Darwin on Trial. Downers Grove, Ill.: InterVarsity Press.

Küppers, Bernd-Olaf.
1990 Information and the Origin of Life. Cambridge, Mass.: MIT Press.

Montgomery, John W., ed.
1991 Evidence for Faith: Deciding the God Question. Dallas, Texas: Probe.

Peter, Laurence J.
1977 Peter's Quotations: Ideas for our Time. Toronto: Bantam.

Turing, Alan M.
1950 "Computing Machinery and Intelligence." Mind 59(236):433-460.

Wegener, Ingo.
1987 The Complexity of Boolean Functions. Stuttgart: Wiley- Teubner.

Yockey, Hubert P.
1977 "A Calculation of the Probability of Spontaneous Biogenesis by Information Theory." Journal of Theoretical Biology 67:377-398.

[ Previous | Table of Contents | Next ]