1996: Explanatory Power of Design / Stephen C. Meyer, Discovery Institute

One of my first exposures to the concept of Intelligent Design was in a book published in 1996 called “Mere Creation: Science, Faith & Intelligent Design”. Many of the chapters intrigued me, but I ended up being most impressed with the chapter contributed by Stephen C. Meyer called “The Explanatory Power of Design.”
This post consists of excerpts from this chapter, without the notes, and without the images, because it is difficult in Blogger to position images with any precision.
See the book on Amazon.com at this link, and see the Discovery Institute at this link.

Thanks much,

Steve St.Clair
================
The Explanatory Power of Design
DNA and the Origin of Information
STEPHEN C. MEYER

SINCE THE LATE NINETEENTH CENTURY MOST BIOLOGISTS HAVE rejected the idea that biological organisms display evidence of intelligent design. While many acknowledge the appearance of design in biological systems, they insist that Darwinism, or neo-Darwinism, can fully account for how this appearance arose naturalistically—i.e., without invoking a directing intelligence or agency. Following Darwin, modern neo-Darwinists generally accept that natural selection acting on random variation (or mutations) suffices to explain the appearance of design in living organisms. As evolutionary biologist Francisco Ayala has explained,

The functional design of organisms and their features would … seem to argue for the existence of a designer. It was Darwin’s greatest accomplishment [however] to show that the directive organization of living beings can be explained as the result of a natural process, natural selection, without any need to resort to a Creator or other external agent. (Ayala 1994, 4-5)

Yet whatever the explanatory efficacy of the Darwinian program, the appearance of design in at least one important domain of biology cannot be so easily dismissed. Since the late 1950s advances in molecular biology and biochemistry have revolutionized our understanding of the miniature world within the cell. Modern molecular biology has revealed that living cells—the fundamental units of life—possess the ability to store, edit and transmit information and to use information to regulate their most fundamental metabolic processes. Far from characterizing cells as simple “homogeneous globules of plasm,” as did Ernst Haeckel (Haeckel 1905, 111) and other nineteenth-century biologists, modern biologists now describe cells as, among other things, “distributive real-time computers” and complex information-processing systems.

Darwin, of course, neither knew about these intricacies nor sought to explain their origin. Instead, his theory of biological evolution sought to explain how life could have grown gradually more complex starting from “one or a few simple forms.” Strictly speaking, therefore, those who insist that the Darwinian mechanism can explain the appearance of design in biology naturalistically overstate their case. The complexities within the microcosm of the cell beg for some kind of explanation, yet they lie beyond the purview of strictly biological evolutionary theory, which assumes, rather than explains, the existence of the first life and the information it would have required.

This essay will argue that the complexity and specificity of even the simplest living cells suggest more than just apparent design. Indeed, it will argue that actual or “intelligent” design now constitutes the best explanation for the origin of the information required to make a living cell in the first place. In so doing, this essay will also critique naturalistic theories of chemical, rather than biological, evolution. Whereas biological evolutionary theories such as Darwinism or neo-Darwinism seek to explain the origin of new biological forms from preexisting forms, theories of chemical evolution seek to explain the ultimate origin of life starting from inanimate matter.

The discussion that follows will evaluate and compare the explanatory power of competing classes of explanation with respect to the origin of biological information. It will show the causal inadequacy of explanations based upon both chance and necessity (and the two working in combination). As it happens, the recent history of origin-of-life research can be understood nicely by reference to Jacques Monod’s famous categories “chance” and “necessity,” which were addressed by William Dembski in his discussion of the explanatory filter in the previous chapter. From the 1920s to the mid-1960s, chemical evolutionary theories emphasized the creative role of random variations (i.e., chance)—often working in tandem with so-called prebiotic natural selection. Since the late 1960s, theorists have instead generally invoked deterministic “self-organizational properties,” i.e., necessity or law, the other naturalistic node on Dembski’s explanatory filter (see previous chapter). This essay will trace the recent history of origin-of-life research to show the inadequacy of scenarios invoking either chance or necessity (or the two in combination) as causal mechanisms for the origin of biological information. It will then suggest that the third type of explanation—intelligent design—provides a better explanation for the origin of the information content present in large biomacromolecules such as DNA, RNA and proteins.

The Problem of Life’s Origin
After Darwin published the Origin of Species in 1859, many scientists began to think about a problem that Darwin had not addressed, namely, how life had arisen in the first place. While Darwin’s theory purported to explain how life could have grown gradually more complex starting from “one or a few simple forms,” it neither explained nor attempted to explain where life had originated.

Yet scientists in the 1870s and 1880s assumed that devising an explanation for the origin of life would be fairly easy. For one thing, they assumed that life was essentially a rather simple substance called protoplasm that could be easily constructed by combining and recombining simple chemicals such as carbon dioxide, oxygen and nitrogen. Thus Haeckel and others would refer to the cell as a simple “homogeneous globule of plasm” (Haeckel 1905, 111; Huxley 1869, 129-45). To Haeckel a living cell seemed no more complex than a blob of gelatin. His theory of how life first came into existence reflected this simplistic view. His method likened cell “autogony,” as he called it, to the process of inorganic crystallization (Haeckel 1866, 179-80; 1892, 411-13; Kamminga 1980, 60, 61).

Haeckel’s English counterpart, T. H. Huxley, proposed a simple two-step method of chemical recombination to explain the origin of the first cell (Huxley 1869, 138-39). Just as salt could be produced spontaneously by combining sodium and chlorine, so, thought Haeckel and Huxley, could a living cell be produced by adding several chemical constituents together and then allowing spontaneous chemical reactions to produce the simple protoplasmic substance that they assumed to be the essence of life.

Orthodox Chemical Evolutionary Theory: The Oparin Scenario
During the 1920s and 1930s a more sophisticated version of this so-called chemical evolutionary theory was proposed by a Russian biochemist named Alexander I. Oparin. Oparin had a much more accurate understanding of the complexity of cellular metabolism, but neither he nor anyone else in the 1930s fully appreciated the complexity of the molecules such as protein and DNA that make life possible. Oparin, like his nineteenth-century predecessors, suggested that life could have first evolved as the result of a series of chemical reactions. Unlike his predecessors, however, he envisioned that this process of chemical evolution would involve many more chemical transformations and reactions and many hundreds of millions or even billions of years.

Oparin’s theory envisioned a series of chemical reactions (see figure 5.1) that he thought would enable a complex cell to assemble itself gradually and naturalistically from simple chemical precursors. Oparin believed that simple gases such as ammonia (NH3), methane (CH4), water (H2O), carbon dioxide (CO2) and hydrogen (H2) would have rained down to the early oceans and combined with metallic compounds extruded from the core of the earth (Oparin 1938, 64-103). With the aid of ultraviolet radiation from the sun, the ensuing reactions would have produced energy-rich hydrocarbon compounds (Oparin 1938, 98, 107, 108). These in turn would have combined and recombined with various other compounds to make amino acids, sugars, phosphates and other building blocks of the complex molecules (such as proteins) necessary to living cells (Oparin 1938, 133-35). These constituents would eventually arrange themselves into simple cell-like enclosures that Oparin called coacervates (Oparin 1938, 148-59). Oparin then proposed a kind of Darwinian competition for survival among his coacervates. Those that developed increasingly complex molecules and metabolic processes would have survived and grown more complicated. Those that did not would have dissolved (Oparin 1938, 195-96).

Thus cells would have become gradually more and more complex as they competed for survival over billions of years. Like Darwin, Oparin employed time, chance and natural selection to account for the origin of complexity from initial simplicity. Moreover, nowhere in his scenario did mind or intelligent design or a Creator play any explanatory role. For Oparin, a committed Marxist (Graham 1973, 262-63; Araujo 1981, 19), such notions were explicitly precluded from scientific consideration. Matter interacting chemically with other matter, if given enough time and the right conditions, could produce life. Complex cells could be built from simple chemical precursors without any guiding personal or intelligent agency.

The Miller-Urey Experiment
The first experimental support for Oparin’s hypothesis came in December 1952. While doing graduate work under Harold Urey at the University of Chicago, Stanley Miller conducted the first experimental test of the Oparin chemical evolutionary model. Miller circulated a gaseous mixture of methane (CH4), ammonia (NH3), water vapor (H2O) and hydrogen (H2) through a glass vessel containing an electrical discharge chamber (Miller 1953, 528-29). Miller sent a high-voltage charge of electricity into the chamber via tungsten filaments in an attempt to simulate the effects of ultraviolet light on prebiotic atmospheric gases. After two days Miller found a small (2 percent) yield of amino acids in the U-shaped water trap he used to collect the products at the bottom of the vessel. While Miller’s initial experiment yielded only three of the twenty amino acids that occur naturally in proteins, subsequent experiments performed under similar conditions have produced all but one of the others. Other simulation experiments have produced fatty acids and the nucleotide bases found in DNA and RNA but not the sugar molecules deoxyribose and ribose necessary to build DNA and RNA molecules (Thaxton and Bradley 1994, 182; Shapiro 1988, 71-95; Ferris 1987, 30; Thaxton, Bradley, and Olsen 1984, 24-38; Harada and Fox 1964, 335; Lemmon 1970, 95-96). Miller’s success in producing biologically relevant building blocks under ostensibly prebiotic conditions was heralded as a great breakthrough. His experiment seemed to provide experimental support for Oparin’s chemical evolutionary theory by showing that an important step in Oparin’s scenario—the production of biological building blocks from simpler atmospheric gases—was possible on the early earth. Miller’s work inspired many similar simulation experiments and an unprecedented optimism about the possibility of developing an adequate naturalistic explanation for the origin of life.

Thanks largely to Miller’s experimental work, chemical evolution is now routinely presented in both high school and college biology textbooks (e.g., Alberts et al. 1983, 4; Lehninger 1975, 23) as the accepted scientific explanation for the origin of life. Yet chemical evolutionary theory is now known to be riddled with difficulties, and Miller’s work is understood by the origin-of-life research community itself to have little if any relevance to explaining how amino acids, let alone proteins or living cells, could have arisen on the early earth.

Problems with the Oparin/Miller Hypothesis
Despite its status as textbook orthodoxy, the Oparin chemical evolutionary theory has in recent years encountered severe, even fatal, criticisms on many fronts. First, geochemists have failed to find evidence of the nitrogen-rich prebiotic soup required by Oparin’s model. Second, the remains of single-celled organisms in the very oldest rocks testify that, however life emerged, it did so relatively quickly; that is, fossil evidence suggests that chemical evolution had little time to work before life emerged on the early earth. Third, new geological and geochemical evidence suggests that prebiotic atmospheric conditions were hostile, not friendly, to the production of amino acids and other essential building blocks of life. Fourth, the revolution in the field of molecular biology has revealed so great a complexity and specificity of design in even the simplest cells and cellular components as to defy materialistic explanation. Even scientists known for a staunch commitment to materialistic philosophy now concede that materialistic science in no way suffices to explain the origin of life (Dose 1988, 348-56; Shapiro 1986). As origin-of-life biochemist Klaus Dose has said, “More than 30 years of experimentation on the origin of life in the fields of chemical and molecular evolution have led to a better perception of the immensity of the problem of the origin of life on Earth rather than to its solution. At present all discussions on principal theories and experiments in the field either end in stalemate or in a confession of ignorance” (Dose 1988, 348-56; cf. Crick 1981, 88).

To understand the crisis in chemical evolutionary theory, it will be necessary to explain in more detail the latter two difficulties, namely, the problem of hostile prebiotic conditions and the problem posed by the complexity of the cell and its components.

When Miller conducted his experiment simulating the production of amino acids on the early earth, he presupposed that the earth’s atmosphere was composed of a mixture of what chemists call reducing gases such as methane (CH4), ammonia (NH3) and hydrogen (H2). He also assumed that the earth’s atmosphere contained virtually no free oxygen. Miller derived his assumptions about these conditions from Oparin’s 1936 book (Miller 1953, 528-29). In the years following Miller’s experiment, however, new geochemical evidence made it clear that the assumptions that Oparin and Miller had made about the early atmosphere could not be justified. Instead evidence strongly suggested that neutral gases such as carbon dioxide, nitrogen and water vapor (Walker 1977, 210, 246; 1978, 22; Kerr 1980, 42-43; Thaxton, Bradley, and Olsen 1984, 73-94)—not methane, ammonia and hydrogen—predominated in the early atmosphere. Moreover, a number of geochemical studies showed that significant amounts of free oxygen were also present even before the advent of plant life, probably as the result of volcanic outgassing and the photo-dissociation of water vapor (Berkner and Marshall 1965, 225; Brinkman 1969, 5355; Dimroth and Kimberley 1976, 1161; Carver 1981, 136; Holland, Lazar, and McCaffrey 1986, 27-33; Kasting, Liu, and Donahue 1979, 3097-3102; Kerr 1980, 42-43; Thaxton, Bradley, and Olsen 1984, 73-94).

This new information about the probable composition of the early atmosphere has forced a serious reevaluation of the significance and relevance of Miller-type simulation experiments. As had been well known even before Miller’s experiment, amino acids will form readily in an appropriate mixture of reducing gases. In a chemically neutral atmosphere, however, reactions among atmospheric gases will not take place readily, and those reactions that do take place will produce extremely low yields of biological building blocks. Further, even a small amount of atmospheric oxygen will quench the production of biologically significant building blocks and cause any biomolecules otherwise present to degrade rapidly.

The Molecular Biological Revolution and the Origin of Information
Yet a more fundamental problem remains for all chemical evolutionary scenarios. Even if it could be demonstrated that the building blocks of essential molecules could arise in realistic prebiotic conditions, the problem of assembling those building blocks into functioning proteins or DNA chains would remain. This problem of explaining the specific sequencing and thus the information within biopolymers lies at the heart of the current crisis in materialistic evolutionary thinking.

In the early 1950s, the molecular biologist Fred Sanger determined the structure of the protein molecule insulin. Sanger’s work made clear for the first time that each protein found in the cell comprises a long and definitely arranged sequence of amino acids. The amino acids in protein molecules are linked together to form a chain, rather like individual railroad cars composing a long train. Moreover, the function of all such proteins, whether as enzymes, signal transducers or structural components in the cell, depends upon the specific sequencing of the individual amino acids (Alberts et al. 1983, 91-141), just as the meaning of an English text depends upon the sequential arrangement of the letters. The various chemical interactions between amino acids in any given chain will determine the three-dimensional shape or topography that the amino acid chain adopts. This shape in turn determines what function, if any, the amino acid chain can perform within the cell.

For a functioning protein, its three-dimensional shape gives it a hand-in-glove fit with other molecules in the cell, enabling it to catalyze specific chemical reactions or to build specific structures within the cell. The proteins histone 3 and 4, for example, fold into very well-defined three-dimensional shapes with a precise distribution of positive charges around their exteriors. This shape and charge distribution enable them to form part of the spool-like nucleosomes that allow DNA to coil efficiently around itself and to store information (Lodish et al. 1995, 347-48). The information storage density of DNA, thanks in part to nucleosome spooling, is several trillion times that of our most advanced computer chips (Gitt 1989, 4).

To get a feel for the specificity of the three-dimensional charge distribution on these histone proteins, imagine a large wooden spool with grooves on the surface. Next picture a helical cord made of two strands. Then visualize wrapping the cord around the spool so that it lies exactly in the grooves. Finally, imagine the grooves to be hollowed so that they exactly fit the shape of the coiled cord, with thicker parts nestling into deeper grooves, thinner parts into more shallow ones. In other words, the irregularities in the shape of the cord exactly match irregularities in the hollow grooves. In the case of histone and DNA there are not actually grooves, but there is an uncanny distribution of positively charged regions on the surface of the histone proteins that exactly matches the negatively charged regions of the double-stranded DNA that coils around it (Lodish et al. 1995, 347-48). Proteins that function as enzymes or that assist in the processing of information stored on DNA strands often have an even greater specificity of fit with the molecules to which they must bind. Almost all proteins function as a result of an extreme hand-in-glove three-dimensional specificity that derives from the precise sequencing of the amino acid building blocks.

The discovery of the complexity and specificity of protein molecules has raised serious difficulties for chemical evolutionary theory, even if an abundant supply of amino acids is granted for the sake of argument. Amino acids alone do not make proteins, any more than letters alone make words, sentences or poetry. In both cases the sequencing of the constituent parts determines the function or lack of function of the whole. In the case of human languages the sequencing of letters and words is obviously performed by intelligent human agents. In the cell the sequencing of amino acids is directed by the information—the set of biochemical instructions—encoded on the DNA molecule.

Information Transfer: From DNA to Protein
During the 1950s and 1960s, at roughly the same time molecular biologists began to determine the structure and function of many proteins, scientists were able to explicate the structure and function of DNA, the molecule of heredity. After James Watson and Francis Crick elucidated the structure of DNA (Watson and Crick 1953, 737), molecular biologists soon discovered how DNA directs the process of protein synthesis within the cell. They discovered that the specificity of amino acids in proteins derives from a prior specificity within the DNA molecule—from information on the DNA molecule stored as millions of specifically arranged chemicals called nucleotides or bases along the spine of DNA’s helical strands (see figure 5.2). Chemists represent the four nucleotides with the letters A, T, G and C (for adenine, thymine, guanine and cytosine).

As in the case of protein, the sequence specificity of the DNA molecule strongly resembles the sequence specificity of human codes or languages. Just as the letters in the alphabet of a written language may convey a particular message depending on their sequence, so too do the sequences of nucleotides or bases in the DNA molecule convey precise biochemical messages that direct protein synthesis within the cell. Whereas the function of the protein molecule derives from the specific arrangement of twenty different amino acids (a twenty-letter alphabet), the function of DNA depends upon the arrangement of just four bases. Thus it takes a group of three nucleotides (or triplets, as they are called) on the DNA molecule to specify the construction of one amino acid. This process proceeds as long chains of nucleotide triplets (the genetic message) are first copied during a process known as DNA transcription and then transported (by the molecular messenger m-RNA) to a complex organelle called a ribosome (Borek 1969, 184). At the ribosome site the genetic message is translated with the aid of an ingenious adaptor molecule called transfer-RNA to produce a growing amino acid chain (Alberts et al. 1983, 108-9; see figure 5.3). Thus the sequence specificity in DNA begets sequence specificity in proteins. Or put differently, the sequence specificity of proteins depends upon a prior specificity—upon information—encoded in DNA.
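
To make the transcription-and-translation sequence just described concrete, here is a minimal Python sketch. It is a toy model under simplifying assumptions: only six of the sixty-four codons are included, and the cellular machinery (polymerase, ribosome, transfer-RNA) is collapsed into two functions.

```python
# Toy sketch of the DNA -> mRNA -> protein information flow described above.
# Only a handful of codon assignments are included for illustration; the real
# genetic code maps all 64 triplets to 20 amino acids plus stop signals.

CODON_TABLE = {  # mRNA codon -> amino acid (standard assignments)
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def transcribe(dna_template: str) -> str:
    """Copy the DNA template strand into mRNA by complementary base pairing."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in dna_template)

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases (one triplet) at a time, as a ribosome does."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        chain.append(residue)
    return chain

template = "TACAAACCGTTTACCATT"          # hypothetical DNA template strand
print(translate(transcribe(template)))   # ['Met', 'Phe', 'Gly', 'Lys', 'Trp']
```

The point the sketch illustrates is the one the text makes: the codon-to-amino-acid mapping is a lookup, not a chemical derivation, and nothing in the lookup table favors one template sequence over another.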

Naturalistic Approaches to the Problem of the Origin of Information
The explication of this system by molecular biologists in the 1950s and 1960s has raised the question of the ultimate origin of the specificity—the information—in both DNA and the proteins it generates. Many scientists now refer to the information problem as the Holy Grail of origin-of-life biology (Thaxton and Bradley 1994, 190). As Bernd-Olaf Küppers recently stated, “the problem of the origin of life is clearly basically equivalent to the problem of the origin of biological information” (Küppers 1990, 170-72). As mentioned previously, the information contained or expressed in natural languages and computer codes is the product of intelligent minds. Minds routinely create informative arrangements of matter. Yet since the mid-nineteenth century scientists have sought to explain all phenomena by reference to exclusively material causes (Gillespie 1979; Meyer 1994a, 29-40; Meyer 1993, A14; Johnson 1991; Ruse 1982, 72-78). Since the 1950s three broad types of naturalistic explanation have been proposed by scientists to explain the origin of information.

Biological Information: Beyond the Reach of Chance
After the revolutionary developments within molecular biology in the 1950s and early 1960s made clear that Oparin had underestimated the complexity of life, he revised his initial theory. He sought to account for the sequence specificity of the large protein, DNA and RNA molecules (known collectively as biomacromolecules or biopolymers). In each case the broad outlines of his theory remained the same, but Oparin invoked the notion of natural selection acting on random variations within the sequences of the biopolymers to account for the emergence of the specificity within these molecules (Kamminga 1980, 326; Oparin 1968, 146-47). Others invoked the idea of a chance formation for these large information-bearing molecules by speaking of them as “frozen accidents” (Crick 1968, 367-79; Kamminga 1980, 303-4).

While many outside origin-of-life biology may still invoke chance as a causal explanation for the origin of biological information, few serious researchers still do (de Duve 1995, 112). Since molecular biologists began to appreciate the sequence specificity of proteins and nucleic acids in the 1950s and 1960s, many calculations have been made to determine the probability of formulating functional proteins and nucleic acids at random. Various methods of calculating probabilities have been offered (Morowitz 1968, 5-12; Cairns-Smith 1971, 92-96; Hoyle and Wickramasinghe 1981, 24-27; Shapiro 1986, 117-31; Yockey 1981, 13-31; Yockey 1992, 246-58; Bowie and Sauer 1989, 2152-56; Bowie et al. 1990, 1306-10; Reidhaar-Olson and Sauer 1990, 306-16). For the sake of argument these calculations have generally assumed extremely favorable prebiotic conditions (whether realistic or not) and theoretically maximal reaction rates among the constituent monomers (i.e., the constituent parts of the proteins, DNA and RNA). Such calculations have invariably shown that the probability of obtaining functionally sequenced biomacromolecules at random is, in Prigogine’s words, “vanishingly small … even on the scale of … billions of years” (Prigogine, Nicolis, and Babloyantz 1972, 23). As Cairns-Smith (1971, 95) wrote:

Blind chance … is very limited. Low-levels of cooperation he [i.e., blind Chance] can produce exceedingly easily (the equivalent of letters and small words), but he becomes very quickly incompetent as the amount of organization increases. Very soon indeed long waiting periods and massive material resources become irrelevant.

Consider the probabilistic hurdles that must be overcome to construct even one short protein molecule of about 100 amino acids in length. (A typical protein consists of about 300 amino acids, and some are very much longer; Alberts et al. 1983, 118). First, all amino acids must form a chemical bond known as a peptide bond so as to join with other amino acids in the protein chain. Yet in nature many other types of chemical bonds are possible between amino acids; peptide and nonpeptide bonds occur with roughly equal probability. Thus at any given site along a growing amino acid chain the probability of having a peptide bond is roughly 1/2. The probability of attaining four peptide bonds is (1/2 x 1/2 x 1/2 x 1/2) = 1/16, or (1/2)^4. The probability of building a chain of one hundred amino acids in which all linkages are peptide linkages is (1/2)^100, or roughly 1 chance in 10^30.

Second, in nature every amino acid has a distinct mirror image of itself, one left-handed version or L-form, and one right-handed version or D-form. These mirror-image forms are called optical isomers. Functioning proteins tolerate only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain 100 amino acids long is again (1/2)^100, or roughly 1 chance in 10^30. The probability of building a 100-amino-acid chain at random in which all bonds are peptide bonds and all amino acids are L-form would be (1/4)^100, or roughly 1 chance in 10^60 (zero for all practical purposes given the time available on the early earth).

Functioning proteins have a third independent requirement, the most important of all: their amino acids must link up in a specific sequential arrangement, just like the letters in a meaningful sentence. In some cases changing even one amino acid at a given site can result in a loss of protein function. Moreover, because there are 20 biologically occurring amino acids, the probability of getting a specific amino acid at a given site is small (i.e., 1/20; the probability is even lower because there are many nonproteinous amino acids in nature). On the assumption that all sites in a protein chain require one particular amino acid, the probability of attaining a particular protein 100 amino acids long would be (1/20)^100, or roughly 1 chance in 10^130.

We know now, however, that some sites along the chain do tolerate several of the twenty proteinous amino acids, while others do not. The biochemist Robert Sauer of MIT has used a technique known as “cassette mutagenesis” to determine just how much variance among amino acids can be tolerated at any given site in several proteins. His results have shown that, even taking the possibility of variance into account, the probability of achieving a functional sequence of amino acids in several functioning proteins at random is still “vanishingly small,” roughly 1 chance in 10^65—an astronomically large number (there are 10^65 atoms in our galaxy; Reidhaar-Olson and Sauer 1990, 306-16). In light of these results, biochemist Michael Behe has compared the odds of attaining proper sequencing in a 100-amino-acid protein to the odds of a blindfolded man finding a single marked grain of sand hidden in the Sahara Desert not once but three times (Behe 1994, 68-69). Moreover, if one also factors in the probability of attaining proper bonding and optical isomers, the probability of constructing a rather short functional protein at random becomes so small as to be effectively zero (roughly 1 chance in 10^125) even given our multibillion-year-old universe (Borel 1962, 28; Dembski 1998). All these calculations thus reinforce the opinion that has prevailed since the mid-1960s within origin-of-life biology: Chance is not an adequate explanation for the origin of biological specificity. What P. T. Mora said (1963, 215) still holds:

Statistical considerations, probability, complexity, etc., followed to their logical implications suggest that the origin and continuance of life is not controlled by such principles. An admission of this is the use of a period of practically infinite time to obtain the derived result. Using such logic, however, we can prove anything.
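
For readers who want to check the arithmetic behind the figures quoted above, here is a short Python sketch. The peptide-bond and chirality probabilities follow the (1/2)^100 estimates in the text; the 1-in-10^65 sequencing figure is taken from the cited Sauer results, not derived here.

```python
import math

N = 100  # length of the hypothetical protein chain, as in the text

# All linkages peptide bonds (peptide vs. nonpeptide taken as roughly 50/50).
p_peptide = 0.5 ** N       # ~1 chance in 10^30
# All residues left-handed (L- vs. D-form taken as roughly 50/50).
p_chirality = 0.5 ** N     # ~1 chance in 10^30
# Functional sequencing, quoted from Reidhaar-Olson and Sauer as cited above.
p_sequence = 1e-65

for label, p in [("peptide bonds only ", p_peptide),
                 ("all L-form         ", p_chirality),
                 ("functional sequence", p_sequence)]:
    print(f"{label}: 1 chance in 10^{-math.log10(p):.0f}")

combined = math.log10(p_peptide) + math.log10(p_chirality) + math.log10(p_sequence)
print(f"combined           : 1 chance in 10^{-combined:.0f}")   # ~10^125
```

Treating the three requirements as independent, the exponents simply add, which is where the 1-in-10^125 figure comes from.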

Prebiotic Natural Selection: A Contradiction in Terms
At nearly the same time that many researchers became disenchanted with chance explanations, theories of prebiotic natural selection also fell out of favor. Such theories allegedly overcome the difficulties attending pure chance theories by providing a mechanism by which complexity-increasing events in the cell would be preserved and selected. Yet these theories share many of the difficulties that afflict purely chance-based theories.

Oparin’s revised theory, for example, claimed that a kind of natural selection acted upon random polymers as they formed and changed within his coacervate protocells (Oparin 1968, 146-47). As more complex molecules accumulated, they presumably survived and reproduced more prolifically. Nevertheless Oparin’s discussion of differential reproduction seemed to presuppose a preexisting mechanism of self-replication. Self-replication in all extant cells depends upon functional and therefore to a high degree sequence-specific proteins and nucleic acids. Yet the origin of these molecules is precisely what Oparin needed to explain. Thus many rejected the postulation of prebiotic natural selection as question begging (Mora 1965, 311-12; Bertalanffy 1967, 82). Functioning nucleic acids and proteins (or molecules approaching their complexity) are necessary to self-replication, which in turn is necessary to natural selection. Yet Oparin invoked natural selection to explain the origin of proteins and nucleic acids. As the evolutionary biologist Theodosius Dobzhansky would proclaim, “prebiological natural selection is a contradiction in terms” (Dobzhansky 1965, 310). Or as H. H. Pattee (1970, 123) put it:

There is no evidence that hereditary evolution occurs except in cells which already have the complete complement of hierarchical constraints, the DNA, the replicating and translating enzymes, and all the control systems and structures necessary to reproduce themselves.

In any case, functional sequences of amino acids (i.e., proteins) cannot be counted on to arise via random events, even if some means of selecting them exists after they have been produced. Natural selection can only select what chance has first produced, and chance, at least in a prebiotic setting, seems an implausible agent for producing the information present in even a single functioning protein or DNA molecule. Oparin attempted to circumvent this problem by claiming that the first polymers need not have been terribly specific.

But lack of polymer specificity produces “error catastrophes” that efface the accuracy of self-replication and eventually render natural selection impossible. Further, the mathematician von Neumann (1966) showed that any system capable of self-replication would need to contain subsystems that were functionally equivalent to the information storage, replicating and processing systems found in extant cells. His calculations and similar ones by Wigner (1961, 231-35), Landsberg (1964, 928-30) and Morowitz (1966, 446-59; 1968, 10-11) showed that random fluctuations of molecules in all probability would not produce the minimal complexity needed for even a primitive replication system. The improbability of developing a replication system vastly exceeds the improbability of developing the protein or DNA components of such a system. Thus appeals to prebiotic natural selection increasingly appear indistinguishable from appeals to chance.

Nevertheless, Richard Dawkins (Dawkins 1986, 47-49) and Bernd-Olaf Küppers (Küppers 1987, 355-69) recently have attempted to resuscitate prebiotic natural selection as an explanation for the origin of biological information. Both accept the futility of naked appeals to chance and invoke what Küppers calls a “Darwinian optimization principle.” Both use a computer to demonstrate the efficacy of prebiotic natural selection. Each selects a target sequence to represent a desired functional polymer. After creating a crop of randomly constructed sequences and generating variations among them at random, they then program the computer to select those sequences that match the target sequence most closely. The computer then amplifies the production of those sequences and eliminates the others (thus simulating differential reproduction) and repeats the process. As Küppers puts it,

Every mutant sequence that agrees one bit better with the meaningful or reference sequence … will be allowed to reproduce more rapidly. (Küppers 1987, 366)

In Küppers’s case, after a mere thirty-five generations his computer succeeded in spelling his target sequence, “NATURAL SELECTION.”
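
For concreteness, the following Python sketch reconstructs the kind of target-directed simulation Küppers and Dawkins describe. It is an illustrative reconstruction, not their published code; the population size and mutation rate are arbitrary choices.

```python
import random

TARGET = "NATURAL SELECTION"              # chosen in advance by the programmer
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # capital letters plus the space
POP_SIZE, MUTATION_RATE = 100, 0.05

def fitness(candidate: str) -> int:
    """Count the positions at which the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """Randomly replace each character with probability MUTATION_RATE."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

# Generation zero: completely random strings.
population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
              for _ in range(POP_SIZE)]

generation = 0
while TARGET not in population:
    generation += 1
    best = max(population, key=fitness)   # the computer compares to the target
    # "Differential reproduction": only variants of the best survivor persist.
    population = [best] + [mutate(best) for _ in range(POP_SIZE - 1)]

print(f"Matched the target after {generation} generations")
```

The step to notice is the fitness function: the program rewards sequences for closeness to a target chosen in advance, a role that, as the following paragraphs argue, nothing in nature plays.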

Despite superficially impressive results, these “simulations” conceal an obvious flaw: molecules in situ do not have a target sequence in mind, nor will they confer any selective advantage on a cell and thus differentially reproduce until they combine in a functionally advantageous arrangement. Thus nothing in nature corresponds to the role that the computer plays in selecting functionally nonadvantageous sequences that happen to agree “one bit better” than others with a target sequence. The sequence “NORMAL ELECTION” may agree more with “NATURAL SELECTION” than does the sequence “MISTRESS DEFECTION,” but neither yields any advantage in communication over the other if, that is, we are trying to communicate something about natural selection. In that case both are equally ineffectual. Similarly, a nonfunctional polypeptide would confer no selective advantage on a hypothetical protocell, even if its sequence happens to “agree one bit better” with an unrealized target protein than some other nonfunctional polypeptide.

Indeed, both Küppers’s and Dawkins’s published results of their simulations show the early generations of variant phrases awash in nonfunctional gibberish. In Dawkins’s simulation, not a single functional English word appears until after the tenth iteration (unlike the more generous example above, which starts with actual albeit incorrect words). Yet to make distinctions on the basis of function among sequences that have no function whatsoever would seem quite impossible. Such determinations can only be made if considerations of proximity to possible future function are allowed, but this requires foresight that molecules do not have. A computer, programmed by a human being, can perform these functions. To imply that molecules can as well only illicitly personifies nature. Thus, if these computer simulations demonstrate anything, they subtly demonstrate the need for intelligent agents to elect some options and exclude others—that is, to create information.

Self-Organizational Scenarios
Because of the difficulties with appeals to prebiotic natural selection, many origin-of-life theorists after the mid-1960s attempted to address the problem of the origin of biological information in a new way. Rather than invoking prebiotic natural selection or “frozen accidents” (Crick 1968, 367-79; Kamminga 1980, 303-4), many theorists suggested that the laws of nature and chemical attraction may themselves be responsible for the information in DNA and proteins. Some have suggested that simple chemicals might possess “self-ordering properties” capable of organizing the constituent parts of proteins, DNA and RNA into the specific arrangements they now possess (Morowitz 1968). Steinman and Cole, for example, suggested that differential bonding affinities or forces of chemical attraction between certain amino acids might account for the origin of the sequence specificity of proteins (Steinman and Cole 1967, 735-41; Steinman 1967, 533-39; for recent criticism see Kok, Taylor, and Bradley 1988, 135-42). Just as electrostatic forces draw sodium ions (Na+) and chloride ions (Cl-) together into highly ordered patterns within a crystal of salt (NaCl), so too might amino acids with special affinities for each other arrange themselves to form proteins. This idea was developed in Biochemical Predestination by Kenyon and Steinman (1969). They argued that the origin of life might have been “biochemically predestined” by the properties of attraction that exist between constituent chemical parts, particularly between amino acids in proteins (Kenyon and Steinman 1969, 199-211, 263-66).

In 1977 another self-organizational theory was proposed by Prigogine and Nicolis based on a thermodynamic characterization of living organisms. In Self-Organization in Nonequilibrium Systems, they classified living organisms as open, nonequilibrium systems capable of “dissipating” large quantities of energy and matter into the environment (Prigogine and Nicolis 1977, 339-53, 429-47).

They observed that open systems driven far from equilibrium often display self-ordering tendencies. For example, gravitational energy will produce highly ordered vortices in a draining bathtub; thermal energy flowing through a heat sink will generate distinctive convection currents or “spiral wave activity.” Prigogine and Nicolis then argued that the organized structures observed in living systems might have similarly self-originated with the aid of an energy source. In essence they conceded the improbability of simple building blocks arranging themselves into highly ordered structures under normal equilibrium conditions. But they suggested that under nonequilibrium conditions, where an external source of energy is supplied, biochemical building blocks might arrange themselves into highly ordered patterns.

Order Versus Information
For many current origin-of-life scientists, self-organizational models (see, e.g., Kauffman 1993; de Duve 1995) now seem to offer the most promising approach to explaining the origin of biological information. Nevertheless critics have called into question both the plausibility and the relevance of self-organizational models. Perhaps the most prominent early advocate of self-organization, Dean Kenyon, has now explicitly repudiated such theories as both incompatible with empirical findings and theoretically incoherent (Kok, Taylor, and Bradley 1988, 135-42).

First, empirical studies have shown that some differential affinities do exist between various amino acids (i.e., particular amino acids do form linkages more readily with some amino acids than others; Steinman and Cole 1967, 735-41; Steinman 1967, 533-39). Nevertheless these differences do not correlate to actual sequencing in large classes of known proteins (Kok, Taylor, and Bradley 1988, 135-42). In short, differing chemical affinities do not explain the multiplicity of amino acid sequences that exist in naturally occurring proteins or the sequential ordering of any single protein.

In the case of DNA this point can be made more dramatically. As figure 5.4 shows, the structure of DNA depends upon several chemical bonds. There are bonds, for example, between the sugar and the phosphate molecules that form the two twisting backbones of the DNA molecule. There are bonds fixing individual nucleotide bases to the sugar-phosphate backbones on each side of the molecule. There are also hydrogen bonds stretching horizontally across the molecule between nucleotide bases making so-called complementary pairs. These bonds, which hold two complementary copies of the DNA message text together, make replication of the genetic instructions possible. Most importantly, however, notice that there are no chemical bonds between the nucleotide bases that run along the spine of the helix. Yet it is precisely along this axis of the molecule that the genetic instructions in DNA are encoded (Alberts et al. 1983, 105). In other words, the chemical constituents that are responsible for the message text in DNA do not interact chemically in any significant way.

Further, just as magnetic letters can be combined and recombined in any way to form various sequences on a metal surface, so too can each of the four bases A, T, G and C attach to any site on the DNA backbone with equal facility, making all sequences equally probable (or improbable). Indeed, there are no differential affinities between any of the four bases and the binding sites along the sugar-phosphate backbone. The same type of so-called “N-glycosidic” bond occurs between the base and the backbone regardless of which base attaches. All four bases are acceptable; none is preferred. As Küppers has noted,

the properties of nucleic acids indicate that all the combinatorially possible nucleotide patterns of a DNA are, from a chemical point of view, equivalent. (Küppers 1987, 364)

Thus, “self-organizing” bonding affinities cannot explain the sequential ordering of the nucleotide bases in DNA because (1) there are no bonds between bases along the message-bearing axis of the molecule, and (2) there are no differential affinities between the backbone and the various bases that could account for variations in sequencing. Because the same holds for RNA molecules, researchers who speculate that life began in an “RNA world” have also failed to solve the sequencing problem—that is, the problem of explaining how the information present in all functioning RNA molecules could have arisen in the first place.
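
This chemical equivalence can be put as a simple counting result; the following is a worked statement, assuming (as the text argues) no differential affinities at any site:

$$\underbrace{4 \times 4 \times \cdots \times 4}_{n \text{ sites}} = 4^{n} \ \text{chemically equivalent sequences}, \qquad 4^{100} = 2^{200} \approx 1.6 \times 10^{60}.$$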

For those who want to explain the origin of life as the result of self-organ­izing properties intrinsic to the material constituents of living systems, these rather elementary facts of molecular biology have devastating implications. The most logical place to look for self-organizing properties to explain the origin of genetic information is in the constituent parts of the molecules carrying that information. But biochemistry and molecular biology make clear that forces of attraction between the constituents in DNA, RNA and proteins do not explain the sequence specificity of these large information-bearing biomolecules.

Significantly, information theorists insist that there is a good reason for this. If chemical affinities between the constituents in the DNA message text determined the arrangement of the text, such affinities would dramatically diminish the capacity of DNA to carry information. To illustrate, imagine receiving the following incomplete message over the wire: “the q-ick brown fox jumped over the lazy dog.” Obviously someone who knew the conventions of English could determine which letter had been rubbed out in the transmission. Because q and u always go together by grammatical necessity, the presence of one indicates the probable presence of the other in the initial transmission of the message. The u in all English communications is an example of what information theorists call “redundancy.” Given the grammatical rule “u must always follow q,” the addition of the u adds no new information when q is already present. It is redundant or unnecessary to determining the sense of the message (though not to making it grammatically correct).

Now consider what would happen if the individual nucleotide letters (A, T, G, C) in a DNA molecule did interact by chemical necessity with each other. Every time adenine (A) occurred in a growing genetic sequence, it would attract thymine (T) to it. Every time cytosine (C) occurred, guanine (G) would follow. As a result the DNA message text would be peppered with repeating sequences of A’s followed by T’s and C’s followed by G’s. Rather than having a genetic molecule capable of unlimited novelty with all the unpredictable and aperiodic sequences that characterize informative texts, we would have a highly repetitive text awash in redundant sequences, much as happens in crystals. In a crystal the forces of mutual chemical attraction do completely explain the sequential ordering of the constituent parts, and consequently crystals cannot convey novel information. Sequencing in crystals is highly ordered or repetitive but not informative. Once one has seen Na followed by Cl in a crystal of salt, for example, one has seen the extent of the sequencing possible. In DNA, however, where any nucleotide can follow any other, innumerable novel sequences are possible, and a countless variety of amino acid sequences can be built.

The forces of chemical necessity, like grammatical necessity in the q-and-u example, produce redundancy or monotonous order but reduce the capacity to convey information and create novelty. As Polanyi has said:

Suppose that the actual structure of a DNA molecule were due to the fact that the bindings of its bases were much stronger than the bindings would be for any other distribution of bases, then such a DNA molecule would have no information content. Its code-like character would be effaced by an overwhelming redundancy. …

Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page. (Polanyi 1968, 1309, emphasis added)

So, if chemists had found that bonding affinities between the nucleotides in DNA produced nucleotide sequencing, they would have also found that they had been mistaken about DNA’s information-bearing properties. To put the point quantitatively, to the extent that forces of attraction between constituents in a sequence determine the arrangement of the sequence, to that extent will the information-carrying capacity of the system be diminished. As Dretske has explained:

As p(si) [the probability of a condition or state of affairs] approaches 1 the amount of information associated with the occurrence of si goes to 0. In the limiting case when the probability of a condition or state of affairs is unity [p(si) = 1], no information is associated with, or generated by, the occurrence of si. This is merely another way to say that no information is generated by the occurrence of events for which there are no possible alternatives. (Dretske 1981, 12)
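
Dretske's point is the standard Shannon measure of self-information; stated as a worked equation (assuming the usual base-2 definition):

$$I(s_i) = -\log_2 p(s_i), \qquad \lim_{p(s_i)\to 1} I(s_i) = -\log_2 1 = 0 \text{ bits}.$$

An event that chemistry makes inevitable carries no information, precisely because no alternative was excluded.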

Bonding affinities, to the extent they exist, militate against the maximization of information (Yockey 1981, 18). They cannot therefore be used to explain the origin of information. Affinities create mantras, not messages.

The tendency to conflate the qualitatively distinct notions of order and information has characterized self-organizational research efforts and calls into question the relevance of such work to the origin of life. As Yockey has argued, the accumulation of structural or chemical order does not explain the origin of biological complexity (i.e., genetic information). He concedes that energy flowing through a system may produce highly ordered patterns. Strong winds form swirling tornadoes and the eyes of hurricanes; Prigogine’s thermal baths do develop interesting “convection currents”; and chemical elements do coalesce to form crystals. Self-organizational theorists explain well what does not need explaining. What needs explaining is not the origin of order (in the sense of symmetry or repetition) but the origin of information—the highly improbable, aperiodic and yet specified sequences that make biological function possible.

To illustrate the distinction between order and information, compare the sequence ABABABABABABAB to the sequence THE BIG RED HOUSE IS ON FIRE! The first sequence is repetitive and ordered but not complex or informative. The second sequence is not ordered in the sense of being repetitious, but it is complex and also informative. The second sequence is complex because its characters do not follow a rigidly repeating or predictable pattern; that is, it is aperiodic. It is also informative because, unlike a merely complex sequence such as RFSXDCNCTQJ, the particular arrangement of characters is highly exact or specified so as to perform a (communication) function. Systems that are characterized by both specificity and complexity (what information theorists call “specified complexity”) have “information content.” Since such systems have the qualitative feature of complexity (aperiodicity), they are qualitatively distinguishable from systems characterized by simple periodic order. Thus attempts to explain the origin of order have no relevance to discussions of the origin of specified complexity or information content. Significantly, the nucleotide sequences in the coding regions of DNA have by all accounts a high information content—that is, they are both highly specified and complex, just like meaningful English sentences (Thaxton and Bradley 1994, 173-210; Thaxton, Bradley, and Olsen 1984, 127-66; Yockey 1992, 242-93).
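
One crude way to see the complexity half of this distinction is to compute the empirical per-character Shannon entropy of the three example sequences, as the Python sketch below does. Note the deliberate limitation: entropy registers aperiodicity, but it cannot distinguish the specified sentence from the unspecified gibberish, which is exactly why specification must be judged functionally rather than statistically.

```python
from collections import Counter
from math import log2

def entropy_per_char(s: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum((n / len(s)) * log2(n / len(s)) for n in counts.values())

for seq in ["ABABABABABABAB",                # ordered, repetitive
            "THE BIG RED HOUSE IS ON FIRE!", # complex and specified
            "RFSXDCNCTQJ"]:                  # complex but unspecified
    print(f"{seq!r}: {entropy_per_char(seq):.2f} bits/char")

# The ordered repeat scores exactly 1 bit/char; the sentence and the
# gibberish both score high. Entropy alone flags complexity, not meaning.
```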

Conflating order and information (or specified complexity) has led many to attribute properties to brute matter that it does not possess. While energy in a system can create patterns of symmetric order such as whirling vortices, there is no evidence that energy alone can encode functionally specified sequences, whether biochemical or otherwise. As Yockey (1977, 380) warns:

Attempts to relate the idea of order … with biological organization or specificity must be regarded as a play on words which cannot stand careful scrutiny. Informational macromolecules can code genetic messages and therefore can carry information because the sequence of bases or residues is affected very little, if at all, by [self-organizing] physicochemical factors.

The Return of the Intelligent Design Hypothesis
The preceding discussion suggests that the properties of the material constituents of DNA, like those of any information-bearing medium, are not responsible for the information conveyed by the molecule. In all informational systems, the information content or message is neither deducible from the properties of the material medium nor attributable to them. The properties of matter do not explain the origin of the information.

To amplify this point, consider first that many different materials can express the same message. The headline of this morning’s New York Times was written with ink on paper. Nevertheless, many other materials could have been used to convey the same message. The information in the headline could have been written with chalk on a board, with neon-filled tubes in a series of signs, or by a skywriter over New York harbor. Clearly the peculiar chemical properties of ink are not necessary to convey the message. Neither are the physical properties (i.e., the geometric shapes) of the letters necessary to transmit the information. The same message could have been expressed in Hebrew or Greek using entirely different alphabetic characters.

Conversely the same material medium and alphabetic characters can express many different messages; that is, the medium is not sufficient to determine the message. In November of an election year the Times will use ink and English characters to tell the reading public that either a Democrat, a Republican or a third-party candidate has won the presidential election. Yet the properties of the ink and the twenty-six letters available to the typesetter will not determine which headline will be published by the Times. Instead the ink and English characters will permit the transmission of whatever headline the election requires, as well as a vast ensemble of other possible arrangements of text, some meaningful and many more not. Neither the chemistry of the ink nor the shapes of the letters determines the meaning of the text. In short, the message transcends the properties of the medium.

The information in DNA also transcends the properties of its material medium. Because chemical bonds do not determine the arrangement of nucleotide bases, the nucleotides can assume a vast array of possible sequences and thereby express many different messages. (Conversely various materials can express the same messages, as happens in variant versions of the genetic code or when laboratory chemists use English instructions to direct the synthesis of naturally occurring proteins.) Thus, again, the properties of the constituents do not determine the function of the whole—the information it transmits. As Polanyi (1968, 1309) has said,

As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule.

If the properties of matter (i.e., the medium) do not suffice to explain the origin of information, what does? Blind chance is a possibility but not, as we have seen in the case of DNA and proteins, where the amount of information or the improbability of arrangement gets too immense. The random selection and sequencing of Scrabble pieces out of a grab bag might occasionally produce a few meaningful words such as cat or ran. Nevertheless undirected selection will inevitably fail as the number of letters required to make a text increases. Fairly soon chance becomes clearly inadequate, as origin-of-life biologists have almost universally acknowledged.

Some have suggested that the discovery of new scientific laws might explain the origin of biological information. But this suggestion betrays confusion on two counts. First, scientific laws do not generally explain or cause natural phenomena; they describe them. For example, Newton’s law of gravitation described but did not explain the attraction between planetary bodies. Second, scientific laws describe (almost by definition) highly regular phenomena—that is, order. Thus to say that any scientific law can describe or generate an informational sequence is essentially a contradiction in terms. The patterns that laws describe are necessarily highly ordered, not complex. Thus, like crystals, all law-like patterns have an extremely limited capacity to convey information. One might perhaps find a complex set of material conditions capable of generating high information content on a regular basis, but everything we know suggests that the complexity and information content of such conditions would have to equal or exceed that of any system produced, thus again begging the question about the ultimate origin of information.

For example, the chemist J. C. Walton has argued (echoing earlier articles by Mora) that even the self-organization produced in Prigogine-style convection currents does not exceed the organization or information represented by the experimental apparatus used to create the currents (Walton 1977, 16-35; Mora 1965, 41). Similarly, Maynard Smith (1979, 445-46) and Dyson (1985, 9-11, 35-39, 65-66, 78) have shown that Manfred Eigen’s (Eigen and Schuster 1977, 541-65; 1978a, 7-41; 1978b, 341-69) so-called hypercycle model for generating information naturalistically is subject to the same law of information loss. They show, first, that Eigen’s hypercycles presuppose a large initial contribution of information in the form of a long RNA molecule and some forty specific proteins. More significantly, they show that because hypercycles lack an error-free mechanism of self-replication, they become susceptible to various error catastrophes that ultimately diminish, not increase, the information content of the system over time.
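
The decay that unchecked copying error produces can be seen in a toy simulation (my illustration only; it is not Eigen’s hypercycle model, and the message and mutation rate are arbitrary placeholders):

```python
import random

# Toy illustration: repeated copying with a per-symbol error rate and
# no proofreading steadily erodes a specified sequence.
random.seed(1)
TARGET = "SPECIFIED SEQUENCES DEGRADE WITHOUT PROOFREADING"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05  # chance each symbol is miscopied per generation

def replicate(s: str) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in s)

seq = TARGET
for gen in range(101):
    if gen % 25 == 0:
        correct = sum(a == b for a, b in zip(seq, TARGET))
        print(f"generation {gen}: {correct}/{len(TARGET)} symbols still correct")
    seq = replicate(seq)
```

After a hundred generations almost nothing of the original message survives, which is the qualitative point of the error-catastrophe argument.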

Instead our experience with information-intensive systems, especially codes and languages, indicates that such systems always come from an intelligent source—that is, from mental or personal agents. This generalization holds not only for the information present in languages and codes but also for the non-grammatical information (also describable as specified complexity) inherent in machines or expressed in works of art. Like the text of a newspaper, the parts of a supercomputer and the faces on Mount Rushmore require many instructions to specify their shape or arrangement and consequently have a high information content. Each of these systems is also, not coincidentally, the result of intelligent design, not chance or material forces.

Our generalization about the cause of information has ironically received confirmation from origin-of-life research itself. During the last forty years, every naturalistic model proposed has failed to explain the origin of information. Thus mind or intelligence or what philosophers call “agent causation” now stands as the only cause known to be capable of creating an information-rich system, including the coding regions of DNA, functional proteins and the cell as a whole.

Because mind or intelligent design is a necessary cause of an informative system, one can detect (or, logically, retrodict) the past action of an intelligent cause from the presence of an information-intensive effect, even if the cause itself cannot be directly observed (Meyer 1990, 79-99). Since information requires an intelligent source, the flowers spelling “Welcome to Victoria” in the gardens of Victoria harbor lead visitors to infer the activity of intelligent agents even if they did not see the flowers planted and arranged. Similarly the specifically arranged nucleotide sequences—the encoded information—in DNA imply the past action of an intelligent mind, even if such mental agency cannot be directly observed.

Moreover, the logical calculus underlying such inferences follows a valid and well-established method used in all historical and forensic sciences. In historical sciences knowledge of the present causal powers of various entities and processes enables scientists to make inferences about possible causes in the past. When a thorough study of various possible causes turns up just a single adequate cause for a given effect, historical or forensic scientists can make fairly definitive inferences about the past (Meyer 1990, 79-99; Sober 1988, 4-5; Scriven 1966, 249-50). Several years ago, for example, one of the forensic pathologists from the original Warren Commission that investigated the assassination of President John F. Kennedy spoke out to quash rumors about a second gunman firing from in front of the motorcade. Apparently the bullet hole in the back of President Kennedy’s skull evidenced a distinctive beveling pattern that clearly indicated its direction of entry. In this case it revealed definitively that the bullet had entered from the rear. The pathologist called the beveling pattern a “distinctive diagnostic” to indicate a necessary causal relationship between the direction of entry and the presence of the beveling.

Inferences based on knowledge of necessary causes (distinctive diagnostics) are quite common in historical and forensic sciences and often lead to the detection of intelligent as well as natural causes. Since Criminal X’s fingers are the only known cause of Criminal X’s fingerprints, X’s prints on the murder weapon incriminate him with a high degree of certainty. In the same way, since intelligent design is the only known cause of information-rich systems, the presence of information, including the information-rich nucleotide sequences in DNA, implies an intelligent source.

Scientists in many fields recognize the connection between intelligence and information and make inferences accordingly. Archaeologists assume a mind produced the inscriptions on the Rosetta stone. Evolutionary anthropologists try to demonstrate the intelligence of early hominids by arguing that certain chipped flints are too improbably specified to have been produced by natural causes. NASA’s Search for Extraterrestrial Intelligence (SETI) presupposed that information embedded in electromagnetic signals from space would indicate an intelligent source (McDonough 1987). As yet, however, radio astronomers have not found information-bearing signals coming from space. But closer to home, molecular biologists have identified encoded information in the cell. Consequently the presence of information in DNA justifies making what probability theorist William A. Dembski (1998) calls the design inference (see also Behe 1996; Kenyon and Mills 1996, 9-16; Ayoub 1996, 19-22; Moreland 1994; Bradley 1988, 72-83; Augros and Stanciu 1987; Denton 1986, 326-43; Thaxton, Bradley and Olsen 1984; Ambrose 1982; Walton 1977, 16-35).

An Argument from Ignorance?
Against all that has been said, many have maintained that this argument from information content to design constitutes nothing more than an argument from ignorance. Since we don’t yet know how biological information could have arisen, we invoke the mysterious notion of intelligent design. Thus, say objectors, intelligent design functions not as a legitimate inference or explanation but as a kind of placeholder for ignorance.

And yet, as Dembski has demonstrated (Dembski 1998, 9-35, 62-66), we often infer the causal activity of intelligent agents as the best explanation for events and phenomena. Moreover, we do so rationally, according to objectifiable, if often tacit, information- and complexity-theoretic criteria. His examples of design inferences—from archeology and cryptography to fraud detection and criminal forensics—show that we make design inferences all the time, often for very good reason (Dembski 1998, 9-35). Intelligent agents have unique causal powers that nature does not. When we observe effects that we know only agents can produce, we rightly infer the antecedent presence of an intelligence even if we did not observe the action of the particular agent responsible. In other words, Dembski has shown that designed events leave a complexity- and information-theoretic signature that allows us to detect intelligent design reliably. Specifically, when systems or artifacts have a high information content or (in his terminology) are both highly improbable and specified, intelligent design necessarily played a causal role in the origin of the system in question.
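
Dembski’s two-part test can be rendered schematically as follows; the probability cutoff and the helper names are illustrative placeholders of my own, not his formalism:

```python
# Schematic sketch of the two criteria named above: small probability
# plus independent specification. The cutoff is a hypothetical
# placeholder, not a value taken from Dembski.
SMALL_PROBABILITY = 1e-50

def infer_design(probability: float, is_specified: bool) -> bool:
    """Infer design only for events that are both highly improbable
    under chance and necessity AND match an independent pattern."""
    return probability < SMALL_PROBABILITY and is_specified

p = (1 / 26) ** 100  # a particular 100-letter string

print(infer_design(p, is_specified=False))  # improbable but unspecified: False
print(infer_design(p, is_specified=True))   # improbable and specified: True
```

The point of the sketch is only that both conditions must hold: sheer improbability without an independent pattern does not trigger the inference.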

While admittedly the design inference constitutes a provisional, empirically based conclusion and not a proof (science can provide nothing more), it most emphatically does not constitute an argument from ignorance. Instead, the design inference from biological information constitutes an “inference to the best explanation.” Recent work on the method of “inference to the best explanation” (Lipton 1991; Meyer 1994b, 88-94) suggests that determining which among a set of competing possible explanations constitutes the best depends upon assessments of the causal powers of competing explanatory entities.

Causes that have the capability to produce the evidence in question constitute better explanations of that evidence than those that do not. This essay has evaluated and compared the causal efficacy of three broad categories of explanation—chance, necessity (and chance and necessity combined) and design—with respect to their ability to produce high information content. As we have seen, neither chance- nor necessity-based scenarios (nor those that combine the two) possess the ability to produce biological information in a prebiotic context. This result comports with our ordinary, uniform human experience. Brute matter—whether acting randomly or by necessity—does not have the capability to produce information-intensive systems or sequencing.

Yet it is not correct to say that we do not know how information arises. We know from experience that intelligent agents create information all the time. Indeed, experience teaches that whenever high information content is present in an artifact or entity whose causal story is known, creative intelligence—design—has invariably played a causal role in the origin of that entity. Moreover, citing the activity of an intelligent agent really does explain the origin of certain features such as, for example, the faces on Mount Rushmore or the inscriptions on the Rosetta Stone. (Imagine the absurdity of an archeologist who refused to infer an intelligent cause for the inscriptions on the Rosetta Stone because such an inference would constitute a scribe-of-the-gaps fallacy.) Inferences to design need not depend upon our ignorance; instead they are often justified by our knowledge of the demonstrated causal powers of nature and agency, respectively. Recent developments in the information sciences formalize this knowledge, helping us to make inferences about the causal histories of various artifacts, entities or events based upon the information-theoretic signatures they exhibit (Dembski 1998, 62-66). Thus knowledge (albeit provisional) of established cause-effect relationships, not ignorance, justifies the design inference as the best explanation for the origin of biological information in a prebiotic context.

Conclusion
During the last forty years, molecular biology has revealed a complexity and intricacy of design that exceeds anything that was imaginable during the late nineteenth century. We now know that organisms display any number of distinctive features of intelligently engineered “high-tech” systems: information storage and transfer capability; functioning codes (Wolfe 1993, 671-79); sorting and delivery systems (Wolfe 1993, 835-44); regulatory and feedback loops; signal transduction circuitry (Wolfe 1993, 237-53); and everywhere complex, mutually interdependent networks of parts (Behe 1996). Indeed, the complexity of the biomacromolecules discussed in this essay does not begin to exhaust the full complexity of living systems.

Norbert Wiener once said, “Information is information, neither energy nor matter. No materialism that fails to take account of this can survive the present day” (quoted in Gitt 1989, 5). The informational properties of living systems suggest that “no materialism” can suffice to explain the origin of life. Indeed, as molecular biology and the information sciences have revolutionized our understanding of the complexity of life, they have also made it progressively more difficult to conceive how life might have arisen naturalistically. The complexity and specificity of DNA, RNA and proteins simply exceed the creative capacities of the explanatory entities that scientific materialists employ. The origin-of-life research community has generated a multiplicity of explanations involving random and/or deterministic interactions of matter and energy. It has refused on principle to consider explanations that involve intelligent design.

Yet this methodological commitment to naturalistic explanation at all costs has created an unnecessary impasse. Experience teaches that information-rich systems (or, to use Dembski’s terminology, “small probability specifications”) invariably result from intelligent causes, not naturalistic ones. Yet origin-of-life biology has artificially limited its explanatory search to the naturalistic nodes of causation on Dembski’s explanatory filter: chance and necessity. Finding the best explanation, however, requires invoking causes that have the power to produce the effect in question. When it comes to information, we know of only one such cause. For this reason, the biology of the information age now requires a new science of design.
