12 June 2012
Natural genetic engineering in bacteria.
Bipedal goats and dogs.
Maze-solving slime mold, ferrets that see with their auditory cortex, fruit flies with inverted visual fields, and humans who “see” with their tongues.
These are some of the phenomena I’ve looked at in previous weeks in order to make the case that living beings possess a general ability to respond to challenges by means of appropriate compensation or adaptation.
I’ve been arguing that the existence of such a general power of “adaptivity” or “intelligent agency” cannot be explained by the theory of natural selection, but rather is the tacit presupposition that gives that theory its superficial plausibility.
But if natural selection cannot explain this power or capacity, what can? How is it possible for a physical system—the cell—to possess such a remarkable property? How can we best try to understand it scientifically?
Once we finally succeed in freeing our minds from the Darwinian style of thinking, our real work has only just begun.
In the weeks to come, I will be looking at some specific proposals for future research on the fundamental nature of life. Today, though, by way of preparation, I will sketch in some background—and make some important distinctions—that will clarify what those proposals can and cannot hope to accomplish.
* * *
There are so many things wrong with the Darwin-determined way most of us think about life that it is hard to know where to start.
Here is a short list of widely accepted ideas that dominate most discussions of life and evolution, either explicitly or implicitly:
- There is no deep difference between living and nonliving matter; therefore, it is idle to seek “essential” properties or a “definition” of life.
- In any case, the most fundamental fact about a living thing is its ability to undergo natural selection.
- Therefore, evolution—and hence replication—are conceptually more fundamental than physiology (or “metabolism”).
- Therefore, DNA is more important than the other main chemical components of the cell: proteins bound to water and/or lipid membranes.
- Therefore, genes are fundamental and the most important question to be asked about any functional trait is its evolutionary origin; everything else is just biochemical detail.
- In this way, the seemingly teleological and normative features of living things can be “reduced” to the effects of the genes, and so satisfactorily explained by the theory of random genetic mutation and natural selection.
I submit that every one of these ideas is fundamentally mistaken, and that progress towards understanding life depends upon affirming its contrary:
The Bioessentialist View of Life
- There is a fundamental difference in kind between living and nonliving systems; the main task of biology is to understand the distinctive nature of living matter.
- The most fundamental fact about a living thing is its ability, by doing work selectively, to maintain itself in existence as the kind of physical system that it is.
- Therefore, metabolism is conceptually more fundamental than evolution or replication; in fact, replication—and perhaps to some extent even evolution itself—are under metabolic control.
- Therefore, the active agents of the cell—proteins bound to water and/or lipid membranes—are more important than genes, which are just passive templates that the cell makes use of as needed to maintain itself in existence.
- Therefore, metabolism is fundamental and the most important thing to ask about any functional trait is not its evolutionary origin, but rather what contribution it makes to metabolism—that is, to maintaining the system of which it is a part in existence.
- Since the teleological and normative features of life cannot be reduced to the genes or adequately explained by the theory of natural selection, we must seek to explain them directly—as an irreducible (or “emergent”) property of the living state of matter itself.
There is no space here to discuss all of these claims separately in the detail they deserve. But I hope it is clear how each of these two diametrically opposed ways of looking at life stands or falls as a package deal.
What I wish to do in the rest of this column is discuss two further points, which are liable to be either misunderstood or else overlooked entirely:
- Adopting the bioessentialist view of life means rejecting the reductionist and mechanistic view of nature as fully reducible to its smallest constituent parts, in favor of an “emergentist” view of nature as a whole.
- On the bioessentialist view, the main thing we must seek to explain is not the origin of some trait or even of life itself, but rather how life really works—and this means understanding the form of stability that is distinctive of living systems.
Let’s look at these two points more closely.
What Is Emergence?
The idea of emergence is, roughly, the claim that at least some wholes are more than the sum of their parts—that there is something added when certain wholes come into being that was not already there, in some latent form, in the parts.
If true, this would mean that we cannot learn everything there is to know about such wholes by examining the parts alone—there is new knowledge to be gained by studying the whole as a whole.
The denial of emergence is called “reductionism”—the idea that wholes are “nothing but” the sum of their parts. If the world is as the reductionist says it is, then once we’ve learned everything there is to know about the pieces a whole is made up of, there is nothing more to know.
There is undoubtedly something satisfying about the reductive method. Revealing the “mechanisms” underlying a phenomenon intuitively feels like the best sort of explanation. Any explanation that falls short of that ideal seems second-class.
On the other hand, according to the standard big bang model in cosmology, the universe once contained nothing but “quark soup.” Now, it contains stars and planets and bacteria and ferns and dogs and cats—and us.
It’s a bit rich to say that somehow my dog Marty, barking at a passing car at this moment, was already there in the first femtosecond after the big bang, hidden somewhere deep inside the Schrödinger equation, needing only 13-odd billion years to become manifest.
It’s even richer to say that I myself am an “epiphenomenon” of the quark-level of reality—that only the quarks (or whatever you take the bottom level of the cosmic onion to be) are “really” real, while I and everything I see around me are nothing but a “mental projection” or some sort of “illusion.”
Whenever reductionists bring up illusions, I always want to ask: “Whose illusion?” How can I be subject to illusions if I don’t exist? And doesn’t the very concept of an illusion imply that some factual claims are right and others wrong? Where do these claims come from—where do right and wrong come from, where does science itself come from—if only the quarks “really” exist?
In short, it is self-refuting and incoherent to deny that the reality we see around us is really there. But if the macroscopic world is real, then a creative process must be at work in the world that brings new entities with new properties into existence as time goes by.
The real question is not whether emergence is real, but how best to understand it. The most sensible suggestion ties the idea of emergence to our understanding of fundamental physical principles.
The basic idea was explained 40 years ago by the distinguished Nobel Prize–winning condensed-matter physicist Philip W. Anderson, who wrote:
The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. . . . At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.(1)
Anderson then goes on to give an example of how the modern theory of condensed matter (liquids and solids) enables us to understand a relatively simple system that is macroscopic in relation to elementary particles—an ammonia molecule. He explains in simple terms how such concepts as symmetry breaking and phase transitions allow us to understand the system’s macroscopic properties, which are not otherwise deducible from the properties of its constituent particles.
Anderson concludes with these words:
In this case, we can see how the whole becomes not only more than but very different from the sum of its parts.(2)
He ends his brief paper by observing:
Surely there are more levels of organization between human ethology and DNA than there are between DNA and quantum electrodynamics, and each level can require a whole new conceptual structure.(3)
Though Anderson’s suggestion has been recently updated by another Nobel Prize-winning physicist, Robert B. Laughlin, and others, it is still not as widely known as it ought to be.(4) But it is slowly beginning to gain traction among both scientists and philosophers.
Philosopher of science Margaret Morrison, in particular, has stressed the fact that this physics-based approach provides at least a partial rebuttal to the familiar charge that emergence is little more than magic—“pixie dust,” as one critic has called it. Here is how she puts it:
Not only does [emergent physics] call into question the very idea that an understanding of the fundamental laws that govern the microphysical world can explain macro level phenomena, it also casts doubt on the claim that when the former strategy fails our understanding of physical behavior must be restricted to local models. The relation to higher level theoretical principles like symmetry breaking and localization shows that certain kinds of stable behavior, though not derivable from fundamental theory, can nevertheless be explained in a systematic way, one that doesn’t rely on the contingencies of particular situations.(5)
But, of course, even if emergent physics makes sense as a general proposition, we are still a long way from understanding how adaptivity or intelligent agency can emerge as a property of cells, in particular.
Next, let’s look at a few of the difficulties involved in making sense of life within a general framework of emergence.
Life as Functional Stability
For everything that exists, we can ask the question: How does it manage to go on existing as the kind of system that it is? In physics parlance, to ask this question is to raise the question of the system’s “stability.”
For example, it is in some respects still an open question why there is solid matter at all. But the basic answer seems to be—according to modern quantum field theory (QFT)—that in a piece of crystalline matter of a given kind there is an “effective field” (i.e., emergent field) that binds the lattice into a coherent whole through the exchange of force-carrier particles known as “phonons” (collective vibrational modes of the lattice).(6)
Similar effective fields are present in all the various forms of matter, though the particulars of the exchange particles will be different in each case. Nevertheless, all such instances will have a number of factors in common, as well. For example, they are all explained at a more fundamental level by the Pauli exclusion principle.
Note that physicists do not say: “The pieces of matter that just happen to have stuck together and survived in the past are the ones we see today.”
Now, it is a striking fact that cells and other forms of living matter are very “stable” in the sense that they continue to persist as the kind of system that they are for a length of time that is very long in comparison with the thermodynamic relaxation time of their constituent parts.
We know from simple observation that this “stability” of living systems is due to two factors: the intricate coordination of thousands of chemical reactions in space and time; and the ability of cells to find new regimes of successful functioning in response to perturbation, whether from within or without.
We might speak of this sort of stability as “dynamical stability,” and some authors do so. Certainly, this terminology captures an important aspect of the difference between cells and crystals.
But “dynamical stability” still does not go to the heart of the matter. That is because a number of nonliving systems are also in dynamical equilibrium—and so in that sense are “stable”—even though they are away from thermodynamic equilibrium and are in constant flux internally.
Among the best known of these cases are such natural phenomena as candle flames, hurricanes, and the Great Red Spot of Jupiter (top of page). Scientists have also invented a number of artificial systems that illustrate the same principles, such as Bénard cells and the Belousov-Zhabotinsky reaction.(7)
In all of these cases, the stability of the system is “dynamic” in the sense that its constituents are in constant motion, even while the overall system persists as the kind of system that it is. And yet, in each of these cases, we can explain the stability by reference to free-energy minimization under the constraint of a particular combination of energy flows and boundary conditions.
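To make this notion of dynamical stability concrete, here is a minimal sketch (my illustration, not from the original text) of the Brusselator, the textbook model of an oscillating chemical reaction associated with Prigogine, whose work is cited in note 7. The model is deliberately abstract: concentrations x and y are in constant flux, yet for suitable parameter values the system settles onto a stable limit cycle determined entirely by its throughput parameters (a, b), much as a candle flame persists while its constituents change. The function name and all numerical values are my own choices.

```python
# The Brusselator: a schematic open chemical system with
#   dx/dt = a + x^2 y - (b + 1) x
#   dy/dt = b x - x^2 y
# For b > 1 + a^2 the fixed point (x, y) = (a, b/a) is unstable and the
# concentrations oscillate forever on a limit cycle: dynamical stability
# in a nonliving, far-from-equilibrium system.
def brusselator(a=1.0, b=3.0, x=1.2, y=3.0, dt=0.001, steps=40000):
    """Integrate the Brusselator by simple Euler steps; return the x history."""
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = brusselator()
tail = xs[len(xs) // 2 :]  # discard the initial transient
print("x range on the attractor: %.2f to %.2f" % (min(tail), max(tail)))
```

The point of the sketch is the contrast drawn in the text: this kind of stability is fully explained by the model's fixed energy and matter flows (the parameters a and b), with no functional coordination involved.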
That is by no means the case when it comes to living systems.(8) Therefore, the concept of “dynamical stability” is too general to help us understand what distinguishes living from nonliving systems. We need a more precise notion.
I have suggested that we call the kind of stability that is typical of living systems, functional stability.(9) This terminology makes clear the most distinctive aspect of living beings: What allows them to go on existing as the kind of system that they are is the functional, or teleological, coordination of all the chemical reactions occurring inside them.
Note the difference between this concept of functional stability and the functional organization of a manmade machine. In the former case, the stability arises from within, presumably under some sort of global constraint arising from living matter itself. The stability in question is robust, flexible, adaptive, and—in a word—intelligent. The stability and the inherent intelligence—the teleology and the agency—are two sides of the same coin, and both must emerge somehow from the physical properties of the living state of matter.
In the case of a true machine, the functional order has nothing whatever to do with the matter out of which the machine is composed. It is imposed upon the matter entirely from without—by us. The material parts out of which a machine is made are supremely indifferent to the purpose the whole is designed by us to serve. Moreover, the stability of a machine resides in the rigidity—not the flexibility, much less the inherent intelligence—of its parts.
In contrast to what happens inside a machine, everything that goes on within a living being possesses an inherent purpose—namely, maintaining the organism in existence. That is the essential difference between living and nonliving things.
And that, above all, is what requires scientific explanation.
* * *
The pseudo-explanation of natural selection has blinded us to the importance of functional stability for too long. But the phenomenon is there, right in front of our eyes—both in the massive coherence and coordination of the biochemistry of life, and in the amazing adaptivity of living things to perturbation.
All the empirical evidence points to the existence of a fundamental power of intelligent agency underlying life. All we have to do is throw aside our mental blinders and look.
This does not mean that we currently possess the conceptual resources to explain intelligent agency as an emergent property of the living state of matter. It does mean that we need to start trying to develop such resources, if we ever wish to understand life and evolution in a fundamental way.
In future columns in this series, we’ll be looking at a number of scientists who are attempting to do just that.
(1) Philip W. Anderson, “More Is Different,” Science, 1972, 177: 393–396; p. 393.
(2) Ibid.; p. 395.
(3) Ibid.; p. 396.
(4) Robert B. Laughlin, et al., “The Middle Way,” Proceedings of the National Academy of Sciences, USA, 2000, 97: 32–37.
(5) Margaret Morrison, “Emergence, Reduction, and Theoretical Principles: Rethinking Fundamentality,” Philosophy of Science, 2006, 73: 876–887; p. 882.
(6) Howard M. Georgi, “Effective Quantum Field Theories,” in Paul Davies, ed., The New Physics (Cambridge UP, 1989), pp. 446–457.
(7) For a readable introduction to nonequilibrium thermodynamics, see Eric D. Schneider and Dorion Sagan, Into the Cool: Energy Flow, Thermodynamics, and Life (University of Chicago Press, 2005). For a more rigorous treatment, see Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures (John Wiley & Sons, 1998).
(8) I merely assert the point here as more or less self-evident, but for a closely reasoned argument, see Ernest Nagel, “Teleology Revisited,” The Journal of Philosophy, 1977, 74: 261–301 (reprinted in Ernest Nagel, Teleology Revisited and Other Essays in the Philosophy and History of Science [Columbia UP, 1979], pp. 275–316). For two other incisive discussions importantly related to this point, see Howard H. Pattee, “The Physics of Symbols: Bridging the Epistemic Cut,” BioSystems, 2001, 60: 5–21; and David L. Abel, The First Gene (LongView Press, 2011).
(9) See my Ph.D. dissertation, “Teleological Realism in Biology” (University of Notre Dame, 2011); pp. 181–188.