27 March 2012
It’s never much fun to be beaten to the punch. But when the person who beats you into print with a more elegant and incisive version of your own argument is Roger Scruton—well, one really has no right to complain.
Yesterday, I wrote a column called “Human Nature Watch 10: What Is Love?,” which was an effort to clear up some of the confusions contained in an otherwise pretty sensible New York Times column by Diane Ackerman about neural-imaging studies of the mental phenomena of bonding and love.
This morning, I learned that Scruton had written a similar piece a couple of weeks ago. His piece, which appeared in The Spectator on March 17, is called “Brain Drain,” and utterly destroys the pretensions of scientists and journalists alike to shed any real light on human nature—to teach us anything we did not already know—by means of fMRI technology.
Scruton’s essay could hardly be improved upon—except in one respect. But before criticizing him, I first want to share some of the juiciest tidbits to demonstrate what is so very right with his way of approaching this important issue.
He begins by reviewing some of the well-known, commonsense reasons for believing the brain is the physical locus of the mind:
There are many reasons for believing the brain is the seat of consciousness. Damage to the brain disrupts our mental processes; specific parts of the brain seem connected to specific mental capacities; and the nervous system, to which we owe movement, perception, sensation and bodily awareness, is a tangled mass of pathways, all of which end in the brain. This much was obvious to Hippocrates. Even Descartes, who believed in a radical divide between soul and body, acknowledged the special role of the brain in tying them together.
Next, he succinctly summarizes the reasoning of the scientists, philosophers, and journalists who believe that neural-imaging technology promises to shed light on human nature:
The discovery of brain imaging techniques has given rise to the belief that we can look at people’s thoughts and feelings, and see how “information” is “processed” in the head. The brain is seen as a computer, “hardwired” by evolution to deal with the long vanished problems of our hunter-gatherer ancestors, and operating in ways that are more transparent to the person with the scanner than to the person being scanned. Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically “humane” understanding of the human condition.
Then, Scruton expertly demonstrates that the facts rehearsed in the first paragraph by no means support an inference to the attitudes depicted in the second paragraph—to make the inference is to fall victim to a non sequitur:
. . . One by one, real but non-scientific disciplines are being rebranded as infant sciences, even though the only science involved has as yet little or nothing to say about them.
It seems to me that aesthetics, criticism, musicology and law are real disciplines, but not sciences. They are not concerned with explaining some aspect of the human condition but with understanding it, according to its own internal procedures. Rebrand them as branches of neuroscience and you don’t necessarily increase knowledge: in fact you might lose it. Brain imaging won’t help you to analyse Bach’s Art of Fugue or to interpret King Lear any more than it will unravel the concept of legal responsibility or deliver a proof of Goldbach’s conjecture; it won’t help you to understand the concept of God or to evaluate the proofs for His existence, nor will it show you why justice is a virtue and cowardice a vice. And it cannot fail to encourage the superstition which says that I am not a whole human being with mental and physical powers, but merely a brain in a box.
Finally, he identifies the nature of the failure of reasoning responsible for the non sequitur. It is the “homunculus” fallacy.
A homunculus is a “little man” inside the real, natural man, or human person. A thinker commits the homunculus fallacy when he takes a mental state that is properly attributable to a human person and ascribes it instead to the little man inside the person.
Why is the homunculus idea fallacious? For one thing, it doesn’t solve the problem of mental states, but merely moves them from one location (the person) to a different one (the homunculus), where the same problem arises all over again. Then, in order to explain how the homunculus itself is capable of having mental states, we must look inside of it for an even smaller homunculus, and so on. This obviously gives rise to an infinite regress.
For another thing, the homunculus idea commits a category mistake. To say that the primary visual cortex (V1) “sees” or that the anterior cingulate cortex “loves” is to speak nonsense, because these verbs are properly ascribable only to persons, not to brain tissue. The homunculus idea tempts us into applying mental concepts in contexts where they are simply not meaningful.
Though the homunculus fallacy has been known and discussed in the philosophical literature for decades, the new generation of scientists investigating human behavior and emotions by means of fMRI technology still falls victim to it. And where the scientists lead—even into a morass of fallacious reasoning—the science journalists are sure to follow.
Here is how Scruton puts this point:
Traditional attempts to understand consciousness were bedevilled by the “homunculus fallacy,” according to which consciousness is the work of the soul, the mind, the self, the inner entity that thinks and sees and feels and which is the real me inside. We cast no light on the consciousness of a human being simply by redescribing it as the consciousness of some inner homunculus. On the contrary, by placing that homunculus in some private, inaccessible and possibly immaterial realm, we merely compound the mystery.
As Max Bennett and Peter Hacker have argued (Philosophical Foundations of Neuroscience, 2003), this homunculus fallacy keeps coming back in another form. The homunculus is no longer a soul, but a brain, which “processes information,” “maps the world,” “constructs a picture of reality,” and so on—all expressions that we understand only because they describe conscious processes with which we are familiar. To describe the resulting “science” as an explanation of consciousness, when it merely reads back into the explanation the feature that needs to be explained, is not just unjustified—it is profoundly misleading, in creating the impression that consciousness is a feature of the brain, and not of the person.
This is all expressed with admirable lucidity. However, there is a respect in which Scruton’s (admittedly brief) essay falls short.
He very properly alludes to Wilhelm Dilthey’s famous distinction between two fundamentally different ways we have of acquiring knowledge of the world around us. Dilthey said that we comprehend the physical world through scientific explanation (erklären), but we comprehend the meaning of human actions through interpretative understanding (verstehen).(1)
Where Scruton goes wrong, in my judgment, is in leaving the reader with the impression that this division between scientific explanation and interpretative understanding is not merely contingent upon the present nature of scientific explanation, but rather is essential and necessary.
If that were indeed the case, then no future advances in science could narrow the divide even in principle, and the chasm between scientific explanation and interpretative understanding would remain forever unbridgeable. Scruton does not say this in so many words, but by failing to consider the alternative view that the division is merely contingent, he implies that he thinks it’s necessary.
This is a major mistake. Moreover, there’s nothing in what Scruton says in “Brain Drain,” or elsewhere (to my knowledge), that forces him to take such a position. He is a conservative thinker, but not a man of conventional religious faith. Though he’s quite sympathetic to religion, there’s no reason to believe he’s a substance dualist on religious grounds.(2)
Rather, I suspect he simply assumes there’s no coherent position in between materialist reductionism—which throws out interpretative understanding altogether as a legitimate mode of knowledge—and dualism—which views the human mind as occupying an ontological domain that stands separate and apart from the rest of reality.
In fact, however, there’s no need to choose between these two equally unattractive positions, because a third, intermediate position is available.
Actually, the two positions are not quite equally unappealing. If forced to choose between reductionism and dualism, one would have to opt for the latter. At least dualism preserves our ordinary, everyday sense of ourselves as autonomous agents whose actions are only fully intelligible in normative terms (which is what interpretative understanding boils down to).
But dualism has the very serious drawback of being inherently unstable, because unification is an essential feature of all comprehension, of both scientific explanation and interpretative understanding. We comprehend things to the extent that they hang together. To the extent that the various aspects of our experience fail to fit together, we fail to comprehend them.
This means we have every right to seek to comprehend how the human person fits in with the rest of nature. After all, human beings are both persons and physical objects in the world. Therefore, there must be some story to be told about how the two sides of the human being are connected with each other—even if present-day science is incapable of telling that story.
For this reason, dualism always runs the risk of degenerating into either materialist reductionism or supernaturalism. Otherwise, it seems tantamount to an arbitrary barrier against the advance of knowledge—an unprincipled refusal to inquire further.
To many of us, none of these options is acceptable. But what is the third way—the alternative to both reductionism and dualism—that I mentioned earlier?
In a nutshell, it’s emergentism.
What is emergentism? That’s controversial, and not easy to summarize. But briefly, it’s the idea that the world consists of levels, and that while the higher levels are of course governed by the laws characteristic of the lower levels, the phenomena populating the higher levels also possess properties and are governed by laws that are distinctive of those higher levels. Such phenomena simply do not exist at the lower levels.
Or, more succinctly: Not everything about the world is determined by the lowest-level laws. Cosmic evolution is a creative process which over time has repeatedly brought into existence new phenomena with new properties governed by new laws.
With respect to the topic of this post—the relationship between the human mind and the brain—the emergentist admits that the mind is “nothing but” the activity of the brain, but he sees the brain very differently from the way the reductionist sees it.
For the emergentist, the activity of the brain is far more complex than merely the activity of individual neurons—the formation and dissolution of synaptic connections, the release of neurotransmitters, and so on.
Moreover, this complexity is a qualitative, not just a quantitative, matter. It involves the globally coherent and cooperative activity of large-scale nerve-cell assemblies, each with its own modicum of agency contributing to the unified agency of the whole brain, as described in the work of Walter J. Freeman and others.(3)
In this way, the normative agency of the human person can be understood simultaneously as corresponding to brain activity and as utterly irreducible to anything visible by means of today’s horribly crude fMRI technology.
On this sort of emergentist view of the mind-brain relationship, everything Scruton says about the mistake of conflating reductionist explanation with interpretative understanding remains true. What is added is this:
Our present inability to bridge the gap between the two domains is merely a function of our ignorance—of our contingent position in the history of science—and not due to any essential feature of reality or permanent incapacity of the human mind.
(1) I have discussed Scruton’s use of Dilthey’s distinction at somewhat greater length in an earlier column: “A Cure Worse Than the Disease—Darwinism vs. Relativism II: Roger Scruton.”
(2) See Scruton’s recent Gifford Lectures, just published in the U.K. as The Face of God (Continuum, 2012).
(3) Walter J. Freeman, How Brains Make Up Their Minds (Columbia UP, 2000). See also Anthony Chemero, Radical Embodied Cognitive Science (MIT Press, 2009); Robert Hanna and Michelle Maiese, Embodied Minds in Action (Oxford UP, 2009); Alicia Juarrero, Dynamics in Action: Intentional Behavior as a Complex System (MIT Press, 1999); Fred Keijzer, Representation and Reality (MIT Press, 2001); J.A. Scott Kelso, Dynamic Patterns: The Self-Organization of Brain and Behavior (MIT Press, 1995); Robert F. Port and Timothy van Gelder, eds., Mind as Motion: Explorations in the Dynamics of Cognition (MIT Press, 1995); Evan Thompson, Mind in Life: Biology, Phenomenology, and the Sciences of Mind (Harvard UP, 2007); and Giuseppe Vitiello, My Double Unveiled: The Dissipative Quantum Model of Brain (John Benjamins, 2001).