Being a bit
behind the curve, I had only just heard of the digital revolution
when Louis Rossetto, co-founder of Wired magazine, wearing a
shirt with no collar and his hair as long as Felix Mendelssohn’s,
looking every inch the young California visionary, gave a speech
before the Cato Institute announcing the dawn of the twenty-first
century’s digital civilization. As his text, he chose Teilhard de
Chardin’s prediction fifty years ago that radio, television, and
computers would create a “noosphere,” an electronic membrane
covering the earth and wiring all humanity together in a single
nervous system. Geographic locations, national boundaries, the old
notions of markets and political processes—all would become
irrelevant. With the Internet spreading over the globe at an
astonishing pace, said Rossetto, that marvelous modem-driven moment
is almost at hand.
Could be. But something tells me that
within ten years, by 2010, the entire digital universe is going to
seem like pretty mundane stuff compared to a new technology that
right now is but a mere glow radiating from a tiny number of
American and Cuban (yes, Cuban) hospitals and laboratories. It is
called brain imaging, and anyone who cares to get up early and
catch a truly blinding twenty-first-century dawn will want to keep
an eye on it.
Brain imaging refers to techniques for
watching the human brain as it functions, in real time. The most
advanced forms currently are three-dimensional
electroencephalography using mathematical models; the more familiar
PET scan (positron-emission tomography); the new fMRI (functional
magnetic resonance imaging), which shows brain blood-flow patterns;
MRS (magnetic resonance spectroscopy), which measures
biochemical changes in the brain; and the even newer PET reporter
gene/PET reporter probe, which is, in fact, so new that it still
has that length of heavy lumber for a name. Used so far only in
animals and a few desperately sick children, the PET reporter
gene/PET reporter probe pinpoints and follows the activity of
specific genes. On a scanner screen you can actually see the genes
light up inside the brain.
By the standards of the year 2000,
these are sophisticated devices. Ten years from now, however, they
may seem primitive compared to the stunning new windows into the
brain that will have been developed.
Brain imaging was invented for medical
diagnosis. But its far greater importance is that it may very well
confirm, in ways too precise to be disputed, current
neuroscientific theories about “the mind,” “the self,” “the soul,”
and “free will.” Granted, all those skeptical quotation marks are
enough to put anybody on the qui vive right
away, but Ultimate Skepticism is part of the brilliance of the dawn
I have promised.
Neuroscience, the science of the brain
and the central nervous system, is on the threshold of a unified
theory that will have an impact as powerful as that of Darwinism a
hundred years ago. Already there is a new Darwin, or perhaps I
should say an updated Darwin, since no one ever believed more
religiously in Darwin the First than does he: Edward O.
Wilson.
As we have seen, Wilson has created and
named the new field of sociobiology, and he has compressed its
underlying premise into a single sentence. Every human brain, he
says, is born not as a blank tablet (a tabula
rasa) waiting to be filled in by experience but as “an
exposed negative waiting to be slipped into developer fluid.” (See
page 81, above.) You can develop the negative well or you can
develop it poorly, but either way you are going to get precious
little that is not already imprinted on the film. The print is the
individual’s genetic history, over thousands of years of evolution,
and there is not much anybody can do about it. Furthermore, says
Wilson, genetics determine not only things such as temperament,
role preferences, emotional responses, and levels of aggression but
also many of our most revered moral choices, which are not choices
at all in any free-will sense but tendencies imprinted in the
hypothalamus and limbic regions of the brain, a concept expanded
upon in 1993 in a much-talked-about book, The Moral
Sense, by James Q. Wilson (no kin to Edward
O.).
This, the neuroscientific view of life, has become the strategic high ground in the academic world, and the battle for it has already spread well beyond the scientific disciplines and, for that matter, out into the general public. Both liberals and conservatives without a scientific bone in their bodies are busy trying to seize the terrain. The gay rights movement, for example, has fastened onto a study, published in July 1993 by the highly respected Dean Hamer of the National Institutes of Health, announcing the discovery of “the gay gene.” Obviously, if homosexuality is a genetically determined trait, like left-handedness or hazel eyes, then laws and sanctions against it are attempts to legislate against Nature. Conservatives, meantime, have fastened upon studies indicating that men’s and women’s brains are wired so differently, thanks to the long haul of evolution, that feminist attempts to open up traditionally male roles to women are the same thing: a doomed violation of Nature.
Wilson himself has wound up in deep
water on this score; or cold water, if one need edit. In his
personal life Wilson is a conventional liberal, PC, as the saying
goes—he is, after all, a member of the Harvard faculty—concerned
about environmental issues and all the usual things. But he has
said that “forcing similar role identities” on both men and women
“flies in the face of thousands of years in which mammals
demonstrated a strong tendency for sexual division of labor. Since
this division of labor is persistent from hunter-gatherer through
agricultural and industrial societies, it suggests a genetic
origin. We do not know when this trait evolved in human evolution
or how resistant it is to the continuing and justified pressures
for human rights.”
“Resistant” was Darwin II, the
neuroscientist, speaking. “Justified” was the PC Harvard liberal.
He was not PC or liberal enough. As we have already seen,
protesters invaded the annual meeting of the American Association for
the Advancement of Science, where Wilson was appearing, dumped a
pitcher of ice water, cubes and all, over his head, and began
chanting, “You’re all wet! You’re all wet!” The most prominent
feminist in America, Gloria Steinem, went on television and, in an
interview with John Stossel of ABC, insisted that studies of
genetic differences between male and female nervous systems should
cease forthwith.
But that turned out to be mild stuff in
the current political panic over neuroscience. In February 1992,
Frederick K. Goodwin, a renowned psychiatrist, head of the federal
Alcohol, Drug Abuse, and Mental Health Administration, and a
certified yokel in the field of public relations, made the mistake
of describing, at a public meeting in Washington, the National
Institute of Mental Health’s ten-year-old Violence Initiative. This
was an experimental program whose hypothesis was that, as among
monkeys in the jungle—Goodwin was noted for his monkey studies—much
of the criminal mayhem in the United States was caused by a
relatively few young males who were genetically predisposed to it;
who were hardwired for violent crime, in short. Out in the jungle,
among mankind’s closest animal relatives, the chimpanzees, it
seemed that a handful of genetically twisted young males were the
ones who committed practically all the
wanton murders of other males and the physical abuse of females.
What if the same were true among human beings? What if, in any
given community, it turned out to be a handful of young males with
toxic DNA who were pushing statistics for violent crime up to such
high levels? The Violence Initiative envisioned identifying these
individuals in childhood, somehow, some way, someday, and treating
them therapeutically with drugs. The notion that crime-ridden urban
America was a “jungle,” said Goodwin, was perhaps more than just a
tired old metaphor.
That did it. That may have been the
stupidest single word uttered by an American public official in the
year 1992. The outcry was immediate. Senator Edward Kennedy of
Massachusetts and Representative John Dingell of Michigan (who, it
became obvious later, suffered from hydrophobia when it came to
science projects) not only condemned Goodwin’s remarks as racist
but also delivered their scientific verdict: Research among
primates “is a preposterous basis” for analyzing anything as
complex as “the crime and violence that plagues our country today.”
(This came as surprising news to NASA scientists who had first
trained and sent a chimpanzee called Ham up on top of a Redstone
rocket into suborbital space flight and then trained and sent
another one, called Enos, which is Hebrew for “man,” up on an Atlas
rocket and around the earth in orbital space flight and had thereby
accurately and completely predicted the physical, psychological,
and task-motor responses of the human astronauts Alan Shepard and
John Glenn, who repeated the chimpanzees’ flights and tasks months
later.) The Violence Initiative was compared to Nazi eugenic
proposals for the extermination of undesirables. Dingell’s Michigan
colleague, Representative John Conyers, then chairman of the
Government Operations Committee and senior member of the
Congressional Black Caucus, demanded Goodwin’s resignation—and got
it two days later, whereupon the government, with the Department of
Health and Human Services now doing the talking, denied that the
Violence Initiative had ever existed. It disappeared down the
memory hole, to use Orwell’s term.
A conference of criminologists and
other academics interested in the neuroscientific studies done so
far for the Violence Initiative—a conference underwritten in part
by a grant from the National Institutes of Health—had been
scheduled for May 1993 at the University of Maryland. Down went the
conference, too; the NIH drowned it like a kitten. A University of
Maryland legal scholar named David Wasserman tried to reassemble
the troops on the Q.T., as it were, in a hall all but hidden from
human purview in a hamlet called Queenstown in the foggy, boggy
boondocks of Queen Anne’s County on Maryland’s Eastern Shore. (The
Clinton administration tucked Elian Gonzalez away in this same
county while waiting for the Cuban-American vote to chill before
the Feds handed the boy over to Fidel Castro.) The NIH, proving it
was a hard learner, quietly provided $133,000 for the event, but
only after Wasserman promised to fireproof the proceedings by also
inviting scholars who rejected the notion of a possible genetic
genesis of crime and scheduling a cold-shower session dwelling on
the evils of the eugenics movement of the early twentieth century.
No use, boys! An army of protesters found the poor cringing devils
anyway and stormed into the auditorium chanting, “Maryland
conference, you can’t hide—we know you’re pushing genocide!” It
took two hours for them to get bored enough to leave, and the
conference ended in a complete puddle, with the specially recruited
fireproofing PC faction issuing a statement that said: “Scientists
as well as historians and sociologists must not allow themselves to
provide academic respectability for racist pseudoscience.” Today,
at the NIH, the term Violence Initiative is a synonym for
taboo. The present moment resembles that
moment in the Middle Ages when the Catholic Church forbade the
dissection of human bodies, for fear that what was discovered
inside might cast doubt on the Christian doctrine that God created
man in his own image.
Even more radioactive is the matter of
intelligence, as measured by IQ tests. Privately—not many care to
speak out—the vast majority of neuroscientists believe the genetic
component of an individual’s intelligence is remarkably high. Your
intelligence can be improved upon, by skilled and devoted mentors,
or it can be held back by a poor upbringing—i.e., the negative can
be well developed or poorly developed—but your genes are what
really make the difference. The recent ruckus over Charles Murray
and Richard Herrnstein’s The Bell Curve is
probably just the beginning of the bitterness the subject is going
to create.
Not long ago, according to two
neuroscientists I interviewed, a firm called Neurometrics sought
out investors and tried to market an amazing but simple invention
known as the IQ Cap. The idea was to provide a way of testing
intelligence that would be free of “cultural bias,” one that would
not force anyone to deal with words or concepts that might be
familiar to people from one culture but not to people from another.
The IQ Cap recorded only brain waves; and a computer, not a
potentially biased human test-giver, analyzed the results. It was
based on the work of neuroscientists such as E. Roy
John, who is now one of the major
pioneers of electroencephalographic brain imaging; Duilio
Giannitrapani, author of The Electrophysiology of
Intellectual Functions; and David Robinson, author of
The Wechsler Adult Intelligence Scale and
Personality Assessment: Toward a
Biologically Based Theory of Intelligence and Cognition and
many other monographs famous among neuroscientists. I spoke to one
researcher who had devised an IQ Cap himself by replicating an
experiment described by Giannitrapani in The
Electrophysiology of Intellectual Functions. It was not a
complicated process. You attached sixteen electrodes to the scalp
of the person you wanted to test. You had to muss up his hair a
little, but you didn’t have to cut it, much less shave it. Then you
had him stare at a marker on a blank wall. This particular
researcher used a raspberry-red thumbtack. Then you pushed a toggle
switch. In sixteen seconds the Cap’s computer box gave you an
accurate prediction (within one-half of a standard deviation) of
what the subject would score on all eleven subtests of the Wechsler
Adult Intelligence Scale or, in the case of children, the Wechsler
Intelligence Scale for Children—all from sixteen seconds’ worth of
brain waves. There was nothing culturally biased about the test
whatsoever. What could be cultural about staring at a thumbtack on
a wall? The savings in time and money were breathtaking. The
conventional IQ test took two hours to complete; and the overhead,
in terms of paying test-givers, test-scorers, test-preparers, and
the rent, was $100 an hour at the very least. The IQ Cap required
about fifteen minutes and sixteen seconds—it took about fifteen
minutes to put the electrodes on the scalp—and about a tenth of a
penny’s worth of electricity. Neurometrics’s investors were rubbing
their hands and licking their chops. They were about to make a
killing.
In fact—nobody wanted
their damnable IQ Cap!
It wasn’t simply that no one
believed you could derive IQ scores from
brain waves—it was that nobody wanted to
believe it could be done. Nobody wanted to
believe that human brainpower is … that
hardwired. Nobody wanted to learn in a flash that …
the genetic fix is in. Nobody wanted to
learn that he was … a hardwired genetic
mediocrity … and that the best he could hope for in this
Trough of Mortal Error was to live out his mediocre life as a
stress-free dim bulb. Barry Sterman of UCLA, chief scientist for a
firm called Cognitive Neurometrics, who has devised his own
brain-wave technology for market research and focus groups, regards
brain-wave IQ testing as possible—but in the current atmosphere you
“wouldn’t have a Chinaman’s chance of getting a grant” to develop
it.
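
To make the Cap’s mechanics concrete: what such a device would have to do is reduce sixteen seconds of sixteen-channel brain waves to a handful of spectral features and map them, through a statistical model calibrated in advance against real Wechsler scores, onto the eleven subtests. The sketch below, in Python, is a minimal illustration of that pipeline; the sampling rate, the frequency-band choices, and the placeholder weights are my assumptions for illustration, not Neurometrics’s actual method.

```python
import numpy as np

# Illustrative sketch of the IQ Cap pipeline as described: 16 electrodes,
# 16 seconds of EEG, and a precomputed statistical model mapping
# brain-wave features to the 11 Wechsler (WAIS) subtest scores.
# Band definitions, sampling rate, and the linear model are assumptions,
# not Neurometrics's actual algorithm.

FS = 256          # sampling rate in Hz (assumed)
N_CHANNELS = 16   # electrodes on the scalp
DURATION_S = 16   # seconds of recording
N_SUBTESTS = 11   # WAIS subtests to predict

# Classic EEG frequency bands (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray) -> np.ndarray:
    """Reduce (channels, samples) EEG to log mean power per channel per band."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(spectrum[:, mask].mean(axis=1))
    return np.log(np.concatenate(feats))  # 16 channels x 4 bands = 64 features

# A "precomputed" linear model: real weights would come from calibrating
# against actual WAIS scores; random here, purely as a placeholder.
rng = np.random.default_rng(0)
W = rng.normal(size=(N_SUBTESTS, N_CHANNELS * len(BANDS)))
b = np.full(N_SUBTESTS, 10.0)  # WAIS subtests are scaled to a mean of 10

def predict_wais(eeg: np.ndarray) -> np.ndarray:
    """Predict the 11 subtest scores from 16 seconds of brain waves."""
    return W @ band_powers(eeg) + b

# Sixteen seconds of (simulated) staring at a raspberry-red thumbtack:
eeg = rng.normal(size=(N_CHANNELS, FS * DURATION_S))
print(predict_wais(eeg).round(1))
```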
Here we begin to sense the chill that emanates from the hottest field in the academic world. The unspoken and largely unconscious premise of the wrangling over neuroscience’s strategic high ground is: We now live in an age in which science is a court from which there is no appeal. And the issue this time around, at the beginning of the twenty-first century, is not the evolution of the species, which can seem a remote business, but the nature of our own precious inner selves.
The elders of the field, such as
Wilson, are well aware of all this and are cautious, or cautious
compared to the new generation. Wilson still holds out the
possibility—I think he doubts it, but he still holds out the
possibility—that at some point in evolutionary history culture
began to influence the development of the human brain in ways that
cannot be explained by strict Darwinian theory. But the new
generation of neuroscientists are not cautious for a second. In
private conversations, the bull sessions, as it were, that create
the mental atmosphere of any hot new science—and I love talking to
these people—they express an uncompromising
determinism.
They start with the second most famous
statement in all of modern philosophy, Descartes’s “Cogito ergo sum,” “I think, therefore I am,” which they
regard as the essence of “dualism,” the old-fashioned notion that
the mind is something distinct from its mechanism, the brain and
the body. (I will get to the most famous statement in a moment.)
This is also known as the “ghost in the machine” fallacy, the
quaint belief that there is a ghostly “self” somewhere inside the
brain that interprets and directs its operations. Neuroscientists
involved in three-dimensional electroencephalography will tell you
that there is not even any one place in the brain where
consciousness or self-consciousness (Cogito ergo
sum) is located. This is merely an illusion created by a
medley of neurological systems acting in concert. The young
generation takes this yet one step further. Since consciousness and
thought are entirely physical products of your brain and nervous
system—and since your brain arrived fully imprinted at birth—what
makes you think you have free will? Where is it going to come from?
What “ghost,” what “mind,” what “self,” what “soul,” what anything
that will not be immediately grabbed by those scornful quotation
marks is going to bubble up your brain stem to give it to you? I
have heard neuroscientists theorize that, given computers of
sufficient power and sophistication, it would be possible to
predict the course of any human being’s life moment by moment,
including the fact that the poor devil was about to shake his head
over the very idea. I doubt that any Calvinist of the sixteenth
century ever believed so completely in predestination as these, the
hottest and most intensely rational young scientists in the United
States in the twenty-first.
Since the late 1970s, in the Age of
Wilson, college students have been heading into neuroscience in job
lots. The Society for Neuroscience was founded in 1970 with 1,100
members. Today, one generation later, its membership exceeds
26,000. The society’s latest convention, in Miami, drew more than
20,000 souls, making it one of the biggest professional conventions
in the country. In the venerable field of academic philosophy,
young faculty members are jumping ship in embarrassing numbers and
shifting into neuroscience. They are heading for the laboratories.
Why wrestle with Kant’s God, Freedom, and Immortality when it is
only a matter of time before neuroscience, probably through brain
imaging, reveals the actual physical mechanism that fabricates
these mental constructs, these illusions?
Which brings us to the most famous
statement in all of modern philosophy: Nietzsche’s “God is dead.”
The year was 1882. The book was Die Fröhliche
Wissenschaft (The Gay Science).
Nietzsche said this was not a declaration of atheism, although he
was in fact an atheist, but simply the news of an event. He called
the death of God a “tremendous event,” the greatest event of modern
history. The news was that educated people no longer believed in
God, as a result of the rise of rationalism and scientific thought,
including Darwinism, over the preceding 250 years. But before you
atheists run up your flags of triumph, he said, think of the
implications. “The story I have to tell,” wrote Nietzsche, “is the
history of the next two centuries.” He predicted (in Ecce Homo) that the twentieth century would be a century
of “wars such as have never happened on earth,” wars catastrophic
beyond all imagining. And why? Because human beings would no longer
have a god to turn to, to absolve them of their guilt; but they
would still be racked by guilt, since guilt is an impulse instilled
in children when they are very young, before the age of reason. As
a result, people would loathe not only one another but themselves.
The blind and reassuring faith they formerly poured into their
belief in God, said Nietzsche, they would now pour into a belief in
barbaric nationalistic brotherhoods: “If the doctrines … of the
lack of any cardinal distinction between man and animal, doctrines
I consider true but deadly”—he says in an allusion to Darwinism in
Untimely Meditations—“are hurled into the
people for another generation … then nobody should be surprised
when … brotherhoods with the aim of the robbery and exploitation of
the non-brothers … will appear in the arena of the
future.”
Nietzsche’s view of guilt,
incidentally, is also that of neuroscientists a century later. They
regard guilt as one of those tendencies imprinted in the brain at
birth. In some people the genetic work is not complete, and they
engage in criminal behavior without a twinge of remorse—thereby
intriguing criminologists, who then want to create Violence
Initiatives and hold conferences on the subject.
Nietzsche said that mankind would limp
on through the twentieth century “on the mere pittance” of the old
decaying God-based moral codes. But then, in the twenty-first,
would come a period more dreadful than the great wars, a time of
“the total eclipse of all values” (in The Will to
Power). This would also be a frantic period of
“revaluation,” in which people would try to find new systems of
values to replace the osteoporotic skeletons of the old. But you
will fail, he warned, because you cannot believe in moral codes
without simultaneously believing in a god who points at you with
his fearsome forefinger and says “Thou shalt” or “Thou shalt
not.”
Why should we bother ourselves with a
dire prediction that seems so far-fetched as “the total eclipse of
all values”? Because of man’s track record, I should think. After
all, in Europe, in the peaceful decade of the 1880s, it must have
seemed even more far-fetched to predict the world wars of the
twentieth century and the barbaric brotherhoods of Nazism and
Communism. Ecce vates! Ecce vates! Behold
the prophet! How much more proof can one demand of a man’s powers
of prediction?
A hundred years ago those who worried
about the death of God could console one another with the fact that
they still had their own bright selves and their own inviolable
souls for moral ballast and the marvels of modern science to chart
the way. But what if, as seems likely, the greatest marvel of
modern science turns out to be brain imaging? And what if, ten
years from now, brain imaging has proved, beyond any doubt, that
not only Edward O. Wilson but also the young generation are, in
fact, correct?
The elders, such as Wilson himself and
Daniel C. Dennett, the author of Darwin’s Dangerous
Idea: Evolution and the Meanings of Life, and Richard
Dawkins, author of The Selfish Gene and
The Blind Watchmaker, insist that there is
nothing to fear from the truth, from the ultimate extension of
Darwin’s dangerous idea. They present elegant arguments as to why
neuroscience should in no way diminish the richness of life, the
magic of art, or the righteousness of political causes, including,
if one need edit, political correctness at Harvard or Tufts, where
Dennett is Director of the Center for Cognitive Studies, or Oxford,
where Dawkins is something called Professor of Public Understanding
of Science. (Dennett and Dawkins, every bit as much as Wilson, are
earnestly, feverishly, politically correct.) Despite their best
efforts, however, neuroscience is not rippling out into the public
on waves of scholarly reassurance. But rippling out it is, rapidly.
The conclusion people out beyond the laboratory walls are drawing
is: The fix is in! We’re all hardwired! That, and:
Don’t blame me! I’m wired wrong!
This sudden switch from a belief in Nurture, in the form of social conditioning, to Nature, in the form of genetics and brain physiology, is the great intellectual event, to borrow Nietzsche’s term, of the late twentieth century. Up to now the two most influential ideas of the century have been Marxism and Freudianism (see page 82). Both were founded upon the premise that human beings and their “ideals”—Marx and Freud knew about quotation marks, too—are completely molded by their environment. To Marx, the crucial environment was one’s social class; “ideals” and “faiths” were notions foisted by the upper orders upon the lower as instruments of social control. To Freud, the crucial environment was the Oedipal drama, the unconscious sexual plot that was played out in the family early in a child’s existence. The “ideals” and “faiths” you prize so much are merely the parlor furniture you feature for receiving your guests, said Freud; I will show you the cellar, the furnace, the pipes, the sexual steam that actually runs the house. By the mid-1950s even anti-Marxists and anti-Freudians had come to assume the centrality of class domination and Oedipally conditioned sexual drives. On top of this came Pavlov, with his “stimulus-response bonds,” and B. F. Skinner, with his “operant conditioning,” turning the supremacy of conditioning into something approaching a precise form of engineering.
So how did this brilliant intellectual
fashion come to so screeching and ignominious an end?
The demise of Freudianism can be summed
up in a single word: lithium. In 1949 an Australian psychiatrist,
John Cade, gave five days of lithium therapy—for entirely the wrong
reasons—to a fifty-one-year-old mental patient who was so
manic-depressive, so hyperactive, unintelligible, and
uncontrollable, he had been kept locked up in asylums for twenty
years. By the sixth day, thanks to the lithium buildup in his
blood, he was a normal human being. Three months later he was
released and lived happily ever after in his own home. This was a
man who had been locked up and subjected to two decades of Freudian
logorrhea to no avail whatsoever. Over the next twenty years
antidepressant and tranquillizing drugs completely replaced
Freudian talk-talk as treatment for severe mental disturbances. By
the mid-1980s, neuroscientists looked upon Freudian psychiatry as a
quaint relic based largely upon superstition (such as dream
analysis—dream analysis!), like phrenology
or mesmerism. In fact, among neuroscientists, phrenology now has a
higher reputation than Freudian psychiatry, since phrenology was in
a certain crude way a precursor of electroencephalography. Freudian
psychiatrists are now regarded as quacks with sham medical degrees,
as ears that people with more money than sense can hire to talk
into.
Marxism was finished off even more
suddenly—in a single year, 1973—with the smuggling out of the
Soviet Union and the publication in France of the first of the
three volumes of Aleksandr Solzhenitsyn’s The Gulag
Archipelago. Other writers, notably the British historian
Robert Conquest, had already exposed the Soviet Union’s vast
network of concentration camps, but their work was based largely on
the testimony of refugees, and refugees were routinely discounted
as biased and bitter observers. Solzhenitsyn, on the other hand,
was a Soviet citizen, still living on Soviet soil, a zek himself for eleven years, zek
being Russian slang for concentration-camp prisoner. His
credibility had been vouched for by none other than Nikita
Khrushchev, who in 1962 had permitted the publication of
Solzhenitsyn’s novella of the gulag, One Day in the
Life of Ivan Denisovich, as a means of cutting down to size
the daunting shadow of his predecessor Stalin. “Yes,” Khrushchev
had said in effect, “what this man Solzhenitsyn has to say is true.
Such were Stalin’s crimes.” Solzhenitsyn’s brief fictional
description of the Soviet slave labor system was damaging enough.
But The Gulag Archipelago, a
two-thousand-page, densely detailed, nonfiction account of the
Soviet Communist Party’s systematic extermination of its enemies,
real and imagined, of its own countrymen, by the
tens of millions, through an enormous, methodical,
bureaucratically controlled “human sewage disposal system,” as
Solzhenitsyn called it—The Gulag Archipelago
was devastating. After all, this was a century in which there was
no longer any possible ideological detour around the concentration
camp. Among European intellectuals, even French intellectuals,
Marxism collapsed as a spiritual force immediately. Ironically, it
survived longer in the United States before suffering a final,
merciful coup de grace on November 9, 1989,
with the breaching of the Berlin Wall, which signaled in an
unmistakable fashion what a debacle the Soviets’ seventy-two-year
field experiment in socialism had been. (Marxism still hangs on,
barely, acrobatically, in American universities in a Mannerist form
known as Deconstruction, a literary doctrine that depicts language
itself as an insidious tool used by the powers that be to deceive
the proles and peasants.)
Freudianism and Marxism—and with them,
the entire belief in social conditioning—were demolished so
swiftly, so suddenly, that neuroscience has surged in, as if into
an intellectual vacuum. Nor do you have to be a scientist to detect
the rush.
Anyone with a child in school knows the
signs all too well. I am intrigued by the faith parents now
invest—the craze began about 1990—in psychologists who diagnose
their children as suffering from a defect known as attention
deficit disorder, or ADD. Of course, I have no way of knowing
whether this “disorder” is an actual, physical, neurological
condition or not, but neither does anybody else in this early stage
of neuroscience. The symptoms of this supposed malady are always
the same. The child or, rather, the boy—forty-nine out of fifty
cases are boys—fidgets around in school, slides off his chair,
doesn’t pay attention, distracts his classmates during class, and
performs poorly. In an earlier era he would have been pressured to
pay attention, work harder, show some self-discipline. To parents
caught up in the new intellectual climate of the 1990s, that
approach seems cruel, because my little boy’s problem is …
he’s wired wrong! The poor little
tyke—the fix has been in since birth!
Invariably the parents complain, “All he wants to do is sit in
front of the television set and watch cartoons and play Sega
Genesis.” For how long? “How long? For hours at a time.” Hours at a
time; as even any young neuroscientist will tell you, that boy may
have a problem, but it is not an attention deficit.
Nevertheless, all across America we
have the spectacle of an entire generation of little boys, by the
tens of thousands, being dosed up on ADD’s magic bullet of choice,
Ritalin, the CIBA-Geneva Corporation’s brand name for the stimulant
methylphenidate. I first encountered Ritalin in 1966, when I was in
San Francisco doing research for a book on the psychedelic or
hippie movement. A certain species of the genus hippie was known as
the Speed Freak, and a certain strain of Speed Freak was known as
the Ritalin Head. The Ritalin Heads loved Ritalin. You’d see them
in the throes of absolute Ritalin raptures … Not a wiggle, not a
peep … They would sit engrossed in anything at
all … a manhole cover, their own palm wrinkles …
indefinitely … through shoulda-been mealtime after mealtime …
through raging insomnias … Pure methylphenidate nirvana … From 1990
to 1995, CIBA-Geneva’s sales of Ritalin rose 600 percent; and not
because of the appetites of subsets of the species Speed Freak in
San Francisco, either. It was because an entire generation of
American boys, from the best private schools of the Northeast to
the worst sludge-trap public schools of Los Angeles and San Diego,
was now strung out on methylphenidate, diligently doled out to them
every day by their connection, the school nurse. America is a
wonderful country! I mean it! No honest writer would challenge that
statement! The human comedy never runs out of material! It never
lets you down!
Meantime, the notion of a self—a self
who exercises self-discipline, postpones gratification, curbs the
sexual appetite, stops short of aggression and criminal behavior—a
self who can become more intelligent and lift itself to the very
peaks of life by its own bootstraps through study, practice,
perseverance, and refusal to give up in the face of great odds—this
old-fashioned notion (what’s a bootstrap, for God’s sake?) of
success through enterprise and true grit is already slipping away,
slipping away … slipping away … The peculiarly American faith in
the power of the individual to transform himself from a helpless
cypher into a giant among men, a faith that ran from Emerson
(“Self-Reliance”) to Horatio Alger’s Luck and
Pluck stories to Dale Carnegie’s How to Win
Friends and Influence People to Norman Vincent Peale’s
The Power of Positive Thinking to Og
Mandino’s The Greatest Salesman in the
World—that faith is now as moribund as the god for whom
Nietzsche wrote an obituary in 1882. It lives on today only in the
decrepit form of the “motivational talk,” as lecture agents refer
to it, given by retired football stars such as Fran Tarkenton to
audiences of businessmen, most of them woulda-been athletes (like
the author of this article), about how life is like a football
game. “It’s late in the fourth period and you’re down by thirteen
points and the Cowboys got you hemmed in on your own one-yard line
and it’s third and twenty-three. Whaddaya do? …”
Sorry, Fran, but it’s third and
twenty-three and the genetic fix is in, and the new message is now
being pumped out into the popular press and onto television at a
stupefying rate. Who are the pumps? They are a new breed who call
themselves “evolutionary psychologists.” You can be sure that
twenty years ago the same people would have been calling themselves
Freudian; but today they are genetic determinists, and the press
has a voracious appetite for whatever they come up
with.
The most popular study currently—it is
still being featured on television news
shows—is David Lykken and Auke Tellegen’s study at the University
of Minnesota of two thousand twins that shows, according to these
two evolutionary psychologists, that an individual’s happiness is
largely genetic. Some people are hardwired to be happy and some are
not. Success (or failure) in matters of love, money, reputation, or
power is transient stuff; you soon settle back down (or up) to the
level of happiness you were born with genetically. Fortune devoted a long takeout, elaborately illustrated,
to a study by evolutionary psychologists at Britain’s University of
Saint Andrews showing that you judge the facial beauty or
handsomeness of people you meet not by any social standards of the
age you live in but by criteria hardwired in your brain from the
moment you were born. Or, to put it another way, beauty is not in
the eye of the beholder but embedded in his genes. In fact, today,
in the year 2000, if your appetite for newspapers, magazines, and
television is big enough, you will quickly get the impression that
there is nothing in your life, including the fat content of your
body, that is not genetically predetermined. If I may mention just
a few things the evolutionary psychologists have illuminated for me
recently:
One widely publicized study found that
women are attracted to rich or powerful men because they are
genetically hardwired to sense that alpha males will be able to
take better care of their offspring. So if a woman’s current husband
catches her with somebody better than he is, she can say in all
sincerity, “I’m just a lifeguard in the gene pool, honey.”
Personally, I find that reassuring. I used to be a cynic. I thought
the reason so many beautiful women married ugly rich men was that
they were schemers, connivers, gold diggers. Another study found
that the male of the human species is genetically hardwired to be
polygamous, i.e., unfaithful to his legal mate, so that he will
cast his seed as widely as humanly possible. Well … men can read,
too! “Don’t blame me, honey. Four hundred thousand years of
evolution made me do it.” Another study showed that most murders
are the result of genetically hardwired compulsions. Well …
convicts can read, too, and hoping for parole, they report to the
prison psychiatrist: “Something came over me … and then the knife
went in.” Another showed that teenage
girls, being in the prime of their fecundity, are genetically
hardwired to be promiscuous and are as helpless to stop themselves
as minks or rabbits. Some public school systems haven’t had to be
told twice. They provide not only condoms but also special
elementary, junior high, and high schools where teenage mothers can
park their offspring in nursery rooms while they learn to read
print and do sums.
Where does that leave “self-control”?
In quotation marks, like many old-fashioned notions—once people
believe that this ghost in the machine, “the self,” does not even
exist and brain imaging proves it, once and for all.
So far, neuroscientific theory is based
largely on indirect evidence, from studies of animals or of how a
normal brain changes when it is invaded (by accidents, disease,
radical surgery, or experimental needles). Darwin II himself,
Edward O. Wilson, has only limited direct knowledge of the human
brain. He is a zoologist, not a neurologist, and his theories are
extrapolations from the exhaustive work he has done in his
specialty, the study of insects. The French surgeon Paul Broca
discovered Broca’s area, one of the two speech centers of the left
hemisphere of the brain, only after one of his patients suffered a
stroke. Even the PET scan and the PET reporter gene/PET reporter
probe are technically medical invasions, since they require the
injection of chemicals or viruses into the body. But they offer
glimpses of what the noninvasive imaging of the future will
probably look like. A neuroradiologist can read a list of topics
out loud to a person being given a PET scan, topics pertaining to
sports, music, business, history, whatever, and when he finally
hits one the person is interested in, a particular area of the
cerebral cortex actually lights up on the screen. Eventually, as
brain imaging is refined, the picture may become as clear and
complete as those see-through exhibitions, at auto shows, of the
inner workings of the internal combustion engine. At that point it
may become obvious to everyone that all we are looking at is a
piece of machinery, an analog chemical computer, that processes
information from the environment. “All,” since you can look and
look and you will not find any ghostly self inside, or any mind, or
any soul.
Thereupon, in the year 2010 or 2030,
some new Nietzsche will step forward to announce: “The self is
dead”—except that being prone to the poetic, like Nietzsche the
First, he will probably say: “The soul is dead.” He will say that
he is merely bringing the news, the news of the greatest event of
the millennium: “The soul, that last refuge of values, is dead,
because educated people no longer believe it exists.” Unless the
assurances of the Wilsons and the Dennetts and the Dawkinses also
start rippling out, the madhouse that will ensue may make the
phrase “the total eclipse of all values” seem tame.
If I were a college student today, I don’t think I could resist going into neuroscience. Here we have the two most fascinating riddles of the twenty-first century: the riddle of the human mind and the riddle of what happens to the human mind when it comes to know itself absolutely. In any case, we live in an age in which it is impossible and pointless to avert your eyes from the truth.
Ironically, said Nietzsche, this
unflinching eye for truth, this zest for skepticism, is the legacy
of Christianity (for complicated reasons that needn’t detain us
here). Then he added one final and perhaps ultimate piece of irony
in a fragmentary passage in a notebook shortly before he lost his
mind (to the late nineteenth century’s great venereal scourge,
syphilis). He predicted that eventually modern science would turn
its juggernaut of skepticism upon itself, question the validity of
its own foundations, tear them apart, and self-destruct. I thought
about that in the summer of 1994, when a group of mathematicians
and computer scientists held a conference at the Santa Fe Institute
on “Limits to Scientific Knowledge.” The consensus was that since
the human mind is, after all, an entirely physical apparatus, a
form of computer, the product of a particular genetic history, it
is finite in its capabilities. Being finite, hardwired, it will
probably never have the power to comprehend human existence in any
complete way. It would be as if a group of dogs were to call a
conference to try to understand The Dog. They could try as hard as
they wanted, but they wouldn’t get very far. Dogs can communicate
only about forty notions, all of them primitive, and they can’t
record anything. The project would be doomed from the start. The
human brain is far superior to the dog’s, but it is limited
nonetheless. So any hope of human beings arriving at some final,
complete, self-enclosed theory of human existence is doomed,
too.
This, science’s Ultimate Skepticism,
has been spreading ever since then. Over the past two years even
Darwinism, a sacred tenet among American scientists for the past
seventy years, has been beset by … doubts. Scientists—not
religiosi—notably the mathematician David Berlinski (“The Deniable
Darwin,” Commentary, June 1996) and the
biochemist Michael Behe (Darwin’s Black Box,
1996) have begun attacking Darwinism as a mere theory, not a
scientific discovery, a theory woefully unsupported by fossil
evidence and featuring, at the core of its logic, sheer mush.
(Dennett and Dawkins, for whom Darwin is the Only Begotten, the
Messiah, are already screaming. They’re beside themselves, utterly
apoplectic. Wilson, the giant, keeping his cool, has remained above
the battle.) Noam Chomsky has made things worse by pointing out
that there is nothing even in the highest apes remotely comparable
to human speech, which is in turn the basis of recorded memory and,
therefore, everything from skyscrapers and missions to the moon to
Islam and little matters such as the theory of evolution. He says
it’s not that there is a missing link; there is nothing to link up
with. By 1990 the physicist Petr Beckmann of
the University of Colorado had already begun going after Einstein.
He greatly admired Einstein for his famous equation of matter and
energy, E=mc², but called his theory of
relativity mostly absurd and grotesquely untestable. Beckmann died
in 1993. His Fool Killer’s cudgel has been taken up by Howard
Hayden of the University of Connecticut, who has many admirers
among the upcoming generation of Ultimately Skeptical young
physicists. The scorn the new breed heaps upon quantum mechanics
(“has no real-world applications” … “depends entirely on goofball
equations”), Unified Field Theory (“Nobel worm bait”), and the Big
Bang Theory (“creationism for nerds”) has become withering. If only
Nietzsche were alive! He would have relished every minute of
it!
Recently I happened to be talking to a
prominent California geologist, and she told me: “When I first went
into geology, we all thought that in science you create a solid
layer of findings, through experiment and careful investigation,
and then you add a second layer, like a second layer of bricks, all
very carefully, and so on. Occasionally some adventurous scientist
stacks the bricks up in towers, and these towers turn out to be
insubstantial and they get torn down, and you proceed again with
the careful layers. But we now realize that the very first layers
aren’t even resting on solid ground. They are balanced on bubbles,
on concepts that are full of air, and those bubbles are being burst
today, one after the other.”
I suddenly had a picture of the entire
astonishing edifice collapsing and modern man plunging headlong
back into the primordial ooze. He’s floundering, sloshing about,
gulping for air, frantically treading ooze, when he feels something
huge and smooth swim beneath him and boost him up, like some
almighty dolphin. He can’t see it, but he’s much impressed. He
names it God.