This is the unedited version of an article which appeared in Whole
Earth Review, 81:96-98 (Winter 1993).  Copyright 1993, William
H. Calvin.  Copies may be reproduced for personal use; for other
uses, contact the author at [email protected]


       Cautions on the Superhuman Transition 

                  William H. Calvin



William H. Calvin is a neurophysiologist at the University of
Washington.  He is the author of such books as _The River that
Flows Uphill:  A Journey from the Big Bang to the Big Brain_
(Sierra Club Books, 1987) and co-author of the forthcoming book
_Conversations with Neil's Brain:  The Neural Nature of Thought
and Language_ (Addison-Wesley, 1994).
                  E-mail:  [email protected]


Machine intelligence will have profound effects when a computer
begins to converse like a human, even engaging in social chats. 
Lots of humanlike behaviors will be missing in this first-order
approximation, but even a partial workalike will set in motion one
of those historical transitions -- after which nothing is the same. 
Perhaps it won't qualify as a singularity (an instant shift into totally
unpredictable consequences) but we surely have a major transition
coming up in the next several generations of humankind, and it
needs discussing now.
      As a neurophysiologist interested in how the circuitry of
human cerebral cortex allows us to construct sentences and
speculate about tomorrow, I suspect that "downloading" of an
individual's brain to a workalike computer is unlikely to work;
dementia, psychosis, and seizures are all too likely.  But on the
basic question of whether we can build a computer that talks like a
human, is as endearing as our pets, and thinks in metaphor and
multiple levels of abstraction -- there, I think it will be relatively easy to
construct a first-order workalike that reasons, categorizes, and
understands speech.  We'll even be able to make it run on
principles closely analogous to those used in our brains (1).  I can
already see one way of doing this, extrapolating from known wiring
principles of human cerebral cortex, and there might be _ad hoc_
ways of doing it too, e.g., AI.
      Even the first-order workalike will be recognizably
"conscious," likely as self-centered as we are.  And I don't mean
trivial aspects of consciousness such as aware, awake, sensitive, and
arousable.  It will likely include focussing attention, mental
rehearsal, abstraction, imagery, subconscious processing, "what-if"
planning, decision making -- and especially the narratives we
humans tell ourselves when awake or dreaming.  To the extent that
such functions can operate far faster than they do in our own
millisecond-scale brains, we'll see an aspect of "superhuman"
emerging from the "workalike."
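      For scale, a quick back-of-the-envelope comparison (the round
numbers below are assumptions for illustration, not figures from this
article):  neurons take on the order of a millisecond per elementary
event, while silicon gates switch in about a nanosecond, suggesting a
raw speed advantage on the order of a million-fold.

    # Assumed round numbers, for illustration only -- not measurements:
    neuron_step = 1e-3     # ~seconds per elementary neural event
    silicon_step = 1e-9    # ~seconds per elementary silicon switch
    print(neuron_step / silicon_step)   # => 1e6-fold raw speedup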
      But that's the easy part, just extrapolation of existing trends
in computing technology, AI, and neurophysiological understanding. 
There are at least three hard parts.
                       ------
One hard part will be to make sure it fits into an ecology composed
of animal species.  Such as us.
      Especially us.  That's because competition is most intense
between closely related species; it is why none of our
Australopithecine and _Homo erectus_ cousins are still around, and
why only two omnivorous ape species have survived (the chimpanzee
and the bonobo).  Our more immediate ancestors probably wiped out
the others as competitors.
      "To keep every wheel and cog," said Aldo Leopold in 1948
(2), "is the first precaution of intelligent tinkering."  Introducing a
powerful new species into the ecosystem is not a step to be taken
lightly.
      When automation rearrangements occur so gradually that no
one starves, they are often beneficial.  Everyone used to gather or
hunt their own food, but agricultural technologies have gradually
reduced the percentage of our population that farms to about 3
percent.  And that's freed up many people to spend their time at
other pursuits.  The relative mix of those "occupations" changes
over time, as in the shift from manufacturing jobs to service jobs in
recent decades.
      Workalikes will change it even more, displacing even some
of the more educated workers.  But there would be some significant
benefits to humans:  imagine a superhuman teaching machine as a
teacher's assistant, one that could hold actual conversations with
the student, never get bored with drilling, always remember to
provide the variety needed to keep the student interested, tailor its
offerings to the student's particular needs, and routinely scan for
signs of a developmental disorder, say in reading or attention span.
      Silicon superhumans could also apply such talents to
teaching the next generation of superhumans, evolving still smarter
ones just by variation and selection:  after all, their star silicon
pupil could be cloned.  Each offspring would be educated
somewhat differently thereafter.  With varied experiences, some
might have desired traits, _values_ such as sociability or concern
for human welfare.  Again we could select the star pupil for
cloning.  Since the copying includes memories to date (that's the
advantage of intelligence _in silico_; you can include readout
capabilities for use in cloning), experience would be cumulative,
truly Lamarckian:  the offspring wouldn't have to repeat their
parent's mistakes.
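      To make that scheme concrete, here is a minimal sketch in
Python of the clone-and-select loop just described; the class, the
curricula, and the toy fitness score are hypothetical illustrations,
not anything specified in this article.

    import copy
    import random

    # Hypothetical curricula standing in for "varied experiences."
    CURRICULA = ["ethics drills", "social-chat practice", "risk training"]

    class Workalike:
        """Toy silicon pupil; copying it carries all memories forward."""
        def __init__(self, memories=None):
            self.memories = list(memories or [])

        def educate(self, curriculum):
            # Each offspring is educated somewhat differently thereafter.
            self.memories.append(curriculum)

    def evolve(founder, generations, brood_size, score):
        """Clone the star pupil, vary the schooling, select again.

        score() is an assumed measure of desired traits (sociability,
        concern for human welfare).  Because deepcopy includes memories
        to date, experience is cumulative -- Lamarckian, not genetic.
        """
        star = founder
        for _ in range(generations):
            brood = [copy.deepcopy(star) for _ in range(brood_size)]
            for pupil in brood:
                pupil.educate(random.choice(CURRICULA))
            star = max(brood, key=score)  # select the star pupil to clone
        return star

    # Toy usage; a real "desired traits" score would be far richer.
    best = evolve(Workalike(), generations=10, brood_size=5,
                  score=lambda w: len(w.memories))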
                       ------
Values are the second hard part, agreeing on them and
implementing them _in silico._
      The first-order workalikes will be totally amoral, just raw
intelligence and language ability.  They won't even come with the
inherited qualities that make our pets safe to be around.  We
humans tend to be treated by our pets as either their mother (in the
case of cats) or as their pack leader (in the case of dogs); they defer
to us.  This cognitive confusion on their part allows us humans to
benefit from their inborn social behaviors.
      How do we build in safeguards, especially something as
abstract as Asimov's Laws of Robotics or Good's Meta-Golden
Rule?  My guess is that it will require a lot of the star-pupil
cloning.  This gradual evolution over many superhuman generations
might partially substitute for biological inheritance at birth, perhaps
minimizing sociopathic tendencies in silicon superhumans and
limiting their risk-taking behaviors.
      If that's true, it will take many decades to get from raw
intelligence (that first-order workalike) to a safe-without-constant-
supervision superhuman.  The early models could be smart and
talkative without being cautious or wise, a very risky combination,
potentially sociopathic.  They'd have the top-end abilities without
their well-tested evolutionary predecessors as the underpinning.
                       ------
The third hard part is moderating the reactions of humanity to the
perceived challenge.  Just as an overenthusiastic reaction by your
immune system to a challenge can cripple you via allergies and
autoimmune diseases (and perhaps kill you via anaphylactic shock),
so human reactions to silicon superhumans could create enormous
strains in our present civilization.  A serious reaction, once
workalikes were already playing a significant role in the economy,
could disrupt the system that allows the farmers to support the other
97 percent of us.  Remember that famines kill because the
distribution system fails, not because there isn't enough food grown
somewhere in the world.
      The Luddites and _sabots_ of the 21st century will be aided
by some very basic features of human ethology, ones which played
little role in 19th-century Europe.  Groups try to distinguish
themselves from others.  Despite the benefits of a common
language, most tribes in history have exaggerated linguistic
differences with their neighbors so as to tell friend from foe.  You
can be sure that the Turing Test will be in regular use, with people
trying to tell whether a real human bureaucrat is at the other end of
the phone line.  Machines could be required to speak in a
characteristic voice to dampen this vigilance, but that won't prevent
"us and them" tensions.
      Workalikes and superhumans could also be restricted to
certain "occupations."  Their entry into other areas could be subject
to an evaluation process that carefully tested a new model against a
sample of real human society.  When the potential for serious side
effects is so great, and the rate of introduction so potentially rapid,
we would be well advised to adopt procedures similar to those the
FDA uses to test new drugs and medical instruments for efficacy,
safety, and side effects.  This doesn't slow the development of the
technology so much as it slows its widespread use and allows
retreats.
      Workalikes might be restricted to a limited sphere of
interactions; to use the Internet or telephone networks, they might
require stringent licensing.  There might be a one-day delay rule for
distributing output from superhumans that only had a beginner's
license, to address some of the "program trading" hazards.  For
some, we might want the computer equivalent of our P4
containments for replicating viruses.
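      As a rough illustration of such rules, the Python sketch below
gates a workalike's output by license class and releases it only after
a holding period; the license classes, delay lengths, and names are
hypothetical, invented for this example only.

    import heapq
    import time

    # Assumed holding periods per license class, in seconds.
    DELAYS = {"beginner": 24 * 3600, "full": 0}

    class ReleaseGate:
        """Hold a workalike's output until its delay period elapses."""
        def __init__(self):
            self._queue = []   # min-heap of (release_time, message)

        def submit(self, message, license_class):
            release_at = time.time() + DELAYS[license_class]
            heapq.heappush(self._queue, (release_at, message))

        def releasable(self):
            # Drain every message whose holding period has elapsed.
            out = []
            while self._queue and self._queue[0][0] <= time.time():
                out.append(heapq.heappop(self._queue)[1])
            return out

    gate = ReleaseGate()
    gate.submit("trade order", "beginner")   # held for a day
    gate.submit("status report", "full")     # released at once
    print(gate.releasable())                 # ['status report']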
                       ------
It does start to raise the question:  "Just what _is_ the proper
business of this society of ours?"  Making humans "all they can be"
by removing shackles and optimizing upbringing?  Or making
computers better than humans?  Maybe we can do both (as in those
teacher's assistants), but during our headlong rush to superhumans
-- a major form of tinkering -- we need to protect humanity.
      The ways that we could introduce caution are, however,
constrained by the various drives that are leading us to this
intelligence transition:
o  Curiosity is my primary motivation -- How does intelligence
      come about? -- and surely that of many computer scientists. 
      But even if curiosity for its own sake were somehow
      hobbled (as various religions have attempted), other drives
      lead us in the same direction.
o  "It takes all the running you can do," said the Red Queen to
      Alice, "to keep in the same place."  If we don't improve the
      technology, someone else will.  Historically, losing
      technological races has often meant being taken over (or
      eliminated) by your competitor -- and on the scale of
      nations, not just companies (3).  Given those growth curves
      in MIPS and megabytes over the last several decades, the
      rest of the world probably wouldn't slow down even if the
      majority decided to do so.
o  Serious threats demand the development of huge computing
      resources anyway.  For example, our climate is now known
      to "shift gears" in only a matter of a few years (4), probably
      via rearranging the ocean currents.  Many times in the past,
      Europe has suddenly switched from its present climate of
      winter warmth (thanks to a preheated North Atlantic Ocean)
      to a cold and dry climate like Canada's.  Europe's
      agriculture would then feed only one person in 25.  Such a
      flip now (and global warming appears to make a flip more
      likely, not less) would set off World War Three as everyone
      (and not just the Europeans) struggled for _Lebensraum_.  It
      is urgent that we understand how to manipulate those
      climatic gearshifts for our own survival.  The big machines
      needed for global climatic modeling are very similar to what
      one needs for simulating brain processes.
I don't see realistic ways of "buying time" to make this superhuman
transition at a more deliberate pace.  And so the problems of
superintelligent machines will simply need to be faced head-on in
the next several decades, not somehow postponed by slowing
technological progress itself.  An Asilomar-like conference, like the
one convened when genetic engineering was getting underway in
the 1970s, will probably mark the beginning of a serious response
to these challenges.  Sociologists, sociobiologists, philosophers,
psychologists, historians, primate ethologists, evolutionary theorists,
cognitive neuroscientists, and science-fiction authors probably have
more expertise in the three "hard parts" than do the people building
the machines.
      Our civilization will, of course, be "playing God" in an
ultimate sense of the phrase:  evolving a greater intelligence than
currently exists on earth.  It behooves us to be a considerate
creator, wise to the world and its fragile nature, sensitive to the
need for stable footings that prevent backsliding.  Or collapse.
1.    William H. Calvin, _The Cerebral Symphony:  Seashore
Reflections on the Structure of Consciousness_ (Bantam 1989).

2.    Aldo Leopold, _A Sand County Almanac_, p. 190. 

3.    Paul Colinvaux, _The Fates of Nations_ (Penguin 1982).

4.    William H. Calvin, _The Ascent of Mind:  Ice Age
Climates and the Evolution of Intelligence_ (Bantam 1990).  Also
"Greenhouse and Icehouse:  Might catastrophic cooling be triggered
by global warming?"  _Whole Earth Review_ 73:106-111 (1991).

                       --end--