Leaders
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.
In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji. These models stand to transform humans’ relationship with computers, knowledge and even with themselves (see Essay).
Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?
In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.

Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.
The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the call for a pause, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.