The Economist, April 22nd 2023
Essay: Artificial intelligences
mans and their institutions, and which had interests
that were not aligned with those of humankind, would
be a dangerous place.
It became common for people within and around the field to say that there was a "nonzero" chance of the development of superhuman AIs leading to human extinction. The remarkable boom in the capabilities of large language models (LLMs), "foundational" models and related forms of "generative" AI has propelled these discussions of existential risk into the public imagination and the inboxes of ministers.
As the special Science section in this issue makes clear, the field's progress is precipitate and its promise immense. That brings clear and present dangers which need addressing (see Leader). But in the specific context of GPT-4, the LLM du jour, and its generative ilk, talk of existential risks seems rather absurd. They produce prose, poetry and code; they generate images, sound and video; they make predictions based on patterns. It is easy to see that those capabilities bring with them a huge capacity for mischief. It is hard to imagine them underpinning "the power to control civilisation", or to "replace us", as hyperbolic critics warn.
Love song
But the lack of any "Minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic [drawing] their plans against us", to quote H.G. Wells, does not mean that the scale of the changes that AI may bring with it can be ignored or should be minimised. There is much more to life than the avoidance of extinction. A technology need not be world-ending to be world-changing.
The transition into a world filled with computer programs capable of human levels of conversation and language comprehension and superhuman powers of data assimilation and pattern recognition has just begun. The coming of ubiquitous pseudocognition along these lines could be a turning point in history even if the current pace of AI progress slackens (which it might) or fundamental developments have been tapped out (which feels unlikely). It can be expected to have implications not just for how people earn their livings and organise their lives, but also for how they think about their humanity.
For a sense of what may be on the way, consider three possible analogues, or precursors: the browser, the printing press and the practice of psychoanalysis. One changed computers and the economy, one changed how people gained access to and related to knowledge, and one changed how people understood themselves.
The humble web browser, introduced in the early 1990s as a way to share files across networks, changed the ways in which computers are used, the way in which the computer industry works and the way information is organised. Combined with the ability to link computers into networks, the browser became a window through which first files and then applications could be accessed wherever they might be located. The interface through which a user interacted with an application was separated from the application itself.
The power of the browser was immediately obvious. Fights over how hard users could be pushed towards a particular browser became a matter of high commercial drama. Almost any business with a web address could get funding, no matter what absurdity it promised. When boom turned to bust at the turn of the century there was a predictable backlash. But the fundamental separation of interface and application continued. Amazon, Meta (née Facebook) and Alphabet (née Google) rose to giddy heights by making the browser a conduit for goods, information and human connections. Who made the browsers became incidental; their role as a platform became fundamental.
The months since the release of OpenAI's ChatGPT, a conversational interface now powered by GPT-4, have seen an entrepreneurial explosion that makes the dotcom boom look sedate. For users, apps based on LLMs and similar software can be ludicrously easy to use: type a prompt and see a result. For developers it is not that much harder. "You can just open your laptop and write a few lines of code that interact with the model," explains Ben Tossell, a British entrepreneur who publishes a newsletter about AI services.
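Those "few lines of code" can amount to little more than building one JSON payload and sending one authenticated HTTP request. A minimal sketch, assuming the shape of OpenAI's chat-completions REST API as documented in 2023 (the helper function and the prompt are hypothetical):

```python
import json
import urllib.request

def build_chat_request(prompt, model="gpt-4"):
    """Assemble the JSON payload for a chat-completion call.

    Hypothetical helper; the payload shape follows OpenAI's
    chat-completions API as documented in 2023."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarise this essay in one sentence.")

# Sending it is a single authenticated POST (requires an API key),
# roughly along these lines:
# req = urllib.request.Request(
#     "https://api.openai.com/v1/chat/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Authorization": "Bearer <YOUR_KEY>",
#              "Content-Type": "application/json"},
# )
# reply = json.load(urllib.request.urlopen(req))
# print(reply["choices"][0]["message"]["content"])
```

The barrier to entry, in other words, is a text editor and an API key.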
And the LLMs are increasingly capable of helping with that coding, too. Having been "trained" not just on reams of text, but on lots of code, they contain the building blocks of many possible programs; that lets them act as "copilots" for coders. Programmers on GitHub, an open-source coding site, are now using a GPT-4-based copilot to produce nearly half their code.
There is no reason why this ability should not eventually allow LLMs to put code together on the fly, explains Kevin Scott, Microsoft's chief technology officer. The capacity to translate from one language to another includes, in principle and increasingly in practice, the ability to translate from language to code. A prompt written in English can in principle spur the production of a program that fulfils its requirements. Where browsers detached the user interface from the software application, LLMs are likely to dissolve both categories. This could mark a fundamental shift in both the way people use computers and the business models within which they do so.