The Economist, April 22nd–28th 2023
But even in a scenario where LLMs stopped improving this year, and a blockbuster lawsuit drove OpenAI to bankruptcy, the power of large language models would remain. The data and the tools to process it are widely available, even if the sheer scale achieved by OpenAI remains expensive. Open-source implementations, when trained carefully and selectively, are already aping the performance of GPT-4. This is a good thing: having the power of LLMs in many hands means that many minds can come up with innovative new applications, improving everything from medicine to the law.

But it also means that the catastrophic risk which keeps the tech elite up at night has become more imaginable. LLMs are already incredibly powerful and have improved so quickly that many of those working on them have taken fright. The capabilities of the biggest models have outrun their creators’ understanding and control. As the next article explains, that creates risks of all kinds.
[Chart: Faster, higher, more calculations — computing power used in training AI systems. Selected systems, in floating-point operations, on a log scale from 10^5 to 10^25, spanning 1950 to 2023; labelled systems include Theseus, Transformer, GPT-3, Stable Diffusion and GPT-4, classed as academia, research consortium or industry. Sources: Sevilla et al., 2023; Our World in Data]
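As a rough sketch of what the chart’s log scale implies: reading the axes as running from roughly 10^5 floating-point operations around 1950 (Theseus) to roughly 10^25 in 2023 (GPT-4) — endpoint readings that are assumptions from the chart, not exact published figures — the implied average doubling time of training compute follows from simple arithmetic:

```python
import math

# Hypothetical endpoint readings from the chart's axes (assumptions,
# not exact figures): ~1e5 FLOP around 1950, ~1e25 FLOP in 2023.
flop_start, year_start = 1e5, 1950
flop_end, year_end = 1e25, 2023

orders_of_magnitude = math.log10(flop_end / flop_start)  # 20 orders of magnitude
doublings = orders_of_magnitude * math.log2(10)          # ~66 doublings in total
years = year_end - year_start                            # 73 years elapsed
doubling_time_years = years / doublings                  # ~1.1 years per doubling

print(f"{orders_of_magnitude:.0f} orders of magnitude over {years} years")
print(f"implied average doubling time: {doubling_time_years:.2f} years")
```

On these assumed endpoints, training compute has doubled roughly every year on average — though the chart makes clear the pace was far slower before the deep-learning era and far faster after it.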