The Economist, April 22nd-28th 2023

Extinction? Rebellion?
The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one (see Schumpeter). Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarising documents and writing code.
The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI's impact would be "extremely bad (eg, human extinction)". But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today's technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future. Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an overreaction. A pause would also be unenforceable.
Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a "light touch" approach with no new rules or regulatory bodies, but applies existing regulations to AI systems. The aim is to boost investment and turn Britain into an "AI superpower". America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like. The EU is taking a tougher line. Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music recommendation to self-driving cars.