Our driverless dilemma
When should your car be willing to kill you?
By Joshua D. Greene
Department of Psychology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
PERSPECTIVES • 24 JUNE 2016 • VOL 352 ISSUE 6293
Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger.
On page 1573 of this issue, Bonnefon et al. (1) explore this social dilemma in
a series of clever survey experiments. They
show that people generally approve of cars
programmed to minimize the total amount
of harm, even at the expense of their passen-
gers, but are not enthusiastic about riding in
such “utilitarian” cars—that is, autonomous
vehicles that are, in certain emergency situ-
ations, programmed to sacrifice their pas-
sengers for the greater good. Such dilemmas
may arise infrequently, but once millions
of autonomous vehicles are on the road,
the improbable becomes probable, perhaps
even inevitable. And even if such cases never
arise, autonomous vehicles must be pro-
grammed to handle them. How should they
be programmed? And who should decide?
Bonnefon et al. explore many interesting
variations, such as how attitudes change
when a family member is on board or when
the number of lives to be saved by swerving
gets larger. As one might expect, people are
even less comfortable with utilitarian sac-
rifices when family members are on board
and somewhat more comfortable when sac-
rificial swerves save larger numbers of lives.
But across all of these variations, the social
dilemma remains robust. A major determi-
nant of people’s attitudes toward utilitar-
ian cars is whether the question is about
utilitarian cars in general or about riding in one.
In light of this consistent finding, the au-
thors consider policy strategies and pitfalls.
They note that the best strategy for utilitar-
ian policy-makers may, ironically, be to give
up on utilitarian cars. Autonomous vehicles
are expected to greatly reduce road fatalities
(2). If that proves true, and if utilitarian cars
are unpopular, then pushing for utilitarian
cars may backfire by delaying the adoption
of generally safer autonomous vehicles.
As the authors acknowledge, attitudes
toward utilitarian cars may change as na-
tions and communities experiment with
different policies. People may get used to
utilitarian autonomous vehicles, just as
some Europeans have grown accustomed
to opt-out organ donation programs (3)
and Australians have grown accustomed
to stricter gun laws (4). Likewise, attitudes
may change as we rethink our transpor-
tation systems. Today, cars are beloved
personal possessions, and the prospect
of being killed by one’s own car may feel
like a personal betrayal to be avoided at all
costs. But as autonomous vehicles take off,
car ownership may decline as people tire
of paying to own vehicles that stay parked
most of the time (5). The cars of the future
may be interchangeable units within vast
transportation systems, like the cars of to-
day’s subway trains. As our thinking shifts
from personal vehicles to transportation
systems, people might prefer systems that
maximize overall safety.
In their experiments, Bonnefon et al.
assume that the autonomous vehicles’
emergency algorithms are known and that
their expected consequences are trans-
parent. This need not be the case. In fact,
the most pressing issue we face with re-
spect to autonomous vehicle ethics may be
transparency. Life-and-death trade-offs are
unpleasant, and no matter which ethical
principles autonomous vehicles adopt, they
will be open to compelling criticisms, giving
manufacturers little incentive to publicize
their operating principles. Manufacturers
of utilitarian cars will be criticized for their
willingness to kill their own passengers.
Manufacturers of cars that privilege their
own passengers will be criticized for devalu-
ing the lives of others and their willingness
to cause additional deaths. Tasked with sat-
isfying the demands of a morally ambiva-
lent public, the makers and regulators of
autonomous vehicles will find themselves
in a tight spot.
Software engineers—unlike politicians,
philosophers, and opinionated uncles—
don’t have the luxury of vague abstraction.
They can’t implore their machines to respect
people’s rights, to be virtuous, or to seek jus-
tice—at least not until we have moral theo-
ries or training criteria sufficiently precise
to determine exactly which rights people
have, what virtue requires, and which trade-offs are just. We can program autonomous vehicles to minimize harm, but that, apparently, is not something with which we are entirely comfortable.
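To see why "minimize harm" is the one principle an engineer can actually write down, consider a minimal sketch (entirely hypothetical; the names and numbers are illustrative assumptions, not any manufacturer's actual algorithm). Once each candidate trajectory is assigned an expected-harm score, a strictly utilitarian controller is a single line: pick the minimum, with no special weight for the passenger.

```python
# A hypothetical sketch of a "utilitarian" emergency controller: score each
# candidate trajectory by expected fatalities, then choose the minimum,
# even when the minimizing trajectory sacrifices the car's own passenger.

from dataclasses import dataclass


@dataclass
class Trajectory:
    name: str
    expected_deaths: float   # expected fatalities along this path
    harms_passenger: bool    # whether the car's own occupant is at risk


def utilitarian_choice(options: list[Trajectory]) -> Trajectory:
    """Pick the trajectory with the fewest expected deaths,
    giving the passenger no special weight."""
    return min(options, key=lambda t: t.expected_deaths)


# A stylized version of the swerve-or-stay dilemma:
stay = Trajectory("stay on course", expected_deaths=5.0, harms_passenger=False)
swerve = Trajectory("swerve into wall", expected_deaths=1.0, harms_passenger=True)

chosen = utilitarian_choice([stay, swerve])
# The strictly utilitarian controller swerves, sacrificing its passenger.
```

The precision this demands is exactly the point: vaguer principles such as "respect rights" or "be virtuous" offer no comparable scoring function, and any weight the code gives the passenger over a bystander must be chosen as an explicit number.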
Bonnefon et al. show us, in yet another
way, how hard it will be to design autono-
mous machines that comport with our
moral sensibilities (6–8). The problem, it
seems, is more philosophical than technical.
Before we can put our values into machines,
we have to figure out how to make our val-
ues clear and consistent. For 21st-century
moral philosophers, this may be where the
rubber meets the road.
REFERENCES
1. J.-F. Bonnefon et al., Science 352, 1573 (2016).
2. P. Gao, R. Hensley, A. Zielke, A Road Map to the Future for the Auto Industry (McKinsey & Co., Washington, DC, 2014).
3. E. J. Johnson, D. G. Goldstein, Science 302, 1338 (2003).
4. S. Chapman et al., Injury Prev. 12, 365 (2006).
5. D. Neil, "Could self-driving cars spell the end of car ownership?", Wall Street Journal, 1 December 2015; www.wsj.
6. I. Asimov, I, Robot [stories] (Gnome, New York, 1950).
7. W. Wallach, C. Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford Univ. Press, 2010).
8. P. Lin, K. Abney, G. A. Bekey, Eds., Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press, 2011).
[Figure caption] Should autonomous vehicles protect their passengers or minimize the total amount of harm?