Uncertain Causation Encourages Ineffective and Potentially Harmful Regulations
The central problem to be investigated in the remainder of this chapter is how to identify socially beneficial choices when causation is uncertain and the sizes of the benefits caused by alternative choices are therefore unknown. In many important applications, the benefits caused by costly regulations, policy interventions, or investments are uncertain. Whether they exceed the costs may then also be uncertain. In such settings, BCA principles must be modified to deal with risk aversion, rather than considering only expected net benefits. Moreover, uncertainty about causation can encourage regulations that are socially harmful and that would not pass a BCA test that accounts for correlated uncertainties about their effects on many individuals. This creates a need for review and accountability that regulatory agencies are not well equipped to provide for themselves. The following paragraphs develop these points.
Uncertain Causation Encourages Socially Reckless Regulation
To understand how uncertainty about causation can lead to adoption of regulations whose risk-adjusted costs exceed their benefits, suppose that a regulatory ban or restriction on some activity or class of activities, such as emissions of a regulated air pollutant, imposes an expected cost of c on each of N economic agents and yields an uncertain benefit of b for each of them, with the expected value of b denoted by E(b) and its variance denoted by Var(b). If the uncertain benefit b has a normal distribution, then a standard calculation in decision analysis shows that its certainty-equivalent value to a decision maker with an exponential utility function is

CE(b) = E(b) – k*Var(b),
where k is proportional to the decision maker’s risk aversion. To a risk-averse decision maker (i.e., having k > 0), the uncertain benefit is worth less than a certain benefit of size E(b) by the amount of the “risk premium,” k*Var(b). If the uncertainty about b is due to uncertainty about the size of the effect that a policy or regulation would cause, e.g., the size of the reduction in annual mortality risk that would be achieved by a reduction or ban on a source of exposure, and if the size of this uncertain effect is the same for all N agents, then the net social benefit summed over all N agents is
N*E(b) – N²*k*Var(b) – N*c
(since the variance of N times b is N² times the variance of b). This can be written as
N*[E(b) – c – N*k*Var(b)],
showing that the per capita net benefit after adjusting for risk aversion is [E(b) – c – N*k*Var(b)]. This will necessarily be negative if N is sufficiently large, since k*Var(b) > 0. However, a regulatory agency that implements regulations with expected benefits exceeding their expected costs, i.e., with E(b) – c > 0, pays no attention to the risk premium term N*k*Var(b) or to the size of N. It ignores the fact that the individual benefits received are positively correlated, since a regulation that does not cause its intended and expected benefit results in a net loss for each of the N agents at once. For all sufficiently large N, the risk of these N simultaneous losses outweighs the positive expected net benefit, i.e., E(b) – c – N*k*Var(b) is negative even though E(b) – c is positive. (The Arrow-Lind theorem, which shows that government investments should be evaluated in a risk-neutral way, does not apply here because of the correlated losses.) Agencies that focus only on expected benefits can therefore undertake activities or impose regulations whose risk-adjusted values are negative. This is most likely for regulations with widely distributed but uncertain benefits (i.e., large N), such as air pollution or food safety regulations. In essence, if the causal hypothesis that the regulation will create its intended benefits turns out to be wrong, all N agents lose at once; ignoring risk aversion to these correlated losses encourages reckless expenditures and costly regulations that produce large net losses when the expected benefits do not occur.
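To make the arithmetic concrete, here is a minimal numerical sketch of this argument; all parameter values are hypothetical and chosen only to show how the per capita risk-adjusted net benefit E(b) – c – N*k*Var(b) turns negative once N is large, even though E(b) – c stays positive.

```python
# Hypothetical illustration of risk-adjusted net benefits when the same uncertain
# benefit is delivered to N agents, so individual gains and losses are perfectly
# correlated. All numbers are invented for illustration.

E_b = 100.0    # expected benefit per agent
c = 80.0       # cost per agent, so expected net benefit E(b) - c = 20 > 0
var_b = 50.0   # variance of the uncertain per-agent benefit
k = 0.001      # risk-aversion coefficient (exponential utility, normal benefit)

def per_capita_risk_adjusted_net_benefit(n_agents):
    """E(b) - c - N*k*Var(b): per capita net benefit after subtracting the
    risk premium for N perfectly correlated uncertain benefits."""
    return E_b - c - n_agents * k * var_b

for n in (10, 100, 1_000, 10_000):
    print(f"N = {n:>6}: per capita risk-adjusted net benefit = "
          f"{per_capita_risk_adjusted_net_benefit(n):8.1f}")
# Positive for small N, but negative once N*k*Var(b) exceeds E(b) - c = 20
# (here, for N > 400), even though the expected net benefit per agent is always +20.
```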
Warnings from Behavioral Economics and Decision and Risk Psychology: The Tyranny of Misperceptions

Regulatory agencies, as well as courts and corporations, are staffed by human beings with a full array of psychological foibles – heuristics and biases such as those discussed in Chapter 12 – that shape their beliefs and behaviors in addressing uncertain risks. Well-documented weaknesses in individual and group decision-making under uncertainty include forming opinions and taking actions based on too little information (related to Kahneman’s “what you see is all there is” heuristic) (Kahneman, 2011); making one decision at a time in isolation rather than evaluating each in the context of the entire portfolio or stream of decisions to which it belongs (narrow framing); believing what it pays us to believe, what it feels most comfortable to believe, or what fits our ideological world view (motivated reasoning, affect heuristic); seeking and interpreting information to confirm our existing opinions while failing to seek and use potentially disconfirming information (confirmation bias); and being unjustifiably confident in both our judgments and our level of certainty about them (overconfidence bias) (Kahneman, 2011; Schoemaker and Tetlock, 2016; Thaler, 2015). In short, judgments and choices about how to manage risks when the effects of actions are highly uncertain are often shaped by emotions and psychology (“System 1”) far more than by facts, data, and calculations (“System 2”). Such decisions can be passionately advocated, strongly felt to be right, and confidently approved of without being causally effective in producing desired results. These common weaknesses of human judgment and opinion formation have the most opportunity to shape policy when the consequences caused by different choices are least certain.

Regulatory agencies face additional challenges to learning to act effectively under uncertainty, stemming from social and organizational psychology. Consider the following selection mechanism by which people with similar views might assemble into a regulatory agency with an organizational culture prone to groupthink (Coglianese, 2001, p. 106) and holding more extreme perceptions of the risks that they regulate than most of the population. Suppose that people whose ideologies and beliefs suggest that it is highly worthwhile to regulate a substance, activity, or industry X are more likely to work for agencies that regulate X than are people who do not share those beliefs. Then an organizational culture might develop that is inclined to regulate well beyond what most people would consider reasonable or desirable. The result is a tyranny of misperceptions, somewhat analogous to the Winner’s Curse in auctions, in which those whose perceptions are most extreme are most likely to invest the time and effort needed to stimulate regulatory interventions. In this case, when the true hazards caused by a regulated substance or activity are highly uncertain, those who believe them to be worse than most people judge them to be may be disproportionately likely to shape the regulations that restrict them. If average judgments are typically more accurate than extreme ones (Tetlock and Gardner, 2015), such regulations may tend to reflect the misperception, held by those advocating regulation, that risks are higher than they actually are.

Of course, the same self-selection bias can function throughout the political economy of a democracy: those who care most about making a change are most likely to work to do so, whether through advocacy and activism, litigation, regulation, legislation, or journalism directed at influencing perceptions and political actions. But regulatory agencies are empowered to take costly actions to promote social benefits even when the benefits caused by actions are highly uncertain, so that decisions are most prone to System 1 thinking. Moreover, empirical evidence suggests that simply recognizing this problem and making an effort to correct for it, e.g., by instituting internal review procedures, is likely to have limited value: the same heuristics and biases that give rise to a policy are also likely to affect reviews, making misperceptions about risk and biased judgments about how best to manage them difficult to overcome (Kahneman, 2011). External review by people who do not share the same information and world view can be far more valuable in flagging biases, introducing discordant information to consider, and improving the effectiveness of predictions and decisions (Kahneman, 2011; Tetlock and Gardner, 2015).



It is distressing, and perhaps not very plausible a priori, to think that people and organizations that devote themselves to promoting the public interest might inadvertently harm it by falling into the familiar pitfalls of System 1 thinking (Chapter 10) when causation is uncertain. After all, do not well-run organizations anticipate and correct for such limitations by using relatively explicit and objective criteria and rationales for their decisions, well-documented reasoning and data based on peer-reviewed publications, multiple rounds of internal review, and invited critiques and reviews by external experts and stakeholders? Indeed, all of these steps play important roles in modern rule-making and regulatory procedures in the United States and elsewhere. Yet there is evidence that they are not sufficient to guarantee high-quality regulations, or to block regulations for which there is no good reason to expect that the predicted benefits will actually occur. Such regulations are too often “arbitrary and capricious” in the sense that there is no rational connection (although there may be many irrational ones) between the facts presented to support the projected benefits of regulations and the belief that these benefits will actually occur, and hence that the regulations are worthwhile. The following examples illustrate the real-world importance of these concerns. Possible explanations and remedies will then be explored, including judicial review that insists on more rigorous and trustworthy standards of evidence for causality than regulatory agencies customarily use. We will argue that courts are often the “cheapest misperception corrector” and are best positioned to correct regulatory excesses by enforcing a higher standard of causal inference before uncertain benefits are accepted as justifying costly actions.
Example: The Irish Coal-Burning Bans
Recall from Chapter 1 and several subsequent discussions that, between 1990 and 2015, coal-burning was banned by regulators in many parts of Ireland, based on beliefs that the Irish government summarized as follows:
“Benefits of a smoky coal ban include very significant reductions in respiratory problems and indeed mortalities from the effects of burning smoky coal. The original ban in Dublin has been cited widely as a successful policy intervention and has become something of an icon of best practice within the international clean air community. It is estimated that in the region of 8,000 lives have been saved in Dublin since the introduction of the smoky coal ban back in 1990 and further health, environmental and economic benefits (estimated at 53m euro per year) will be realised, if the ban is extended nationwide” (Department of Housing, Planning, Community, and Local Government, 2016).
The underlying scientific studies that led to these beliefs (Clancy et al., 2002), still widely and approvingly cited by regulators and activists, clearly showed that particulate matter levels from coal smoke and mortality rates dropped significantly after the bans. For over a decade, activists, regulators, and the media have celebrated such findings as showing a clear causal link between coal burning and mortality and a clear opportunity to reduce mortality by reducing coal burning, leading to estimates of substantial health and economic benefits from extending the bans nationwide and of substantial unnecessary deaths from delaying the extension (Kelly, 2015).

Yet, as emphasized in Chapters 1 and 2, there is a clear logical fallacy at work here. Although the claimed successes of the bans in reducing mortality might appeal to common sense, wishful thinking, and confirmation bias, no potentially disconfirming evidence that might conflict with this causal conclusion was sought or used in the studies that led to the claim. For example, the original study (Clancy et al., 2002) did not examine whether the drop in mortality following the bans had causes unrelated to the bans, or whether it also occurred in other countries and in areas unaffected by the bans (Wittmaack, 2006). When a team including some of the original investigators examined these possibilities a decade later, long after successive waves of bans had already been implemented, they found no evidence that the bans had actually caused the reductions in total mortality rates that had originally been attributed to them: mortality rates had fallen just as much in areas not affected by the bans as in areas affected by them (Dockery et al., 2013). The bans had no detectable effect on total mortality rates. Rather, mortality rates had been declining over time throughout Ireland and much of the developed world since long before the bans began, and they continued to do so without interruption during and after them. Thus, mortality rates were indeed lower after the bans than before them, even though the bans themselves had no detectable effect on them. (Data-dredging that sought associations of pollution reductions with reductions in cause-specific mortality rates, without controlling for multiple testing bias, found a few associations, but it disconfirmed the original claim of significant reductions in cardiovascular mortality associated with the bans.) If the bans left more elderly people cold in the winter, and thereby increased their mortality rates – a possibility that was not investigated – then this effect was masked by the historical trend of improving life expectancy and falling mortality risks.


In this example, it appears that confirmation bias led to widespread and enthusiastic misinterpretation of an ongoing historical trend of declining elderly mortality rates – brought about largely by improved prevention, diagnosis, and treatment of cardiovascular diseases and by reduced cigarette smoking – as evidence that the coal-burning bans were causally effective in protecting public health. This meme has continued to drive media accounts and regulatory policy to the present, as Ireland pushes to extend the bans nationwide (Kelly, 2015). The finding that the bans actually had no detectable effect in reducing all-cause or cardiovascular mortality (Dockery et al., 2013) continues to be widely ignored. This example illustrates how regulations can be enthusiastically supported and passed based on unsound reasoning about causality, such as neglecting to use control groups in assessing the effects of bans. It also shows how regulations can be perceived and evaluated favorably in retrospect by regulators, activists, environmental scientists, and the media as having been highly successful in creating substantial public health benefits, even if they actually had no beneficial effects.
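The methodological point about control groups can be illustrated with a small difference-in-differences sketch. The areas, dates, and mortality rates below are hypothetical, not data from the Irish studies; the sketch only shows why a before-and-after comparison within the ban areas alone can make a secular decline look like a ban effect.

```python
# Hypothetical difference-in-differences check: did mortality fall MORE in areas
# covered by a ban than in comparable areas without one? All numbers are invented.

# Annual all-cause mortality rates (deaths per 1,000 people) before and after a ban date.
ban_area     = {"before": 9.0, "after": 7.8}   # area covered by the ban
control_area = {"before": 9.1, "after": 7.9}   # comparable area with no ban

change_ban     = ban_area["after"] - ban_area["before"]          # -1.2
change_control = control_area["after"] - control_area["before"]  # -1.2

did_estimate = change_ban - change_control  # change attributable to the ban itself
print(f"Change in ban area:     {change_ban:+.1f} per 1,000")
print(f"Change in control area: {change_control:+.1f} per 1,000")
print(f"Difference-in-differences estimate of the ban effect: {did_estimate:+.1f} per 1,000")
# The before/after drop in the ban area alone (-1.2) looks like a benefit, but the
# control comparison shows the same decline happened without a ban (estimated effect ~0).
```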
Example: Estimated Benefits of Fine Particulate Matter (PM2.5) Regulation in the United States
An analogous process is currently unfolding on a much larger scale in the United States. The majority of total estimated benefits from all Federal regulations in the U.S. are attributed to the effects of Clean Air Act regulations in reducing fine particulate matter (PM2.5) air pollution and thus reducing estimated elderly mortality risks. The United States Environmental Protection Agency (EPA) credits its regulation of fine particulate matter with creating nearly two trillion dollars per year of health benefits (EPA 2011a, 2011b). Yet, notwithstanding widespread impressions and many published claims to the contrary in scientific journals and the news media, it has never been established that reducing air pollution actually causes these benefits, as opposed to being associated with them in a historical context in which both air pollution levels and mortality rates have been declining over time. As the EPA’s own benefits assessment states in a table, the “analysis assumes a causal relationship between PM exposure and premature mortality based on strong epidemiological evidence… However, epidemiological evidence alone cannot establish this causal link” (EPA, 2011b, Table 5-11). The reason that the epidemiological evidence cannot establish the assumed causal link is that it deals only with association, and not with causation, as detailed in Chapter 2.

In the absence of an established causal relation, historical data showing that PM2.5 and mortality rates are both higher than average in some places and times (e.g., during cold winter days compared to milder days, or in earlier decades compared to later ones), and thus that they are positively correlated, are widely treated as if they were evidence of causation. This again illustrates the practical importance of confirmation bias in shaping perceptions and economic evaluations of the benefits attributed to (but not necessarily caused by) major regulations by those advocating them. Potential disconfirming evidence, such as the finding that mortality risks declined just as much in cities and counties where pollution levels increased as where they decreased (see Chapter 10), has been neither sought nor used by advocates of PM2.5 reductions in attributing health benefits to such reductions. As pointed out recently, “Many studies have reported the associations between long-term exposure to PM2.5 and increased risk of death. However, to our knowledge, none has used a causal modeling approach” (Wang et al., 2016). The relatively rare exceptions that report positive causal relations rest on unverified modeling assumptions to interpret associations causally, as discussed in greater detail later. Approaches that seek to avoid making such assumptions by using nonparametric analyses of whether changes in exposure concentrations predict changes in mortality rates have concluded that “A causal relation between pollutant concentrations and [all-cause or cardiovascular disease] mortality rates cannot be inferred from these historical data, although a statistical association between them is well supported” (Cox and Popken, 2015) and that, for 100 U.S. cities with historical data on PM2.5 and mortality, “we find no evidence that reductions in PM2.5 concentrations cause reductions in mortality rates” (Cox et al., 2013).
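The distinction between correlated levels and predictive changes can be made concrete with a small simulation. The sketch below uses simulated data (not the data of the cited studies) in which PM2.5 and mortality both trend downward but are causally unrelated: the pooled levels are strongly correlated, while year-over-year changes in one carry essentially no information about changes in the other. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_years = 100, 15
years = np.arange(n_years)

# Simulated downward trends in both series, with NO causal link between them.
pm25 = 15.0 - 0.4 * years + rng.normal(0.0, 1.0, (n_cities, n_years))  # ug/m3
mort = 9.5 - 0.1 * years + rng.normal(0.0, 0.3, (n_cities, n_years))   # deaths per 1,000

# Levels are strongly correlated because of the shared downward trend...
level_corr = np.corrcoef(pm25.ravel(), mort.ravel())[0, 1]

# ...but year-over-year CHANGES (which remove the trend) are essentially uncorrelated.
d_pm25 = np.diff(pm25, axis=1)
d_mort = np.diff(mort, axis=1)
change_corr = np.corrcoef(d_pm25.ravel(), d_mort.ravel())[0, 1]

print(f"Correlation of levels:  {level_corr:.2f}")   # large, driven by the common trend
print(f"Correlation of changes: {change_corr:.2f}")  # near zero: no support for causation
```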



On the other hand, hundreds of peer-reviewed articles and media accounts claim that reducing PM2.5 causes reductions in mortality risks (Wang et al., 2016). These often present sensational conclusions, such as that “An effective program to deliver clean air to the world's most polluted regions could avoid several hundred thousand premature deaths each year” (Apte et al., 2015). Similar to the original mistaken claims about the effects of coal-burning bans on all-cause mortality risks in Ireland, such conclusions conflate correlation and causation. This confusion is facilitated by the increasing use of computer models to project hypothetical benefits based on assumptions of unknown validity. For example, the EPA provides a free computer program, BenMAP, that enables investigators to quantify the human health benefits attributed to further reductions in criteria air pollutants such as ozone (O3) and fine particulate matter (PM2.5), based on embedded expert opinions about their concentration-response correlations. Activist organizations such as the American Thoracic Society (ATS) have used BenMAP simulations to develop impressive-looking and widely publicized estimates of health benefits from further reductions in air pollution, such as that “Approximately 9,320 excess deaths (69% from O3; 31% from PM2.5), 21,400 excess morbidities (74% from O3; 26% from PM2.5), and 19,300,000 adversely impacted days (88% from O3; 12% from PM2.5) in the United States each year are attributable to pollution exceeding the ATS-recommended standards” (Cromar et al., 2016). But the concentration-response relations assumed in the computer simulations are not established causal relations. To the contrary, as clearly and repeatedly stated in the technical documentation for BenMAP (March 2015, Table E-1, Health Impact Functions for Particulate Matter and Long-Term Mortality, pages 60-61), there is "no causality included" in BenMAP’s summary of health impact functions based on expert judgments. In more detail, the documentation explains that "Experts A, C, and J indicated that they included the likelihood of causality in their subjective distributions. However, the continuous parametric distributions specified were inconsistent with the causality likelihoods provided by these experts. Because there was no way to reconcile this, we chose to interpret the distributions of these experts as unconditional and ignore the additional information on the likelihood of causality." Similar caveats hold for other instances of the increasingly prevalent practice of predicting reductions in mortality caused by reductions in exposure concentration by applying previously estimated concentration-response associations and slope factors, without any independent effort to establish whether they are causal. For example, Lin et al. (2016) “estimate the number of deaths attributable to PM2.5, using concentration-response functions derived from previous studies” and conclude “that substantial mortality reductions could be achieved by implementing stringent air pollution mitigation measures” without noting that the previous studies referred to assessed only associations, not causation.
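For readers unfamiliar with how such attributable-death estimates are produced, the following is a minimal sketch of the generic log-linear health impact function commonly used in air pollution benefit calculations of this general kind. It is not the BenMAP code itself, and every parameter value is hypothetical; the caveat made in the text carries over directly, since the slope beta comes from an association-based concentration-response estimate rather than an established causal relation.

```python
import math

def attributable_deaths(beta, delta_c, baseline_rate, population):
    """Deaths 'attributable' to a change delta_c in annual average concentration,
    using the log-linear form delta_y = y0 * (1 - exp(-beta * delta_c)) * pop,
    where beta is a concentration-response slope taken from epidemiological studies."""
    return baseline_rate * (1.0 - math.exp(-beta * delta_c)) * population

# Hypothetical inputs: an association-based slope per ug/m3, a 2 ug/m3 reduction in
# annual average PM2.5, a baseline mortality rate of 8 per 1,000, and 1 million people.
beta = 0.006
delta_c = 2.0
estimate = attributable_deaths(beta, delta_c, baseline_rate=0.008, population=1_000_000)
print(f"Estimated deaths avoided per year: {estimate:.0f}")
# About 95 in this sketch: a large-sounding number that simply inherits whatever causal
# interpretation (or lack of one) the underlying concentration-response slope carries.
```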

In summary, similar to the case of the coal-burning bans in Ireland, substantial health benefits are attributed to tighter Clean Air Act regulations in the United States, with many calls for further reductions being voiced by activists, regulators, public health researchers and advocacy groups, and the media. Yet, it has not been shown that the regulations actually cause the benefits that are being attributed to them, and causal analysis approaches that do not make unverified modeling assumptions have not found any detectable beneficial effect of reducing current ambient concentrations of PM2.5 or ozone in recent decades, despite a voluminous scientific and popular literature projecting substantial health benefits that should be easily detectable if they were occurring (Cox and Popken, 2015).



Example: Food Safety Regulation Based on Assumed Causation
Between 2000 and 2005, the Food and Drug Administration’s Center for Veterinary Medicine (FDA-CVM), in conjunction with activist and advocacy organizations such as the Alliance for Prudent Use of Antibiotics (APUA) and the Union of Concerned Scientists, successfully pushed to ban the fluoroquinolone antibiotic enrofloxacin from use in chickens, on the grounds that its use might select for antibiotic-resistant strains of the common bacterium Campylobacter, potentially causing cases of antibiotic-resistant food poisoning that would be more difficult to treat than non-resistant cases. This concern certainly sounds plausible. It received extensive media coverage via stories that usually linked it to frightening statistics on the tens of thousands of cases per year of “superbug” infections with multidrug-resistant bacteria occurring in the United States. Few stories explained that those cases were caused by different bacteria, not by Campylobacter; that campylobacteriosis was specifically associated with consuming undercooked chicken in fast food restaurants, and not with chicken prepared at home or in hospitals; or that molecular fingerprinting showed that superbug infections were overwhelmingly caused by hospital use of antibiotics in people, rather than by animal antibiotic use on the farm. A quantitative risk assessment model used by the FDA simply assumed that reducing use of enrofloxacin in chickens would proportionally reduce the prevalence of fluoroquinolone-resistant cases of campylobacteriosis food poisoning: “A linear population risk model used by the U.S. Food and Drug Administration (FDA) Center for Veterinary Medicine (CVM) estimates the risk of human cases of campylobacteriosis caused by fluoroquinolone-resistant Campylobacter. Among the cases of campylobacteriosis attributed to domestically produced chicken, the fluoroquinolone resistance is assumed to result from the use of fluoroquinolones in poultry in the United States” (Bartholomew et al., 2005). This assumption swiftly made its way into risk numbers cited in activist reports and media headlines, where it was treated as a fact.

Industry and animal safety experts argued that this causal assumption was amply refuted by real-world data showing that the strains of fluoroquinolone-resistant Campylobacter found in people were hospital-acquired and not the same as those from animals; that campylobacteriosis was usually a self-limiting disease that caused diarrhea and then resolved itself, with no clear evidence that antibiotic therapy made any difference; that in the rare cases of severe infections, typically among AIDS patients or other immunocompromised people, physicians and hospitals did not treat campylobacteriosis with fluoroquinolones but generally prescribed a different class of antibiotics (macrolides); that even when a fluoroquinolone (specifically, ciprofloxacin) was prescribed as empiric therapy, resistance did not inhibit its effectiveness because therapeutic doses are high enough to overcome the resistance; that evidence from earlier antibiotic bans for farm animals in Europe showed that reducing use in animals increased illnesses in animals (and hence total bacterial loads on meat) but did not benefit human health; that fluoroquinolone-resistant strains of Campylobacter occur naturally whether or not enrofloxacin is used; and that the main effect of continued use of enrofloxacin was to keep animals healthy and well-nourished, reducing risks of foodborne bacteria, both resistant and non-resistant. These arguments were heard by an Administrative Law Judge (ALJ), a career FDA employee with a record of deciding cases in favor of his employer. The ALJ found the industry arguments unpersuasive, and the FDA withdrew approval of enrofloxacin use in poultry in 2005. Meanwhile, during the run-up to this decision from 2000 to 2005, consumer advocacy groups scored major successes in persuading large-scale food producers and retailers to reduce or eliminate use of antibiotics in chickens. Advocates for bans on animal antibiotics, from the Centers for Disease Control and Prevention (CDC), APUA, and elsewhere, many of whom had testified for the FDA, quickly declared the enrofloxacin ban a “public health success story” in both the press and scientific journals (Nelson et al., 2007).



After more than a decade, the causal aspects of this case are easier to see clearly. Already by 2007, some of the researchers who had most strongly advocated the ban were beginning to observe that the original FDA assumption that withdrawing enrofloxacin would proportionally reduce fluoroquinolone-resistant Campylobacter now appeared to be mistaken: the resistant strains persisted, as industry had warned (Price et al., 2007). By 2016, it was clear that the dramatic improvements in food safety and reductions in campylobacteriosis risk in the population that had been taking place prior to the voluntary cessations of antibiotic use in farm animals and the enrofloxacin ban, including a nearly 50% reduction in risk between 1996 and 2004, had stopped and reversed course, as shown in Figure 14.1. Advocates who had been vocal between 2000 and 2005 in explaining to Congress and the public why they thought that banning enrofloxacin would protect public health moved on to advocate banning other antibiotics. No post-mortem analysis or explanation has yet been offered for the data in Figure 14.1.
Figure 14.1 Reductions in campylobacteriosis risk stopped and reversed around 2005



Source: Powell, 2016
Yet, understanding why the enrofloxacin ban failed to produce the benefits that had been so confidently predicted for it (or, if benefits did occur, why they are not more apparent) might yield valuable lessons that would help future efforts to protect public health more effectively. Such lessons remain unlearned when the process for passing new regulatory actions relies on unproved causal assumptions for purposes of advocacy and calculation of hypothetical benefits of regulation – essentially, making a prospective case for regulation – with no need to revisit the assumptions and results after the fact to assess how accurate they were or, if they prove inaccurate, why they failed. In this example, it appears that the FDA’s causal assumption that risk each year is proportional to exposure (Bartholomew et al., 2005) was simply mistaken (Price et al., 2007). But there is no formal regulatory process at present for learning from such mistakes, for correcting them, or for preventing them from being made again in future calls for further bans.
Lessons from the Examples
The foregoing examples illustrate that regulators and administrative law judges sometimes deal with uncertainties about the benefits caused by a regulation by making large, unproved, simplifying assumptions. This may be done with the best of intentions. Uncertainty invites Rorschach-like projection of beliefs and assumptions based on the rich panoply of System 1 (“Gut”) thinking (see Chapter 12), genuinely felt concerns about the currently perceived situation, and hopes to be able to improve it by taking actions that seem sensible and right to System 1. Such projection often feels like, and is described as, careful and responsible reflection and deliberation followed by formation of considered expert judgments based on careful weighing of the totality of the evidence. The resulting judgments are typically felt to be valuable guides to action under uncertainty, not only by those who provide them, but also by those who receive them (Tetlock and Gardner, 2015). Beliefs suggested by System 1 (“Gut”) in the absence of adequate data or opportunity for System 2 (“Head”) analysis are often confidently held, easy to reinforce with confirming evidence, and difficult to dislodge with disconfirming evidence – but they are also often objectively poor guides to what will actually happen (ibid.; Gardner, 2009).

For air pollution and food safety alike, one such large assumption about causation is that risk of adverse health effects decreases in proportion to reductions in exposure to a regulated substance or activity. This is easy to understand and appeals to intuition. It leads to readily calculated predictions based on aggregate data: simply divide estimated cases of adverse health outcomes per year by estimated average annual exposure, if exposure is assumed to be the sole cause of the adverse health effects, as in the FDA example. Otherwise, regress adverse health outcomes against exposure, allowing for an intercept and other hypothesized contributing factors to explain any cases not attributed to exposure, as in air pollution health effects modeling. Either way, there is no need for complex modeling of effects of different combinations of risk factors for different individuals, or of interactions and dependencies among the hypothesized explanatory variables, as in the causal DAG models of Chapter 2. Such simplicity is often seen as a virtue (Bartholomew et al., 2005) rather than as omitting the very details that are essential for correctly understanding and quantifying effects caused specifically by exposure, and not by other factors with which it is associated. Assuming that all-cause or cardiovascular disease mortality risks will decrease in direct proportion to reduction of ambient concentrations of PM2.5 in air, or that drug-resistant foodborne illness counts will decrease in proportion to reduction of antibiotic used on the farm, provides simple, plausible-seeming slope factors for calculating hypothetical benefits of further regulation without the difficulty of building and validating models of a more complex reality.
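A minimal sketch of the two simple attribution recipes just described may help; the exposure and case counts below are hypothetical, and neither calculation says anything about whether the fitted slope is causal.

```python
import numpy as np

# Hypothetical annual data: an exposure measure and counts of adverse health outcomes.
exposure = np.array([10.0, 9.0, 8.5, 8.0, 7.0, 6.5])             # exposure units per year
cases    = np.array([520.0, 505.0, 500.0, 490.0, 480.0, 470.0])  # cases per year

# Recipe 1 (exposure assumed to be the sole cause): divide cases by exposure to get a
# single slope, then multiply it by any proposed exposure reduction to 'predict' benefits.
slope_proportional = cases.mean() / exposure.mean()

# Recipe 2 (regression with an intercept): allow a baseline of cases not attributed to
# exposure, and use the fitted slope as the benefit-per-unit-reduction factor.
slope_regression, intercept = np.polyfit(exposure, cases, 1)

print(f"Proportional slope: {slope_proportional:5.1f} cases per exposure unit")
print(f"Regression slope:   {slope_regression:5.1f} cases per exposure unit "
      f"(intercept {intercept:.0f} cases unrelated to exposure)")
# Both recipes yield a simple slope factor for projecting hypothetical benefits of
# reducing exposure; neither establishes that changing exposure would change cases.
```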

Historically, the resulting numbers have been sensational enough to garner prominent coverage in both scientific journals and popular media, where they are usually presented as if they were facts rather than assumptions (e.g., Cromar et al., 2016). Such coverage attracts the anxiety of activists and their resolve to take action to reduce exposures, and it encourages funding from agencies and other stakeholders to support further, similar assumption-driven research on how large the benefits of regulation might be. This cycle, and associated phenomena such as the social amplification of perceived risks as concern attracts more concern, are well documented in the social science and psychology of risk (Gardner, 2009). They are well served by simple assumptions and large risk numbers. By contrast, more complex and nuanced System 2 calculations suggesting that the quantitative difference in public health made by reducing exposures is at most vanishingly small (e.g., on the order of at most one extra case of compromised treatment of campylobacteriosis per hundred million person-years, and plausibly zero (Hurd and Malladi, 2008)) typically attract far less attention, and may be viewed with suspicion because they require more detailed data and calculations (Bartholomew et al., 2005).

The “risk reduction is proportional to exposure reduction” formulation of regulatory benefits encourages another System 1 habit that makes life seem simpler and more manageable (Kahneman, 2011): narrowly focusing on just what one cares about and what one can do about it. For example, in banning enrofloxacin, the FDA focused exclusively on preventing cases of drug-resistant food poisoning by controlling what they could control – use of an animal antibiotic. The historical evidence from Europe that such control caused no detectable reductions in human illness risks was irrelevant for this focus, as FDA’s risk-is-proportional-to-exposure model assumes no other possibilities. That adequate cooking of meats prior to consumption is the only known control measure that demonstrably reduces illness risks was likewise irrelevant for an agency that does not oversee food preparation. It was excluded from the FDA risk assessment by considering only the ratio of drug-resistant illnesses to drug use on the farm. Similarly, estimates of human health benefits from reducing PM2.5 have seldom inquired about other effects, such as whether cleaner air promotes warmer temperatures, with consequent man-made climate change implications for economic and human health risks.



In summary, although a sound BCA approach unambiguously requires assessing the total costs and benefits caused by a regulation, regulatory agencies, like most of us, cope with uncertainty and complexity in the causal impacts of actions by adopting narrowly focused agendas, restricted jurisdictions, and greatly simplified causal models that focus on just a few things. These typically include some actions that we can take (never mind that other, less costly, actions by us or others might work much better); the consequences we want them to produce (never mind their unintended, unanticipated, or out-of-scope consequences); and at most a very few other factors (never mind the large, complex, and uncertain outside world that may make the consequences of our actions quite different from what was intended or predicted). It is much easier to understand and make predictions with these simplified models than to develop and validate more complex and realistic causal models (Kahneman, 2011). Disregarding or downplaying most of the causal web in which our potential actions and desired consequences are embedded makes the effects of our own actions, and their potential benefits, loom larger in our calculations than they really are. This very human tendency to substitute simplified causal models for fuller and more realistic ones in the face of uncertainty and complexity (Kahneman, 2011; Thaler, 2015) is inconsistent with the requirements of the BCA principle, but may be the best that can be done in the absence of institutions that enforce a higher standard.