Causal Analytics for Applied Risk Analysis, by Louis Anthony Cox, Jr.



Decision-Making by Homo Economicus
BCA was developed by economists, and is most applicable to societies of purely rational individual decision-makers. Homo economicus, or ideally rational economic man, has several admirable characteristics not widely shared by real people (Gilboa and Schmeidler, 1989; Smith and von Winterfeldt, 2004); these are briefly recalled now. He does not engage in activities whose costs are clearly greater than their benefits, or make purchases or lifestyle choices that he is certain to regret later – unlike many rueful real-world recipients of predictable credit card bills and doctors’ admonishments. He does not over-value present as opposed to delayed rewards, certain as opposed to uncertain ones, or losses compared to gains (none of which distracts him from a dispassionate focus on final outcomes, independent of framing and reference point effects). He does not succumb to temptations that he knows are against his rational long-term self-interest, in the sense of making current choices that he knows he will later regret. He welcomes any relevant information that increases the ex ante expected utility of his decisions, whether or not it supports his preconceptions. He seeks and uses such information rationally and effectively whenever the cost of acquiring it is less than its benefit in increased expected utility. He learns from new information by conditioning crisp, coherent priors on it and then acts optimally – that is, to maximize subjective expected utility (SEU) – in light of the resulting posteriors and what is known about future opportunities and constraints.

Homo economicus is a dispassionate fellow, unswayed by useless regrets (no crying over spilt milk), endowment effects (grapes are not sweetened by ownership), status quo bias (he neither fears nor seeks change for its own sake), or sunk cost bias (being in for a penny does not affect his decision about whether to be in for a pound; no business or investment strikes him as too big to fail once failure has become the rational choice). He experiences neither thrills nor anxiety from gambling once his bets have been optimally placed; he does not hold on to losing stocks to avoid the pain of selling them and acknowledging a loss; and he never seeks to win back with an unfavorable bet what he has already lost. His Prospect Theory weighting function for probabilities is a 45-degree line, so that he neither overestimates the probabilities of rare events (thus driving over-investment in protecting against them), nor underestimates the probabilities of more common and familiar ones (thus driving under-investment in prudent protection against predictable risks, e.g., from floods or hurricanes). He does not especially favor his own prior opinions, intuitions, and beliefs (no confirmation bias) or eschew uncertain probabilities or outcomes compared to known ones (no Allais or Ellsberg paradoxes, no ambiguity aversion). His choices are dynamically consistent: what he plans today for his future self to do, his future self actually does when the future arrives. These and other characteristics of homo economicus can be succinctly summarized by saying that he is a subjective expected utility (SEU) decision-maker, conforming to the usual (Savage-style) axioms for rational behavior (Gilboa and Schmeidler, 1989; Smith and von Winterfeldt, 2004).
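
To make the 45-degree-line contrast concrete, the short Python sketch below compares rational (identity) probability weighting with the inverse-S-shaped weighting function estimated by Tversky and Kahneman (1992); the parameter value g = 0.61 is their estimate for gains and is used here purely for illustration.

# Prospect Theory probability weighting vs. the 45-degree line of homo economicus.
# w(p) = p^g / (p^g + (1 - p)^g)^(1/g)  (Tversky and Kahneman, 1992), g = 0.61 for gains.

def tk_weight(p, g=0.61):
    return p**g / (p**g + (1 - p)**g)**(1 / g)

for p in [0.001, 0.01, 0.1, 0.5, 0.9, 0.99]:
    print(f"p = {p:5.3f}   rational weight = {p:5.3f}   distorted weight = {tk_weight(p):5.3f}")

# Small probabilities are over-weighted (w(p) > p) and large ones under-weighted (w(p) < p),
# mirroring over-investment against rare risks and under-investment against familiar ones.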

However, perfect individual rationality does not necessarily promote effective collective choice. Numerous impossibility results in game theory and the theory of collective choice reveal the difficulty of constructing collective choice procedures (“mechanisms”) that will produce desirable results based on voluntary participation by rational people. Tradeoffs must be made among desirable characteristics such as budget balance (a mechanism should not run at a net loss), ex post Pareto-efficiency (a mechanism should not select an outcome that every participant likes worse than one that was rejected), voluntary participation, and non-dictatorship (a mechanism should reflect the preferences of more than one of the participants) (e.g., Mueller 2003; Man and Takayama, 2013; Othman and Sandholm, 2009). Similar tradeoffs, although less well known, hold when collective decisions must be made by rational individuals with different beliefs about outcomes (Hylland and Zeckhauser 1979; Nehring 2007), as well as when they have different preferences for outcomes.


Example: Pareto-Inefficiency of BCA with Disagreements about Probabilities

Suppose that members of a society (or an elected subset of members representing the rest) must collectively decide whether to pay for an expensive regulation with uncertain health benefits (or other uncertain benefits). Uncertainties for individuals will be represented by subjectively assessed probabilities, and the fact that these probabilities are not objectively determined is reflected in the fact that different people assess them differently. For concreteness, suppose that the collective choice to be made is whether to implement a costly proposed regulation to further reduce fine particulate air pollution in order to promote human health and longevity. Each individual believes that the benefits of the proposed regulation will exceed its costs if and only if (a) air pollution at current levels causes significantly increased mortality risks; and (b) the proposed regulation would reduce those (possibly unknown) components of air pollution that, at sufficiently high exposure concentrations and durations, harm health. Each individual favors the regulation if and only if the joint probability of events (a) and (b) exceeds 20%; that is, since the two events are judged to be independent, the product of their probabilities must exceed 0.2 for the estimated benefits of the proposed regulation to exceed its costs.

As a mechanism to aggregate their individual beliefs, the individuals participating in the collective choice have agreed to use the arithmetic averages of their individual probabilities for the relevant events, here (a) and (b). They will then multiply the aggregate probability for (a) by the aggregate probability for (b) and pass the regulation if and only if the resulting product exceeds 0.2. (Of course, many other approaches to aggregating or reconciling expert probabilities can be considered, but the point illustrated here with simple arithmetic averaging holds generally.)

Individual beliefs can be described by two clusters with quite different world views and subjective probability assessments. Half of the community (“pessimists”) fear both man-made pollution and our inability to control its consequences: they believe that air pollution probably does increase mortality risk, but that not enough is known for a regulation to reliably target and control the unknown components that harm human health. Specifically, they assign probability 0.8 to event (a) (exposure causes risk) and probability 0.2 to event (b) (regulation reduces the relevant components of exposures). The other half of the community (“optimists”) is skeptical that exposure increases risk, but believes that, if it does, then it is probably the components targeted by the regulation that do so (i.e., fine particulate matter rather than sulfates or something else). They assess a probability of only 0.2 for event (a) and a probability of 0.8 for event (b). Note that both sets of beliefs are consistent with the postulate that all individuals are perfectly rational, since the axioms of rationality do not determine how prior probabilities should be set (in this case, reflecting two different world views about the likely hazards of man-made pollution and our ability to control them).

Using arithmetic averaging to combine the subjective probability estimates of participating individuals (assumed to be half optimists and half pessimists), the average probability for event (a) is (0.8 + 0.2)/2 = 0.5, and the average probability for event (b) is likewise (0.2 + 0.8)/2 = 0.5. These group probability assessments imply that the collective joint probability of events (a) and (b) is 0.5*0.5 = 0.25. Since this is above the agreed-to decision threshold of 0.2, the regulation would be passed. On the other hand, every individual computes that the joint probability of events (a) and (b) is only 0.8*0.2 = 0.16. Since this is below the decision threshold of 0.2 required for projected benefits to exceed costs, no individual wants the regulation passed. Thus, aggregating individual beliefs about events leads to a decision that no one agrees with – a regrettable outcome.
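
The arithmetic can be checked with a few lines of Python; this minimal sketch simply restates the probabilities and decision threshold given above.

# Two camps, each comprising half the group, with opposite beliefs about
# events (a) "exposure causes risk" and (b) "regulation targets the true cause".
pessimist = {"a": 0.8, "b": 0.2}
optimist  = {"a": 0.2, "b": 0.8}

threshold = 0.2  # joint probability above which estimated benefits exceed costs

# Each individual's own joint probability (the events are judged independent):
for name, beliefs in [("pessimist", pessimist), ("optimist", optimist)]:
    joint = beliefs["a"] * beliefs["b"]  # 0.16 for both camps
    print(f"{name}: joint = {joint:.2f}, favors regulation: {joint > threshold}")

# Group decision: average each event's probability across individuals, then multiply.
avg_a = (pessimist["a"] + optimist["a"]) / 2  # 0.5
avg_b = (pessimist["b"] + optimist["b"]) / 2  # 0.5
print(f"group: joint = {avg_a * avg_b:.2f}, regulation passes: {avg_a * avg_b > threshold}")
# Every individual computes 0.16 (reject), yet the group computes 0.25 (pass).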

The important point illustrated by this example is not that one should not average probabilities, or that other mechanisms might work better. To the contrary, an impossibility theorem due to Nehring (2007) demonstrates that no method of aggregating individual beliefs and using them to make group decisions can avoid selecting dominated decisions (other than such trivial procedures as selecting a single individual as a “dictator” and ignoring everyone else’s beliefs). For any aggregation and decision rule that treats individuals symmetrically, one can construct examples in which the group’s decision is not favored by any of its members. (For example, using a geometric mean instead of an arithmetic mean would resolve the specific problem in this example, but such a procedure would still select dominated choices in slightly modified versions of the example.) Thus, the general lesson, illustrated here for the specific aggregation mechanism of averaging individual probabilities to obtain collective ones, is that when probabilities of events are not known and agreed upon, and opinions about them are sufficiently diverse, collective decision mechanisms that combine the probability judgments of multiple experts or participants to determine what acts should be taken in the public interest risk producing regrettable collective choices with which no one agrees.


Example: Impossibility of Pareto-Efficient Choices with Sequential Selection
A possible remedy for the Pareto-inefficient outcomes in the preceding example would be not to combine individual beliefs about component events at all, but instead to elicit from individuals their final, holistic preferences for, or evaluations of, collective actions. For example, each individual might estimate his own net benefit from each alternative action (pass or reject the proposed regulation, with a proposed tax or other measure to pay for it if it is passed), and then society might take the action with the largest sum of estimated individual net benefits. This would work well in the preceding example, where everyone favors the same collective choice (albeit for different reasons, based on mutually inconsistent beliefs). But it leaves the resulting decision process squarely in the domain of other well-known impossibility theorems that apply when individuals directly express preferences for alternatives.

As an example, suppose a society of three people (or a Congress of three representatives of a larger society) makes collective choices by voting among various proposed regulatory alternatives as the relevant bills are brought forward for consideration. Suppose that the legislative history is such that, in the following list of possible alternatives, the choice between A and B comes to a vote first (e.g., because advocates for PM2.5 reduction organize themselves first or best), and that later the winner of that vote is run off against alternative C (perhaps because O3 opponents propose their bill later, and it is assumed that the current cost-constrained political environment will allow at most one such pollution reduction bill to be passed in the current session). Finally (maybe in the next session, with an expanded regulatory budget, or perhaps as a rider to an existing bill), alternative D is introduced, and run off against whichever of alternatives A-C has emerged as the collective choice so far. Here are the four alternatives considered:

A: Do not require further reductions in any pollutant

B: Require further reductions in fine particulate matter (PM2.5) emissions only

C: Require further reductions in ozone (O3) only

D: Require further reductions in both PM2.5 and O3.

Individual preferences are as follows (with “>” interpreted as “is preferred to”):


  1. A > D > C > B

  2. B > A > D > C

  3. C > B > A > D

For example, individual 1 might believe that further reducing air pollution creates small (or no) health benefits compared to its costs, but believes that, if needless costs are to be imposed, they should be imposed on both PM2.5 and O3 producers (with a slight preference for penalizing the latter, if a choice must be made). Individual 2 believes that PM2.5 is the main problem, and that dragging in ozone is a waste of cost and effort; individual 3 believes that ozone is the main problem.

Applying these individual preferences to determine majority votes, it is clear that B will be selected over A (since B is preferred to A by individuals 2 and 3). Then, B will lose to C (since 1 and 3 prefer C to B). Finally, D will be selected over C (since 1 and 2 prefer D to C). So, the predictable outcome of this sequence of simple majority votes is that alternative D will be the society’s final collective choice, i.e., to require further reductions in both pollutants. But this choice is clearly Pareto-inefficient (and, in that sense, regrettable): everyone prefers option A (no further reduction in pollutants), which was eliminated in the first vote, to option D (further reductions in both pollutants), which ended up being adopted.
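
The agenda and its outcome can be verified with a short Python sketch; the three preference orders are exactly those listed above, and the agenda (A vs. B, then the winner vs. C, then the winner vs. D) follows the legislative history described in the text.

# Preference orders, from most- to least-preferred, for the three voters.
prefs = [
    ["A", "D", "C", "B"],  # individual 1
    ["B", "A", "D", "C"],  # individual 2
    ["C", "B", "A", "D"],  # individual 3
]

def majority_winner(x, y):
    """Return whichever of x and y a majority of voters ranks higher."""
    votes_for_x = sum(1 for p in prefs if p.index(x) < p.index(y))
    return x if votes_for_x > len(prefs) / 2 else y

winner = majority_winner("A", "B")     # B beats A (voters 2 and 3)
winner = majority_winner(winner, "C")  # C beats B (voters 1 and 3)
winner = majority_winner(winner, "D")  # D beats C (voters 1 and 2)
print("Final collective choice:", winner)  # D

# Yet A Pareto-dominates D: every voter ranks A above the adopted alternative D.
print(all(p.index("A") < p.index("D") for p in prefs))  # True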



A central theme of collective choice theory for societies of rational individuals is that such perverse outcomes occur, in the presence of sufficiently diverse preferences, for all possible collective choice mechanisms (including those in which BCA comparisons are used to compare pairs of alternatives), provided that non-dictatorship or other desired properties hold (e.g., Mueller 2003; Man and Takayama, 2013).

How Real People Evaluate and Choose Among Alternatives
Real people are quite different from homo economicus (Gilboa and Schmeidler, 1989; Smith and von Winterfeldt, 2004). Psychologists, behavioral economists, marketing scientists, and neuroscientists studying choices have demonstrated convincingly that most people (including experts in statistics and decision science) depart systematically from all of the features of purely rational decision-making discussed above (e.g., Kahneman, 2011). To a very useful first approximation, most of us can be described as making rapid, intuitive, emotion-informed judgments and evaluations of courses of action (“System 1” judgments, in the current parlance of decision psychology), followed (time and attention permitting) by slower, more reasoned adjustments (“System 2” thinking) (ibid.).
The Affect Heuristic Affects Risky Choice and BCA Evaluations via a Network of Decision Biases
Much of System 1 thinking, in turn, can be understood in terms of the affect heuristic, according to which gut reaction – a quick, automatically generated feeling about whether a situation, choice, or outcome is good or bad – drives decisions. For most decisions and moral judgments, including those involving how to respond in risky situations, the alternative choices, situations, or outcomes are quickly (perhaps instinctively) categorized as “bad” (to be avoided) or “good” (to be sought). Beliefs, perceptions, and System 2 rationalizations and deliberations then tend to align behind these prompt evaluations. This approximate account, while over-simplified, successfully explains many of the departures of real preferences and choice behaviors from those prescribed by expected utility theory, and is consistent with evidence from neuroeconomics studies of how the brain processes risks, rewards, delays, and uncertainties (including unknown or “ambiguous” ones) in arriving at decisions. For example, immediate and certain rewards are “good” (positive valence). They are evaluated by different neural circuits than rewards that are even modestly delayed or uncertain, perhaps explaining the observed “certainty effect” of relative over-weighting of rewards received with certainty. Conversely, immediate, certain losses are typically viewed as “bad” and are disproportionately avoided: many people will not buy with cash (immediate loss) what they will buy with credit cards (delayed loss). More generally, real people often exhibit approximately hyperbolic time discounting, and hence dynamic inconsistency: someone who would always prefer $1 now to $2 six months from now may nonetheless also prefer $2 in 36 months to $1 in 30 months. The conflict between the high perceived value of immediate temptations and their lower perceived value (or even negative net benefit) when viewed from a distance in time explains many a broken resolution and the resulting predictable regret.
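
The reversal in the $1-versus-$2 example can be reproduced with the simple hyperbolic discount function V = A/(1 + kt); the functional form is standard, but the discount rate k = 0.2 per month used in this Python sketch is chosen purely for illustration (any k > 1/6 produces the reversal).

# Hyperbolic discounting: present value V = A / (1 + k*t), with t in months.
k = 0.2  # illustrative monthly discount rate (assumed)

def present_value(amount, delay_months):
    return amount / (1 + k * delay_months)

# Choosing now: $1 today vs. $2 in six months.
print(present_value(1, 0), present_value(2, 6))    # 1.00 vs. ~0.91 -> take the $1 now

# The same pair of options viewed 30 months in advance.
print(present_value(1, 30), present_value(2, 36))  # ~0.14 vs. ~0.24 -> plan to wait for the $2

# The preferred option flips as the dates draw near: dynamic inconsistency.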

Figure 12.1 provides a schematic sketch of some suggested relations among important decision biases. Although there are numerous details and a vast literature about relations among biases, the core relations in Figure 12.1 can be summarized succinctly as: WTP ← Affect heuristic → Learning aversion → Overspending → Rational regret. These components are explained next. The arrows in Figure 12.1 indicate a range of implication relations of various strengths and degrees of speculation, ranging from relatively weak (the bias at the tail of an arrow plausibly contributes to, facilitates, or helps to explain the one at its head) to strong (the bias at the tail of an arrow mathematically implies the one at its head under quite general conditions). For example, it may seem plausible that the certainty effect helps to explain hyperbolic discounting if delayed consequences are interpreted by the brain as being uncertain (since something unknown might happen in the future to prevent receiving them – one might be hit by a bus later today) (Prelec and Loewenstein, 1991; Saito, 2011a). Establishing or refuting such a speculation empirically might take considerable effort for an experimental economist, behavioral economist, or neuroeconomist (Dean and Ortoleva, 2012; Epper and Fehr-Duda, 2014). But mathematical conditions under which the certainty effect implies hyperbolic discounting (and also the common ratio effect found in the Allais Paradox) can be established fairly easily (e.g., Saito, 2011a, b). The arrows in Figure 12.1 suggest several such implications having varying degrees of support in the literature; the cited references provide details.


Figure 12.1. Suggested relations among decision biases. (An arrow from A to B indicates that bias A implies, contributes to, or facilitates bias B.)


Some of the most striking implications in Figure 12.1 concern the consequences of ambiguity aversion, i.e., reluctance to take action based on beliefs about events with unknown objective probabilities (and willingness to pay to reduce uncertainty about probabilities before acting). An ambiguity-averse decision maker would prefer to use a coin with a known probability of heads, instead of a coin with an unknown probability of heads, whether betting on heads or on tails; this is inconsistent with SEU (since revealing a preference for the coin with known probability of heads when betting on heads implies, in SEU, that one considers the other coin to have a smaller probability of heads, and hence a larger probability of tails). Proposed normative models of decision-making with ambiguity aversion lead to preferences for acts that can be represented as maximizing the minimum possible subjective expected utility when the probabilities of consequences for acts, although unknown, belong to a set of multiple priors (the Gilboa-Schmeidler multiple-priors representation), or to more general representations in which an additional penalty is added to each prior (Maccheroni et al., 2006). However, recent critiques of such proposed “rational ambiguity-aversion” models have pointed out the following implications (Al-Najjar and Weinstein, 2009); a small numerical sketch of the maxmin rule follows the list below:

  • Ambiguity aversion implies that decisions do not ignore sunk costs, as normative theories of rational decision-making would prescribe;

  • Ambiguity aversion implies dynamic inconsistency, i.e., that people will make plans based on assumptions about how they will behave if certain contingencies occur in the future, and then not actually behave as assumed;

  • Ambiguity aversion implies learning aversion, i.e., unwillingness to receive for free information that might help to make a better (SEU-increasing) decision.
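
As promised above, here is a minimal Python sketch of the Gilboa-Schmeidler maxmin rule in the two-coin setting; the prior set {0.3, 0.7} assumed for the ambiguous coin is illustrative only.

# Maxmin expected utility with multiple priors (Gilboa-Schmeidler), sketched for
# bets that pay 1 if the named side comes up and 0 otherwise (utility = payoff).

known_coin     = [0.5]       # a single prior: P(heads) = 0.5
ambiguous_coin = [0.3, 0.7]  # multiple priors for the coin of unknown bias (assumed set)

def maxmin_eu(priors, bet_on_heads):
    # Expected utility under each prior; the ambiguity-averse agent plans for the worst case.
    return min(p if bet_on_heads else 1 - p for p in priors)

for bet_on_heads in (True, False):
    side = "heads" if bet_on_heads else "tails"
    print(f"bet on {side}: known coin {maxmin_eu(known_coin, bet_on_heads):.2f}, "
          f"ambiguous coin {maxmin_eu(ambiguous_coin, bet_on_heads):.2f}")

# The known coin is strictly preferred for BOTH bets (0.50 vs. 0.30), a pattern that
# no single subjective probability for the ambiguous coin can reproduce under SEU.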


Decision Biases Invalidate Straightforward Use of WTP Values
One clear implication of the network of decision biases in Figure 12.1 is that they make WTP amounts (both elicited and revealed) untrustworthy as a normative basis for quantifying the benefits of many risk-reducing measures, such as health, safety, and environmental regulations (Casey and Delquie, 1995). Important, systematic departures of elicited WTP from normative principles include the following:

  • Affect heuristic. People (and other primates) are willing to pay more for a small set of high-quality items than for a larger set that contains the same items with some lower-quality ones added as well (Kralik et al., 2012). More generally, in contrast to the prescriptions of SEU theory, expanding a choice set may change choices even if none of the added alternatives is selected, and may change satisfaction with what is chosen (Poundstone, 2010).

  • Proportion dominance. Willingness-to-pay is powerfully, and non-normatively, affected by the use of proportions. For example, groups of subjects typically are willing to pay more for a safety measure described as saving “85% of 150 lives” in the event of an accident than for a measure described as saving “150 lives” (Slovic et al., 2002, 2005). (Similarly, one might expect that many people would express higher WTP for saving “80% of 100 lives” than for saving “10% of 1000 lives,” even though all would agree that saving 100 lives is preferable to saving 80.) The high percentages act as cues triggering positive-affect evaluations, but the raw numbers, e.g., “150 lives,” lack such contextual cues, and hence do not elicit the same positive response. This aspect of choice as driven by contextual cues is further developed in Ariely’s theory of arbitrary coherence (Ariely, 2009).

  • Sensitivity to wording and framing. Describing the cost of an alternative as a “loss” rather than as a “cost” can significantly increase WTP (Casey and Delquie, 1995). The opportunity to make a small, certain payment that leads to a large return with small probability, and otherwise to no return, is assessed as more valuable when it is called “insurance” than when it is called a “gamble” (Hershey et al., 1982). Describing the risks of medical procedures in terms of mortality probabilities instead of equivalent survival probabilities can change preferences among them (Armstrong et al., 2002), since the gain frame and the loss frame trigger loss-averse preferences differently, in accord with Prospect Theory.

  • Sensitivity to irrelevant cues. A wide variety of contextual cues that are logically irrelevant can nonetheless greatly affect WTP (Poundstone, 2010). For example, being asked to write down the last two digits of one’s Social Security Number significantly affects how much one is willing to pay for consumer products (with higher SSNs leading to higher WTP amounts) (Ariely, 2009). The “anchoring and adjustment” heuristic (Kahneman, 2011) allows the mind to anchor on irrelevant cues (as well as relevant ones) that then shape real WTP amounts and purchasing behaviors (Poundstone, 2010).

  • Insensitivity to probability. If an elicitation method or presentation of alternatives gives different salience to attributes with different effects on affect (e.g., emphasizing amount vs. probability of a potential gain or loss), then choices among the alternatives may change (the phenomenon of elicitation bias, e.g., Champ and Bishop, 2006). Similarly, although rational (System 2) risk assessments consider the probabilities of different consequences, System 1 evaluations may be quite insensitive to the magnitudes of probabilities (e.g., 1 in a million vs. 1 in 10,000), and, conversely, overly sensitive to the change from certainty to near-certainty: “When consequences carry sharp and strong affective meaning, as is the case with a lottery jackpot or a cancer… variation in probability often carries too little weight. …[R]esponses to uncertain situations appear to have an all or none characteristic that is sensitive to the possibility rather than the probability of strong positive or negative consequences, causing very small probabilities to carry great weight.” (Slovic et al., 2002)

  • Scope insensitivity. Because the affect heuristic distinguishes fairly coarsely between positive and negative reactions to situations or choices, but lacks fine-grained discrimination of precise degrees of positive or negative response, WTP amounts that are largely driven by affect can be extraordinarily insensitive to the quantitative magnitudes of the benefits involved. As noted by Kahneman and Frederick (2005), “In fact, several studies have documented nearly complete neglect of scope in CV [contingent valuation stated WTP] surveys. The best-known demonstration of scope neglect is an experiment by Desvousges et al. (1993), who used the scenario of migratory birds that drown in oil ponds. The number of birds said to die each year was varied across groups. The WTP responses were completely insensitive to this variable, as the mean WTP’s for saving 2,000, 20,000, or 200,000 birds were $80, $78, and $88, respectively. … [Similarly], Kahneman and Knetsch (see Kahneman, 1986) found that Toronto residents were willing to pay almost as much to clean up polluted lakes in a small region of Ontario as to clean up all the polluted lakes in Ontario, and McFadden and Leonard (1993) reported that residents in four western states were willing to pay only 28% more to protect 57 wilderness areas than to protect a single area.”

  • Perceived fairness, social norms, and moral intensity. How much individuals are willing to pay for benefits typically depends on what they think is fair, on what they believe others are willing to pay, and on whether they perceive that the WTP amounts for others reflect moral convictions or mere personal tastes and consumption preferences (e.g., Bennett and Blaney, 2002). The maximum amount that a person is willing to pay for a cold beer on a hot day may depend on whether the beer comes from a posh hotel or a run-down grocery store, even though the product is identical in either case (Thaler, 1999).

Many other anomalies (e.g., preference reversal, the endowment effect, status quo bias) drive further gaps between elicited WTP and WTA amounts, and between both and normatively coherent preferences (see Figure 12.1). Taken together, they rule out any straightforward use of WTP values (whether elicited or inferred from choices) for valuing uncertain benefits. Indeed, once social norms are allowed as important influencers of real-world WTP values (unlike the WTPs in textbook BCA models of quasi-linear individual preferences), the question arises of whether coherent (mutually consistent) WTP values necessarily exist at all. Simple examples show that they may not.

