

The Psychology of Judgment & Decision Making

  • MIS 696A – Readings in MIS (Nunamaker)

  • 05 November 2003

  • [Cha / Correll / Diller / Gite / Kim / Liu / Zhong]


SECTION I: Perception, Memory & Context

  • Hoon Cha & Jeff Correll



Chapter 1: Selective Perception

  • Hoon Cha



We Define First, and Then We See

  • People selectively perceive what they expect and hope to see



Examples

  • Any published book has been read possibly hundreds of times before printing, including by professional proofreaders.

  • And yet grammatical and other errors still make it into print. Why?

  • Because the mind helpfully "corrects" the errors our eyes take in: we perceive what we expect to see.



Lessons Learned

  • Before conducting your research and interpreting your results:

    • Ask yourself what expectations you brought into the situation
    • Consult with others who don't share your expectations and motives


Chapter 2: Cognitive Dissonance



  • People are motivated to reduce or avoid psychological inconsistencies.

    • Cognitive dissonance
  • When making inferences about themselves, people are in much the same position as an outside observer.

    • Self-Perception


Examples

  • Smokers find all kinds of reasons to explain away their unhealthy habit.

  • The alternative is to feel a great deal of dissonance.



Lessons Learned

  • A change in behavior can influence a change in attitude

  • During your research, get people to commit to owning an object (e.g., a system you build); they will then form more positive attitudes toward it.

  • Use systems development as a research methodology



Chapter 3: Memory & Hindsight Biases



"I knew it all along …"

  • Memory is reconstructive, not a storage chest in the brain.

    • Shattered memories
  • Because it can be embarrassing when things happen unexpectedly, people tend to view what has already happened as having been relatively inevitable and obvious.

    • Hindsight bias


Examples

  • Just before the election, people tend to be uncertain about who will win; but, after the election, they tend to point to signs that they now say had indicated clearly to them which candidate was going to win.

  • In other words, they are likely to remember incorrectly that they had known all along who the winning candidate was going to be.



Lessons Learned

  • During your research, explicitly consider how past events might have turned out differently.

  • Keep in mind the value of keeping accurate notes and records of past events



Chapter 4: Context Dependence

  • Jeff Correll



4 Illustrations of Context Effect



SECTION II: How Questions Affect Answers

  • Jeff Correll



Chapter 5: Plasticity

  • Jeff Correll



Chapter 6: Effects of Wording & Framing

  • Jeff Correll



SECTION III: Models of Decision Making

  • Chris Diller



Chapter 7: Expected Utility Theory



Classic Utility Theory

  • Example: Self-Test Question #30

  • The "St. Petersburg Paradox"

    • Question initially posed by Nicolas Bernoulli (1713)
    • "Solution" provided by Daniel Bernoulli (1738/1754)


Expected Utility Theory

  • Expected Utility Theory

    • Developed by von Neumann & Morgenstern (1947)
    • The marginal value of money DECLINES with the amount won (or already possessed)
    • Normative … NOT descriptive!


Expected Utility Theory

  • "Rational Decision Making" Assumptions

    • Ordering = Alternatives can be ranked, or one is indifferent between them
    • Dominance = Choose the alternative with the better outcome(s)
      • "Weakly" dominant vs. "Strongly" dominant
    • Cancellation = Ignore factors/consequences identical across alternatives
    • Transitivity = If A > B and B > C … then A > C!
    • Continuity = At some odds, a gamble between the best and worst outcomes beats a sure intermediate outcome
    • Invariance = Unaffected by the way alternatives are presented
  • A Major Paradigm with Many Extensions



Chapter 8: Paradoxes in Rationality



The Allais Paradox

  • Example: Self-Test Question #28

  • Maurice Allais (1953)

    • Showed how the Cancellation Principle is violated
    • The addition of equivalent consequences CAN lead people to make different (irrational?) choices
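
A worked sketch using the classic payoffs from textbook treatments of the paradox (the specific figures are not on the slides):

```python
# Choice 1: A = $1M for sure
#           B = 10% chance of $5M, 89% chance of $1M, 1% chance of $0
# Choice 2: C = 11% chance of $1M, 89% chance of $0
#           D = 10% chance of $5M, 90% chance of $0
# C and D are just A and B with a common consequence (an 89% chance
# of $1M) removed from both sides, so by Cancellation, preferring A
# over B should imply preferring C over D. Most people choose A and D.

def expected_value(gamble):
    return sum(p * x for p, x in gamble)   # amounts in $ millions

A = [(1.00, 1.0)]
B = [(0.10, 5.0), (0.89, 1.0), (0.01, 0.0)]
C = [(0.11, 1.0), (0.89, 0.0)]
D = [(0.10, 5.0), (0.90, 0.0)]

for name, g in (("A", A), ("B", B), ("C", C), ("D", D)):
    print(name, expected_value(g))   # A=1.0, B=1.39, C=0.11, D=0.5
# No assignment of utilities consistent with EUT can rank
# both A > B and D > C at the same time.
```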


Ellsberg's Paradox

  • Daniel Ellsberg (1961)

    • Also showed how the Cancellation Principle is violated
    • People make different (irrational?) choices in order to avoid ambiguous probabilities
  • Example: Urn with 90 balls (R/B/Y)
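
A small enumeration sketch of the urn argument (Python, invented for this summary): the urn holds 30 red balls and 60 black/yellow balls in unknown proportion; people typically bet on Red over Black, yet on Black-or-Yellow over Red-or-Yellow.

```python
# Whatever the true number of black balls, the two choices should agree:
# Red beats Black exactly when Red-or-Yellow beats Black-or-Yellow,
# because the two pairs differ only in what Yellow pays (Cancellation).

for black in range(0, 61):                 # every possible composition
    yellow = 60 - black
    p_red             = 30 / 90
    p_black           = black / 90
    p_red_or_yellow   = (30 + yellow) / 90
    p_black_or_yellow = 60 / 90            # known, regardless of the split
    assert (p_red > p_black) == (p_red_or_yellow > p_black_or_yellow)

print("No composition of the urn justifies the typical choice pattern;")
print("people pay to avoid ambiguous probabilities, not bad odds.")
```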



Intransitivity

  • "Money Pump"

    • Decision makers with intransitive preferences
      • A < B  B < C  A > C
  • Amos Tversky (1969)

    • Harvard study: 1/3 of subjects displayed this!
  • "Committee Problem" Example



Preference Reversals

  • Sarah Lichtenstein & Paul Slovic (1971)

    • Preferences can be "reversed" depending upon how they are elicited
      • High payoff vs. High probability
    • Choosing between a PAIR of alternatives involves different psychological processes … than bidding on a particular alternative separately
  • Preference reversals exist even for experienced DMs in real life!

    • Example: a study of Las Vegas bettors & dealers


Conclusions

  • Violations of EUT are not always irrational!

    • Approximations simplify difficult decisions
    • They increase efficiency by reducing cognitive effort
    • They often lead to decisions similar to optimal strategies
    • They assume, however, that the world is NOT designed to exploit the approximations being used
    • A decision strategy that cannot be defended as logical may still be rational if it yields a quick approximation of a normative strategy that maximizes utility.


Chapter 9: Descriptive Models of DM



Satisficing

  • Herb Simon Blows Up EUT (1956)

    • EUT's simplifying assumptions make decision problems tractable:
      • DMs are assumed to have complete information
      • DMs are assumed to understand and USE this information
      • DMs are assumed to compare calculations & maximize utility
    • Simon says: people "satisfice" rather than optimize
      • "People often choose a path that satisfies their most important needs, even though the choice may not be ideal or optimal."
      • Humans' adaptive nature falls short of economic maximization


Prospect Theory

  • Daniel Kahneman & Amos Tversky (1979)

    • Prospect Theory differs from EUT in two big ways:
      • Replaces "Utility" with "Value" (net wealth vs. gains/losses)
      • The value function for losses differs from the one for gains
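
A sketch of the value function; the functional form and parameters (alpha = beta = 0.88, lambda = 2.25) are Tversky & Kahneman's later (1992) estimates, used here only for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value of a gain or loss x, relative to x = 0."""
    if x >= 0:
        return x ** alpha              # concave for gains: risk averse
    return -lam * ((-x) ** beta)       # convex and STEEPER for losses

print(value(100))    # ~57.5
print(value(-100))   # ~-129.4: the loss looms larger than the equal gain
```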


Prospect Theory

  • George Quattrone & Amos Tversky (1988)

    • Explored "loss aversion" & its real-world consequences:
      • Political ramifications – Incumbent re-elections
      • Commercial ramifications – Bargaining & negotiation
      • Personal ramifications – "The Endowment Effect"
        • Losses are felt much more strongly than gains!


Prospect Theory's Certainty Effect

  • Amos Tversky & Daniel Kahneman (1981)

    • Equal reductions in probability can have very different psychological impacts
      • Zeckhauser's Russian roulette: reducing 4 bullets to 3 vs. reducing 1 bullet to 0
    • People would rather eliminate a risk entirely than merely reduce it
      • Probabilistic Insurance – Kahneman & Tversky (1979)
      • Small probabilities are often "overweighted," inflating the importance of improbable events
      • Example: Self-Test Question #23


Prospect Theory's Pseudocertainty

  • Amos Tversky & Daniel Kahneman (1981)

    • Similar to Certainty Effect, this effect deals with apparent certainty rather than real certainty (Framing)
  • Slovic, Fischhoff, & Lichtenstein (1982)

    • Example: Vaccinations
    • People prefer the option that appeared to eliminate risk!
    • Other Examples: Marketing Tactics
      • Buy two, get one FREE (preferred) … versus 33% off!


Regret Theory

  • Prospect Theory's Premise

    • Compare gains & losses relative to a reference point
    • However, people also compare real outcomes with imagined ones!
  • "Counterfactual Reasoning"

    • Dunning & Parpal (1989) – The basis of Regret Theory
    • Compare decisions with what MIGHT have happened
  • Similar to Prospect Theory's risk aversion … but:

    • A "regret variable" is added to the utility function
    • Accounts for many previously-mentioned paradoxes


Multi-Attribute Choice

  • Einhorn & Hogarth (1981)

    • Consistency of goals/values, not objective optimality
    • Research: HOW (not how well) decisions are made
  • Compensatory Strategies (John Payne, 1982)

    • Used primarily for simple choices, few alternatives
    • Trade off low & high values on different dimensions
      • Linear Model (All attributes weighted → index score; see the sketch below)
      • Additive Differences Model (Only the differing attributes are weighted)
      • Ideal Point Model (Attributes evaluated by their distance from the ideal)
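
A minimal sketch of the Linear Model over invented attributes, weights, and alternatives:

```python
# Compensatory: a weighted sum lets a strength offset a weakness.
weights = {"cost": 0.5, "usability": 0.3, "vendor_support": 0.2}

alternatives = {
    "System A": {"cost": 9, "usability": 4, "vendor_support": 6},
    "System B": {"cost": 5, "usability": 9, "vendor_support": 8},
}

def index_score(attrs):
    return sum(weights[k] * v for k, v in attrs.items())

for name, attrs in alternatives.items():
    print(name, index_score(attrs))          # A: 6.9, B: 6.8
print("Choose:", max(alternatives, key=lambda n: index_score(alternatives[n])))
```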


Noncompensatory Strategies

  • R.M. Hogarth (1987)

    • Used primarily for complex choices, many alternatives
    • These do NOT allow trade-offs between attributes!
    • The most well-known examples include (two are sketched below):
      • Conjunctive Rule (Satisficing! Criterion ranges → acceptance/rejection)
      • Disjunctive Rule (Each alternative is measured by its BEST attribute)
      • Lexicographic Strategy (Step-wise evaluation of attributes → keep the superior)
      • Elimination-By-Aspects (Step-wise evaluation of attributes → drop the inferior)
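
Hedged sketches of two of these rules over the same invented data; note that neither lets a weak attribute be rescued by a strong one:

```python
alternatives = {
    "System A": {"cost": 9, "usability": 4, "vendor_support": 6},
    "System B": {"cost": 5, "usability": 9, "vendor_support": 8},
    "System C": {"cost": 7, "usability": 7, "vendor_support": 3},
}

def conjunctive(alts, cutoffs):
    """Satisficing: keep only alternatives that meet EVERY cutoff."""
    return [n for n, a in alts.items()
            if all(a[k] >= c for k, c in cutoffs.items())]

def lexicographic(alts, priority):
    """Rank on the most important attribute; break ties with the next."""
    return max(alts, key=lambda n: tuple(alts[n][k] for k in priority))

print(conjunctive(alternatives,
                  {"cost": 5, "usability": 5, "vendor_support": 5}))
# ['System B'] -- A fails usability, C fails vendor_support
print(lexicographic(alternatives, ["usability", "cost", "vendor_support"]))
# 'System B' -- decided almost entirely by the top-priority attribute
```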


The More Important Dimension

  • Slovic (1975)

    • "Given a choice between two equally-valued alternatives, people tend to choose the alternative that is superior on the more important dimension."
    • Example: Baseball players' statistics
    • Results indicate that people DO NOT choose randomly!


Applications to MIS & Academia

  • Normative vs. Descriptive Approaches

  • Importance of Framing

  • Understanding "Rationality" in DM

    • Departmental budget "battles"
    • Competition for research funding
    • Analysis of technology adoption
    • Personnel decisions
    • Selling "transitioned" research products/tools


Break



SECTION IV: Heuristics & Biases

  • Sanghmitra Gite, Iljoo Kim & Jun Liu



Heuristics and Biases

  • Sanghmitra Gite



He loves me…he loves me not…



Heuristics or Hueristics?



The Representativeness Heuristic



The Law of Small Numbers



Neglecting Base Rates



Nonregressive Prediction

  • "Regression to the mean is the phenomenon in which high or low scores tend to be followed by more average scores… The tendency to overlook regression leads to critical errors of judgment."

  • Examples

    • Baseball Magic
    • The Sports Illustrated Jinx
  • Nisbett and Ross:

  • "…measures designed to stem a crisis (a sudden increase in crime, disease… or a sudden decrease in sales, rainfall, or Olympic gold medal winners) will, on the average, seem to have greater impact than there actually has been…"
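
A simulation sketch of the "jinx" (all numbers invented): performance is modeled as stable skill plus luck, so period-1 stars are partly lucky and fall back in period 2 even though nothing about them changed.

```python
import random
random.seed(1)

skill   = [random.gauss(0, 1) for _ in range(1000)]       # stable ability
period1 = [s + random.gauss(0, 1) for s in skill]         # ability + luck
period2 = [s + random.gauss(0, 1) for s in skill]         # fresh luck

top = sorted(range(1000), key=lambda i: period1[i], reverse=True)[:50]
avg1 = sum(period1[i] for i in top) / 50
avg2 = sum(period2[i] for i in top) / 50
print(f"Top 50 in period 1: {avg1:.2f} -> period 2: {avg2:.2f}")
# The period-2 average regresses toward the mean; no jinx required.
```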



The Availability Heuristic



The Limits of Imagination



Probability and Risk



Compound Events
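
A quick numeric sketch of why compound events mislead (the 95%-per-step, 10-step project is invented for illustration):

```python
# Even when every step is very likely, the conjunction of many steps
# is not -- and the disjunction "at least one step fails" is likelier
# than intuition suggests.

p_step, n_steps = 0.95, 10
p_all_succeed = p_step ** n_steps          # conjunctive event
p_any_failure = 1 - p_all_succeed          # disjunctive event

print(f"all 10 steps succeed: {p_all_succeed:.2f}")   # ~0.60
print(f"at least one fails:   {p_any_failure:.2f}")   # ~0.40
# People tend to overestimate conjunctive probabilities and
# underestimate disjunctive ones -- hence: break compound events apart.
```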



Conservatism and the Perception of Risk



Take This Away…

  • Maintain accurate records

  • Beware of wishful thinking

  • Break compound events into simple events

  • Importance in your research:

    • Use heuristics and probability measures carefully

    • Be aware of the biases arising from each type of heuristic

    • Apply corrective measures to your data to "undo" the effect of biases

    • Don't let your "desire" for accuracy sway you toward inaccurate data



Chapter 13: Anchoring & Adjustment

  • Iljoo Kim



Anchoring and Adjustment

  • Insufficient adjustment up or down from an original starting value, or "anchor"

  • Example: number estimates made after spinning a wheel of fortune

  • Anchoring is a robust phenomenon: the size of the effect grows with the discrepancy between the anchor and the "pre-anchor estimate"



What I really mean is…?

  • Arbitrary numerical references may have unintended effects

    • "Would you support a U.S. attempt to build a defensive system against nuclear missiles and bombers if it were able to shoot down 90% of all Soviet nuclear missiles and bombers?"
    • "A defense that can protect against 99% of the Soviet nuclear arsenal may be judged as not good enough, given the destructive potential of the weapons that could survive"



Power in the real world

  • The real estate agents case

    • Agents were given different listing figures for the same property information (e.g., info about nearby properties)
    • Significant evidence of anchoring was found
  • What we can see…

    • Experts are not immune to anchoring
    • The effect is hard to notice in oneself
    • It is powerful in the real world



Things we learned

    • Try not to be bound by previous results or existing perceptions
    • Be aware of any suggested values that seem unusually high or low
    • Generate an alternative anchor value that is equally extreme in the opposite direction
    • Realize that a discussion of best- or worst-case scenarios can lead to unintended anchoring effects
    • It is worth considering multiple anchors before making a final estimate


Chapter 14: The Perception of Randomness



Ch. 14 The Perception of Randomness

  • There are coincidences out there…

  • People tend to see patterns in randomness

  • Which one is randomly selected?

    • wwbbbwbwbbwbwww / wbwbwbwwbbwbwbw
    • People saw randomness where there was actually a pattern, and saw patterns where the sequence was actually random
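
A quick computation on the two slide sequences (Python, written for this summary): a truly random binary sequence alternates between symbols about half the time.

```python
seq1 = "wwbbbwbwbbwbwww"
seq2 = "wbwbwbwwbbwbwbw"

def alternation_rate(seq):
    """Fraction of adjacent pairs whose symbols differ."""
    changes = sum(a != b for a, b in zip(seq, seq[1:]))
    return changes / (len(seq) - 1)

print(f"seq1: {alternation_rate(seq1):.2f}")   # ~0.57, close to chance
print(f"seq2: {alternation_rate(seq2):.2f}")   # ~0.86, over-alternating
# seq2 "looks" random to most people, yet it alternates far more often
# than chance; clumpy runs like seq1's are what randomness really produces.
```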


Things we learned

  • Decision makers have a tendency to over-interpret chance events

  • Researchers should resist the temptation to view short runs of the same outcome as meaningful: Distinguish between a pattern and a coincidence!

  • Try! Try! And Try!



Chapter 15: Correlation, Causation & Control



Ch 15. Correlation, Causation, and Control

  • Correlation Assessments are not easy (Survey #14)



Illusory Correlation

  • The mistaken impression that two unrelated variables are correlated

    • e.g., Draw-A-Person test
  • Hard to eliminate



Invisible Correlations

  • Failing to see a correlation that does exist

    • Hard to detect even with frequent exposure
    • Usually stems from the absence of an expectation
    • e.g., the correlation between smoking and lung cancer long went unnoticed


Causation

  • Correlation != Causation

    • "Just as correlation need not imply a causal connection, causation need not imply a strong correlation"
  • Illusion of Control

    • The belief of having more control over chance outcomes than one really does
    • Arises from illusory correlation and perceived causation
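
A simulation sketch of the correlation/causation gap (all variables synthetic): a hidden confounder Z drives both X and Y, so they correlate strongly with no causal link between them.

```python
import random
random.seed(7)

n = 5000
z = [random.gauss(0, 1) for _ in range(n)]            # hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]           # caused by Z
y = [zi + random.gauss(0, 0.5) for zi in z]           # also caused by Z

def corr(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")   # ~0.8, yet X never causes Y
```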


Things we learned

  • Researchers should examine more than just the confirming, positive cases of a relationship

  • Take away biases

    • Do your judgments come from "observation" or from "expectation"?
  • Remember,

    • “Correlation != Causation”


SECTION V: The Social Side

  • Jun Liu



Chapter 17: Social Influences



Social Facilitation

  • What change in an individual's normal performance occurs when other people are present?

    • Performance of simple, well-learned responses is enhanced, while performance of complex, unmastered skills tends to be impaired.



Social Loafing & Bystander Intervention

  • People do not work as hard in groups as they do alone.

  • A bystander's decision to intervene is heavily influenced by the presence of others.

  • Possible cause: diffusion of responsibility



Social Comparison Theory

  • People evaluate their opinions and abilities by comparing themselves with others.

  • People tend to take cues from those who are similar to themselves

  • Social analgesia: social comparisons can influence even the perception of pain.



Lessons Learned: The Three Monks' Story



A New Version of the Three Monks' Story



Chapter 18: Group Judgments & Decisions



Group Errors and Biases

  • “Group-serving bias”: group members make dispositional attributions for group successes and situational attributions for group failures

  • “Outgroup homogeneity bias”: groups perceive their own members as more varied than members of other groups.



Are several heads better than one?

  • Groups usually perform somewhat better than their average individual member

  • Groups perform worse than the best individual in a statistical aggregate of people

  • Brainstorming is most effective when conducted by several people working independently rather than in a group session



The Benefits of Dictatorship

  • The best member of a group often outperforms the group as a whole

  • The "dictatorship" technique (adopting the best member's judgment) outperforms other decision techniques ("consensus", "delphi", "collective", etc.)

  • A good leader encourages all members to express an opinion



Lessons learned

  • "Three cobblers with their wits combined equal Zhuge Liang, the mastermind" (Chinese proverb)

  • What matters is not merely having several heads, but actually "putting heads together"

  • Implications for MIS researchers



SECTION VI: Common Traps

  • Mike Zhong



Chapter 19: Overconfidence



Overconfidence

  • Example:

    • Attack on Pearl Harbor
    • Columbia & Challenger disasters (The estimated launch risk was 1 catastrophic failure in 100,000 launches – equivalent to launching a shuttle once per day and expecting to see only one accident in three centuries)


Overconfidence

  • Description

    • Occurs when a person's confidence in the accuracy of their judgments exceeds their actual accuracy
    • The correlation between confidence and accuracy is often weak; overconfidence tends to be greatest on difficult questions
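
A sketch of a calibration check, the usual way overconfidence is measured; the (confidence, correct?) pairs below are invented:

```python
from collections import defaultdict

judgments = [(0.6, True), (0.6, False), (0.7, True), (0.8, True),
             (0.8, False), (0.9, True), (0.9, False), (0.9, False),
             (1.0, True), (1.0, False)]

buckets = defaultdict(list)
for conf, correct in judgments:
    buckets[conf].append(correct)

# Perfect calibration: items judged "90% sure" are right 90% of the time.
for conf in sorted(buckets):
    accuracy = sum(buckets[conf]) / len(buckets[conf])
    flag = "  <- overconfident" if conf > accuracy else ""
    print(f"confidence {conf:.0%}: accuracy {accuracy:.0%}{flag}")
```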


Overconfidence

  • Overconfidence in Research



Overconfidence

  • Remedies

    • An extensive literature review is not enough by itself
    • Stop to consider reasons why your judgment might be wrong
    • Because of your own confirmation bias, opinions from other researchers are valuable


Confirmation Bias

  • Example

    • Have we bought a bargain?


Confirmation Bias

  • Confirmation Bias in Research

    • Focusing on evidence that confirms our new ideas or hypotheses, while ignoring evidence against them
  • Remedy

    • Negative testing strategy
    • "Do all insects have six legs?" – try to find one that doesn't
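
A sketch of positive vs. negative testing on the insect question (the tiny "specimen list" is invented; leg counts are colloquial):

```python
specimens = [("ant", 6), ("beetle", 6), ("housefly", 6),
             ("caterpillar", 16),   # an insect larva: 6 true legs + prolegs
             ("bee", 6)]

# Positive test strategy: look only where confirmation is expected.
confirming = [name for name, legs in specimens if legs == 6]
print("confirming cases:", confirming)      # persuasive, but proves nothing

# Negative test strategy: hunt for the case that could falsify the rule.
counterexamples = [name for name, legs in specimens if legs != 6]
print("counterexamples:", counterexamples)  # one is enough to refute
```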


Chapter 20: Self-Fulfilling Prophecies



Self-fulfilling Prophecies

  • Example

    • Robert Rosenthal and Lenore Jacobson's classroom study (1968), also known as the Pygmalion Effect
  • Description

    • "The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true."


Self-fulfilling Prophecies

  • Using it

    • A prophecy can be deliberately used to shape a person's behavior.
  • Defending against it

    • Question others' assumptions about you if you do not wish to be pushed in their direction.


Chapter 21: Behavioral Traps



Behavioral Traps

  • Description

    • A course of action appears to be promising when embarked on, but later becomes undesirable and difficult to escape from.
  • Traps & Counter-traps



Behavioral Traps

  • Taxonomy

    • Time delay traps (short-term vs. long-term payoffs)
    • Ignorance traps (unforeseen negative effects)
    • Investment traps (sunk-cost effects)
    • Deterioration traps (changing benefits and costs)
    • Collective traps (individual self-interest leads to negative consequences for the whole)


Behavioral Traps

  • Avoiding behavioral traps in MIS research

    • To avoid time delay traps, balance short-term and long-term goals (design vs. implementation)
    • To avoid ignorance traps, conduct a comprehensive literature review before plunging into research work
    • To avoid collective traps, do not always depend on others in group research; do the best you can when working alone


Summary / Key Take-Aways

  • Changes in behavior can influence changes in attitude

  • The framing of questions/alternatives is important

  • Understand the "rationality" of DM (e.g., satisficing)

  • Be aware of biases arising from heuristics … apply corrective measures!

  • Don't over-interpret chance events … distinguish between patterns and coincidence!

  • The superior performance of groups comes not only from having "more heads than one" … but from putting those heads together!

  • Avoid time-delay traps … balance short-term and long-term goals!


