The Rationality of Irrationality




Uri Weiss*

The bounded rationality assumption does not contradict the rationality assumption. Specifically, in small decisions it is rational to act like boundedly rational agents. This proposition may provide answers to many challenges to rational choice economic theory that are posed by behavioral economics. In particular, it provides an answer to the "Rabin paradox," and challenges our understanding of how to refute the rationality assumption.

1. Introduction

Frank Knight wrote: "It is evident that the rational thing to do is to be irrational, where deliberation and estimation cost more than they are worth" (Knight 1921, p. 67, footnote).

Economic theory lost much by overlooking this footnote, even though the proposition is wrong, or at least inaccurate; we will explain why and attempt to correct it. Next, we will discuss the relationship between rationality and bounded rationality, and we will show that the assumption of bounded rationality does not contradict the assumption of rationality. Contrary to the claims of Herbert Simon and others, the bounded rationality assumption does not replace the rationality assumption; rather, it sharpens it. We will show that the claim that the bounded rationality assumption replaces the rationality assumption derives from Simon's incorrect interpretation of the rationality assumption.

Contrary to Knight's proposition above, a person who does not compute, when the cost of computation is greater than its benefit, is not irrational, but perfectly rational. However, a person who does compute in this situation is irrational (like a person who behaves in another irrational way, say by deciding according to an astrological rule instead of according to an efficient rule). Hence, even when the cost of computation is greater than its benefit, it is not rational to be irrational, but it is rational to be what is considered to be boundedly rational. Therefore, the corrected statement is:



Whenever deliberation and estimation cost more than they are worth, it is evident that the rational thing to do is to act like a boundedly rational agent; and moreover it is irrational not to act like a boundedly rational agent.

Knight's proposition was, of course, made before the discourse of bounded rationality, but Knight could equally have said:



It is evident that the rational thing to do is to act in a way that is usually considered irrational – i.e., to refrain from deliberating or estimating – where deliberation and estimation cost more than they are worth.

In the light of our correction of Frank Knight's proposition, let us turn to the relationship between rationality and bounded rationality. First, we will inquire how the relationship between rationality and bounded rationality is understood in economic theory, and we will show that this approach contains mistakes and confusion; in particular, it misses that bounded rationality does not contradict rationality. Once we understand the difference between bounded rationality and rationality to be that in the former there is a limitation of information and/or computational capacity, bounded rationality is included in rationality, i.e., bounded rationality is a subset of rationality.


Herbert Simon coined the concept of "bounded rationality," and saw it as replacing the concept of rationality. Later writers, such as Selten, likewise see the bounded rationality assumption as replacing the rationality assumption.

Of bounded rationality Simon (1955) wrote:



"Broadly stated, the task is to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist."

He saw bounded rationality as an approximation of rationality. Of rationality, such as it appears in economic theory, Simon wrote:



"Traditional economic Theory postulates an "economic man," who ,in the course of being "economic" is also "rational." This man is assumed to have knowledge of the relevant aspects of his environment which, if not absolutely complete, is at least impressively clear and voluminous. He is assumed also to have a well-organized and stable system of preferences, and a skill in computation that enables him to calculate, for the alternative courses of action that are available to him, which of these will permit him to reach the highest attainable point on his preferences scale."

If we adopt Simon's definition for rationality, the conclusion will be that the assumption of bounded rationality replaces the assumption of rationality, and this is consistent with Simon's challenge:

"I shall assume that the concept of "economic man" (and, I might add of his brother "administrative man") is in need of fairly drastic revision, and shall put forth some suggestions as to the direction the revision might take."

Indeed, according to Simon's definition of rationality, a boundedly rational person is not rational. It is very easy to agree with Simon that there is a computation cost, and therefore to accept his critique of the rationality assumption and, in particular, the need to replace it. But what precisely is the problem? Simon's definition of rationality is not the common definition in neo-classical economics. The definition Simon proposes is a straw man. Simon used the tactic that Schopenhauer calls exaggeration: the speaker first exaggerates his opponent's claim, which then becomes very easy to refute. That is exactly what Simon did: he exaggerated the rationality assumption, and then it became easy to refute. This tactic is, of course, invalid.


Unlike Simon's definition, the definition of rationality does not imply that a rational person has unlimited computation capacity or unlimited information. Lagueux (1997) claims that it is unclear who was the first to use the concept of rationality in economic theory. Max Weber defined rational behavior as an end-means orientation; it cannot be concluded from this that there are no limitations on a person's information or computation, and we can see computation capacity as a limited means. Furthermore, we will show that other definitions of rationality rooted in economic theory likewise do not lead to the conclusion that a rational person has unlimited computation capacity or unlimited information, and hence do not contradict bounded rationality.

Selten's (1991) definition of rational behavior is:



"Rational Economic Behavior is the maximization of subjectively expected utility."

If we adopt Selten's definition, the conclusion is that the bounded rationality assumption does not contradict the rationality assumption. According to Selten's definition, in small enough decisions it is rational not to compute, but to decide according to a rule of thumb. Nevertheless, Selten regards the bounded rationality assumption as replacing what he sees as an unrealistic rationality assumption: "In order to replace unrealistic rationality assumptions, we need theories of bounded rationality."

Aumann's (2006) definition of rationality is:

"A person’s behavior is rational if it is in his best interests, given his information."

The word "information" includes not only his information about reality (like the arrangement of pieces on the chessboard), but also his knowledge and his intellectual abilities (like how to checkmate his opponent with a rock and king versus just a king). Hence, the bounded rationality assumption does not contradict the rationality assumption; it sharpens it. Hence, in our opinion, the definition of Aumann should be sharpened as follows:

A person’s behavior is rational if it is in his best interests, given his information and abilities.

In other words, the rationality of a person should not be examined as if he were logically omnipotent, but according to his personal abilities. It is similar to the statement of Rabbi Zusha from Anipoli, one of the founding fathers of the Hasidic movement: "I do not fear to be asked in Heaven, why have you not been Moses, but why have you not been Zusha." That is, a person is rational if he chooses the right means to achieve his goals given his abilities. What makes a person rational is not his knowledge, but his use of his knowledge. This definition of rationality is much more reasonable.

The correct understanding of rationality was evident in the Common Law before economic theory invaded it. According to the Common Law, a person is negligent if she behaves in an unreasonable manner. When we come to determine what reasonable conduct is, we examine the subjective (personal) characteristics of the person. For example, the standard of caution imposed on an electrician is higher than the one imposed on a regular person. Law and economics, on the other hand, ignored this point and defined negligent behavior as conduct in which a person causes injury to another while the cost of preventing the injury is lower than the expected injury. Actually, the mistake in economic theory consists in ignoring the limited personal intellectual abilities of a rational person – a mistake that the law, which developed on a trial-and-error basis, was free from.

Having shown that a person with limited computation capacity is not excluded from the ranks of the rational, we wish to claim that economic theory only partly adopted the mistakes of Herbert Simon. The theory adopted his mistake that a person with limited computation capacity is excluded from the ranks of the rational, but it did not adopt his mistake that a person with limited information is excluded therefrom. It is well known that in models of rationality, such as those developed on the basis of Harsanyi's work, the agent sometimes has limited information. And, just as it is a mistake to claim that a person with limited information is not rational, so it is a mistake to claim that a person with limited computation capacity is irrational. The theory should correct Simon's mistake regarding computation and rationality, just as it corrected his mistake regarding information and rationality.



Actually, Simon confused logic and rationality. By boundedly rational Simon actually meant boundedly logical. A boundedly logical person is not necessarily boundedly rational, and a perfectly logical person is not necessarily perfectly rational. What determines the degree of rationality of a person is not his logical ability, but (inter alia) his use of his personal logical ability. Herbert Simon actually spoke about a rational person who is boundedly logical, and wrongly called him boundedly rational.

Second Part

Aumann (2005) claims that a significant shortcoming of economic theory is its failure to take into account the cost of calculation. In knowledge theory there is a fundamental axiom of logical omniscience, which means that agents know everything that follows logically from anything they know. Let us consider the dynamic dimension of this axiom: when does a rational agent compute? If we assume that the agent knows whether or not the computation costs exceed the computation benefits, and that there is no computation cost for the question of whether to compute, she will compute iff the benefits of computation are greater than its costs1. Thus, rational agents will behave differently toward small decisions as compared with big ones. In small enough decisions they will avoid computing and act like boundedly rational agents, while in big enough decisions they will compute and act like rational agents who bear no computation cost.

Thus, in real-life decision theory the shortcoming that Aumann points out is very problematic for small decisions, but much less so (and sometimes not so at all) for big decisions. In small enough decisions, models that fail to take computation costs into account will provide us with predictions different from those of models that do take computation costs into account. However, in big enough decisions, models that do not take computation costs into account will provide us with the same predictions as models that do take them into account, if we face a binary choice between computing and not computing. This may challenge the usefulness of experimental economics studies, including behavioral economics, vis-à-vis big enough decisions. Experimental and behavioral economics usually study human conduct in small-stakes experiments or in hypothetical questions. In making big decisions, however, people may behave differently than they would in making small decisions. More precisely, people behave like rational agents in making big enough decisions, but they behave like boundedly rational agents in making small enough decisions.

Thus, the proposition that in small enough decisions it is rational not to compute, but to act like boundedly rational agents, challenges the common understanding that we can refute the assumption that an agent is rational by showing that she sometimes makes mistakes, i.e., makes wrong decisions. If those are small enough decisions, as in the case of lab experiments, we can explain her wrong decisions by saying that it is rational for her not to compute, and hence to make errors. This does not mean that the assumption that she is rational is irrefutable. Rather, I wish to make the surprising claim that if there is a computation cost, an agent makes the right decisions with probability 1 iff she computes, and there is no computation cost for the question of whether or not to compute, then the assumption of her rationality may be refuted by showing that she always makes the right decisions. This claim contradicts our current understanding of economic theory, yet it is entirely consistent with our everyday experience. If a person never errs in small decisions, such as having clothes that are always spotlessly clean and pressed, we would not call her rational, but rather obsessive.
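For a hypothetical numerical illustration (the numbers are ours, and we anticipate the notation of the model in Section 3): suppose the wrong choice costs S = 0.5 utils relative to the right one, the agent errs with probability P = 0.1 if she does not compute, and computing costs f(c) = 0.2 utils. The expected benefit of computing is P*S = 0.05 < 0.2 = f(c), so it is rational not to compute; if instead S = 5, the expected benefit is P*S = 0.5 > 0.2 = f(c), and it is rational to compute.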
2. Motivation

Rabin (2000) has shown that a rational agent who behaves according to expected-utility theory assumptions and rejects a lottery of 0.5*11+0.5*(-10) at every wealth level will also reject a lottery of 0.5*∞+0.5*(-100). Rabin (2000) claims that his theorem shows that expected-utility theory is an utterly implausible explanation for appreciable risk aversion over modest stakes. In addition, Rabin and Thaler (2002) claim that because most people are not risk-neutral over modest stakes, expected-utility theory should be rejected by economists as a descriptive theory of decision-making.
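To see the logic behind Rabin's calibration (a sketch in our words, not Rabin's exact statement, for a differentiable concave utility function u): rejecting the 50-50 lose-10/gain-11 lottery at wealth w means u(w+11) – u(w) < u(w) – u(w-10). By concavity, u(w+11) – u(w) ≥ 11*u'(w+11) and u(w) – u(w-10) ≤ 10*u'(w-10); hence u'(w+11) < (10/11)*u'(w-10), i.e., marginal utility falls to less than 10/11 of its value over every span of 21 dollars at which the lottery is rejected. Iterating this bound over all wealth levels, marginal utility shrinks geometrically, so that beyond some wealth even unbounded gains add almost no utility – and this is what drives the extreme implication above.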

One might object that Rabin did not examine empirically whether people really reject small lotteries. Indeed, LeRoy (2003) claims that it is not true that in real life people reject such small lotteries with positive expected value. However, Ariel Rubinstein (2001) claims that Rabin's calibration relies on a mental experiment and rings true without any need for verification. Let us skip over this discussion and instead assume that it is true that in real life a significant number of people really will reject small lotteries like [11, -10] but accept big lotteries like [∞, -100]. If we accept this assumption, Rabin's proposition challenges the applicability of expected-utility theory to real life.

In this paper I wish to claim that in real life, in which there is a computation cost, a rational agent who acts according to the assumptions of expected-utility theory may reject the small lottery of [11, -10], yet accept the big lottery. That is so precisely because of the computation cost. When a lottery of [11, -10] is proposed to an agent, it is rational for her to avoid bearing the computation cost and to act according to the rational rule of thumb: "reject lotteries." I will explain shortly why this should be her rule of thumb.

In addition, I wish to challenge the validity of the conclusions about big decisions that behavioral economics draws from small-stakes experiments and hypothetical questions. Specifically, I wish to show that the fact that participants in small-stakes experiments sometimes make mistakes does not refute the proposition that their behavior is rational. I also wish to provide a new answer to the question of how the assumption of rationality may be refuted.
3. The Model

Let Co denote computation.


The set of alternatives of the agent is [Co, ~Co] × [A, ~A]. The agent chooses first between Co and ~Co, and then between A and ~A.

The computation cost is c.

The agent has a utility function U: [Co, ~Co] × [A, ~A] → R, composed of the utility u(x) of the chosen alternative x ∈ [A, ~A] and the disutility f(c) of bearing the computation cost: if the agent computes she obtains u(x) – f(c), and if she does not compute she obtains u(x).
U(Co) = max{u(A), u(~A)} – f(c)
U(Co) is the utility of the agent who computes. In this situation she computes and hence makes the right decision with probability one. Hence, when she computes, she will have the benefit of the right decision, which is max{u(A), u(~A)} , but bears the cost of computation f(c).
U(~Co) = (1-P)max{u(A), u(~A)} + Pmin{u(A), u(~A)}
U(~Co) is the utility of the agent who does not compute. In this situation she does not compute and hence makes the right decision with probability 1-P and the wrong decision with probability P. Hence, when she does not compute, she will have the utility of the right decision, max{u(A), u(~A)}, with probability 1-P and the utility of the wrong decision, min{u(A), u(~A)}, with probability P.
S:= max{u(A), u(~A)} – min{u(A), u(~A)}
S is the size of the decision: the gap in utility between the right decision and the wrong decision. It is important to distinguish between the stakes of the decision and the size of the decision. If it is a small-stakes decision, it will also be a small decision; however, a big-stakes decision may still be a small decision. For example, choosing between two very similar apartments is a small decision. A decision is small not only when the alternatives are similar, but also when the alternatives are dissimilar (even very much so) yet their utilities are similar.



The agent knows the game (and there is no computation cost for the question of whether to compute).
Results

The agent will choose Co and not ~Co, iff



U(Co) > U(~Co)

max{u(A), u(~A)} – f(c) > (1-P)*max{u(A), u(~A)} + P*min{u(A), u(~A)}

– f(c) > – P*max{u(A), u(~A)} + P*min{u(A), u(~A)}

max{u(A), u(~A)} – min{u(A), u(~A)} > f(c)/P

S > f(c)/P.

Hence, the agent will compute iff S > f(c)/P.


The conclusion is that if we assume that P and c are not correlated with S, then in this simple model there is a critical point f(c)/P such that if the decision is greater than that point (big enough decision) the agent will compute, and if the decision is less than that point (small enough decision), she will not compute.
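A minimal computational sketch of this decision rule follows (our own illustration in Python, with hypothetical numbers; the function names are not part of the formal model above):

def should_compute(u_A, u_not_A, P, comp_cost):
    """Return True iff computing pays: S > f(c)/P, equivalently P*S > f(c)."""
    S = abs(u_A - u_not_A)      # size of the decision
    return P * S > comp_cost    # expected benefit of computing vs. its cost

def expected_utility(u_A, u_not_A, P, comp_cost):
    """Expected utility of a rational agent who computes only when it pays."""
    best, worst = max(u_A, u_not_A), min(u_A, u_not_A)
    if should_compute(u_A, u_not_A, P, comp_cost):
        return best - comp_cost                # right choice for sure, cost is borne
    return (1 - P) * best + P * worst          # rule of thumb, errs with probability P

# A small decision (S = 1) versus a big one (S = 50), with P = 0.1 and f(c) = 0.3:
print(should_compute(10, 11, 0.1, 0.3))   # False: 0.1*1 = 0.1 < 0.3, so do not compute
print(should_compute(0, 50, 0.1, 0.3))    # True:  0.1*50 = 5 > 0.3, so compute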
In the above model, there is no computation cost for the choice of whether to compute, and the agent makes no mistake in that choice. However, we may also draw conclusions from this model about situations in which there is such a computation cost. We can conclude from this model the following impossibility theorem:

It is impossible to be free from mistakes in the choice between [A, ~A] without making a mistake in the choice of whether to compute, when the computation cost is greater than SP.



Thus, in small enough decisions it is impossible to be free from mistakes in the choice between [A, ~A] without making a mistake in the choice of whether to compute.

4. Discussion and Explanation

The proposition that the choice whether to compute is a function of the size of the decision also provides an answer to the Rabin paradox and explains why agents behave differently in small decisions than in big ones: the explanation for the Rabin paradox is the computation cost.

In the paper of Rabin, there is no computation cost, and so Rabin’s theorem is absolutely correct. However, in real life there is a computation cost. Hence, even if it is true that in real life people reject small lotteries with positive expected value, it does not imply the absurd results Rabin discussed.

Nevertheless, one may try to challenge our rule of thumb by asking why lotteries should be rejected. Our answer is that we can conclude from Aumann's agreement theorem (1976) that if two players in a zero-sum situation are not risk-lovers, have common priors, and there is common knowledge of rationality, then they will not come to a "lottery agreement." Hence, people learn by trial and error that generally it is rational to reject lotteries.

Thus, Rabin’s argument does not contradict the proposition that in real life, expected-utility theory is useful both as a descriptive theory and as a normative theory, if we model the situation correctly. To do so, we must take into account the computation cost. However, if we do not model the situation correctly, either by not taking the computation cost into account or by assuming that the computation cost contradicts classic expected-utility theory, then as far as small decisions are concerned, Rabin's proposition that expected-utility theory is not applicable to real life is true and also clear. Moreover, if we do not model the situation correctly, then expected-utility theory will be wrong about small decisions normatively, too, because it will lead to inefficient results. More precisely, it will lead the agent to make the right decision regarding [A, ~A] with probability 1, but it will not maximize her utility because of the computation cost.

What happens in the case of big decisions, if we model the situation incorrectly, by ignoring the computation cost? We have shown that in our simple and binary model, if the decision is big enough [if S >f(c) /P], the agent will compute and arrive at the right decision regarding [~A, A]. This means that an expected-utility model that does not take the computation cost into account will provide the same predictions as an expected-utility model that does take it into account.

Rabin (2000) says that “expected-utility theory is manifestly not close to the right explanation of risk attitudes over modest stakes.” However, Rabin does not tell us how big the decision needs to be in order for expected utility theory to be a good descriptive decision theory. He does however give us a clue when he concludes that “expected-utility theory may well be a useful model of the taste for very-large-scale insurance.” It seems that Rabin considers expected-utility theory to be a good descriptive theory only for decisions of this magnitude.

By contrast, our model provides us with the critical point at which expected-utility theory should not be rejected as a good descriptive theory even when the computation cost is ignored, namely, S > f(c) /P. We may conclude that even when expected-utility theory does not take into account the computation cost, it should not be rejected as a good normative and descriptive theory in much smaller decisions than very-large-scale insurance, which means that it describes many more real-life situations than Rabin suggests.

In addition, Rabin thinks that the right explanation for risk aversion in small decisions is loss aversion. We on the other hand maintain that an alternative explanation for risk aversion may be the computation cost: in small enough decisions it is inefficient to compute, which may lead to a wrong decision regarding [A, ~A]. That is, in small enough decisions it is rational not to compute, but rather to make a decision as would a boundedly rational agent in a state of the world in which there is no computation cost.

It is also important to emphasize that in our model the agent still maximizes her expected utility and in addition may be sensitive to her initial wealth, which is consistent with expected utility theory.

Furthermore, according to behavioral economics, systematic cognitive biases may lead to wrong decisions. However, even according to Kahneman, one can handle such biases by reformulating the question. An analogous conclusion follows from our proposition that in small enough decisions it is rational not to compute but rather to behave like a boundedly rational agent: in small decisions, it is rational not to bear the cost of reformulation, while in big decisions, it is rational to bear it. The conclusion is that behavioral economics is much more attractive in describing decision making when the stakes are small.

In addition, the proposition that the choice "to compute or not to compute" is a function of the size of the decision challenges the usefulness of many studies in experimental economics, including behavioral economics. If it is a small-stakes experiment, we cannot draw conclusions from it about the behavior of people in big decisions. Furthermore, even rational-choice models that take into account the computation cost lead to the conclusion that agents will act like boundedly rational agents in small enough decisions. Hence, the applicability of a rational-choice model to big enough real-life decisions cannot be refuted by small-stakes experiments, and the studies of Kahneman and Tversky do not refute rational choice or "real-life" expected-utility theory, i.e., an expected-utility theory that takes into account computation costs. Trying to refute expected-utility theory by showing that people do make mistakes in small decisions is like trying to refute Galileo's theory of the motion of bodies by repeating his experiment with two feathers and showing that a different result is obtained. In the economics experiment, as in the physics experiment, there is "friction" that has an impact on the result the theory predicts.

Assuming that in real life there is a computation cost, but not for deciding whether or not to compute – or alternatively that it is irrational to have a rule of computing regarding such decisions – and assuming that agents will make the right decisions with probability 1 iff they compute, then the proposition that a person is rational is not refuted by showing that she sometimes makes wrong decisions in small enough decisions, but rather by showing that she never makes wrong decisions in small enough decisions. Such a person is irrational because she always computes, even when it is inefficient to do so. This is consistent with everyday experience and common sense. If someone never errs in making small decisions – say she never wears clothes that are not spotlessly clean and pressed, or never forgets to close her door – we would not call such behavior rational, but obsessive.

Furthermore, the fact that the small-stakes experiments and hypothetical questions studied in experimental and behavioral economics contradict the common rational-choice models may be explained by the failure of those models to take the computation cost into account: they do not model the real-life situation correctly. Yet, even though rational-choice models that fail to take the computation cost into account give wrong predictions about small decisions, we cannot conclude that these models will give wrong predictions about big decisions. The reason, as we have shown, is that the failure of economic theory to take computation costs into account is much more acute in modeling small decisions than in modeling big decisions. Indeed, in our model, in which there is a binary choice between computing and not computing, failing to take computation costs into account will not lead to wrong predictions about the agent's choice between [A, ~A] in big enough decisions, i.e., when S > f(c)/P. However, it will lead to wrong predictions about the agent's choice between [A, ~A] when the decision is small enough, i.e., when S < f(c)/P.

Another interesting, counterintuitive conclusion from our model is that it is not true that the bigger the probability of making a mistake without computation, the bigger the probability of making a mistake. This is because if P > c/S, the risk-neutral agent will compute and make the right decision, whereas if P < c/S, the risk-neutral agent will avoid computation, which implies that her probability of making a mistake is P. Therefore, if the probability of making a mistake is small enough, the benefit of computation is so small that it is inefficient to compute, which increases the probability of making a mistake. This is consistent with our everyday experience: a person with a good memory may choose not to write down her appointments and so forget them more often than a person with a bad memory.
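A small hypothetical sketch of this non-monotonicity (our own illustration, in the same spirit as the code above, with c/S = 0.3):

def mistake_probability(P, c_over_S):
    """Realized probability of a wrong choice for a risk-neutral agent."""
    if P > c_over_S:     # computing pays, so she computes and never errs
        return 0.0
    return P             # computing does not pay, so she errs with probability P

for P in (0.05, 0.2, 0.4):
    print(P, "->", mistake_probability(P, 0.3))
# 0.05 -> 0.05, 0.2 -> 0.2, 0.4 -> 0.0: the agent who is most error-prone without
# computation ends up making the fewest mistakes.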


5. Conclusion

The bounded rationality assumption does not contradict the rationality assumption. Specifically, in small enough decisions it is rational not to compute, but to act like boundedly rational agents. This is also the answer to the Rabin paradox: when there is a computation cost, it is no longer true that the rejection of a lottery of [11, -10] by a rational agent who behaves according to the assumptions of expected-utility theory implies her rejection of a lottery of [∞, -100]. Hence, if we model the real-life situation correctly, i.e., take into account the computation cost, then Rabin does not challenge expected-utility theory as a descriptive theory. In addition, we may explain the results of behavioral economics experiments, which have been conducted with small stakes, by the proposition that in small enough decisions it is rational to act like a boundedly rational agent. Furthermore, this challenges the current understanding that one can refute the assumption that an agent is rational by showing that she sometimes makes wrong decisions: in small enough decisions a rational agent sometimes makes wrong decisions. Surprisingly, if there is a computation cost and in addition an agent makes the right decisions with probability 1 iff she computes, but there is no computation cost for the question of whether or not to compute, we can refute the assumption that an agent is rational by showing that she never makes wrong decisions.


6. References

Aumann, Robert J. (1976), "Agreeing to Disagree," Annals of Statistics 4, 1236-1239.

Aumann, Robert (2005), “Musings on Information and Knowledge,” Econ Journal Watch, 88-96.

Aumann, Robert (2006), "War and Peace" in Les Prix Nobel 2005, edited by K. Grandin, The Nobel Foundation, Stockholm, pp. 350-358.

Kahneman, Daniel and Amos Tversky (1984), “Choices, Values, and Frames,” American Psychologist 39, 341-350.

Knight, Frank H. Risk, Uncertainty and Profit. Boston: Houghton-Mifflin, 1921; reprinted New York: Sentry Press, 1964.

Lagueux, Maurice (1997), "The Rationality Principle and Classical Economics," paper presented at the History of Economics Society Conference, Charleston, S.C., June 20-23, 1997.

LeRoy, Stephen (2003), “Expected Utility: A Defense,” Economics Bulletin 7, 1-3.

Rabin, Matthew (2000), “Risk Aversion and Expected-Utility Theory: A Calibration Theorem,” Econometrica 68, 1281-1292.

Rabin, Matthew and Richard Thaler (2001), “Anomalies: Risk Aversion,” Journal of Economic Perspectives 15, 219-232.

Rabin, Matthew and Richard Thaler (2002), “‘Defending Expected Utility Theory’: Response” Journal of Economic Perspectives 16, 229-230

Rubinstein, Ariel (2001), “Comments on the Risk and Time Preferences in Economics,” Working Paper #6-02, Tel Aviv University.

Schopenhauer, Arthur

Selten, Reinhard (1991), "Evolution, Learning and Economic Behavior" Games and Economic Behavior 3, 3-24.



Simon, Herbert (1955), "A Behavioral Model of Rational Choice," The Quarterly Journal of Economics 69, 99-118.



* This working paper was written under the supervision of Prof. Robert J. Aumann, whose comments and suggestions, which significantly improved every part of this paper, I acknowledge with gratitude. Special thanks are due to Itai Arieli. I also wish to thank…

1 We assume that there is a binary choice: to compute or not to compute.



