An interview with Robert Aumann




A: I was born in 1930 in Frankfurt, Germany, to an Orthodox Jewish family. My father was a wholesale textile merchant, rather well-to-do. We got away in 1938. Actually, we had already planned to leave when Hitler came to power in 1933, but for one reason or another the emigration was cancelled and people convinced my parents that it wasn’t so bad; it will be okay, this thing will blow over. The German people will not allow such a madman to take over, etc., etc. A well-known story. But it illustrates that when one is in the middle of things it is very, very difficult to see the future. Things seem clear in hindsight, but in the middle of the crisis they are very murky.

H: Especially when it is a slow-moving process, rather than a dramatic change: every time it is just a little more and you say, that’s not much, but when you look at the integral of all this, suddenly it is a big change.

A: That is one thing. But even more basically, it is just difficult to see. Let me jump forward from 1933 to 1967. I was in Israel and there was the crisis preceding the Six-Day War. In hindsight it was “clear” that Israel would come out on top of that conflict. But at the time it wasn’t at all clear, not at all. I vividly remember the weeks leading up to the Six-Day War, the crisis in which Nasser closed the Tiran Straits and massed troops on Israel’s border; it wasn’t at all clear that Israel would survive. Not only to me, but to anybody in the general population. Maybe our generals were confident, but I don’t think so, because our government certainly was not confident. Prime Minister Eshkol was very worried. He made a broadcast in which he stuttered and his concern was very evident, very real. Nobody knew what was going to happen and people were very worried, and I, too, was very worried. I had a wife and three children and we all had American papers. So I said to myself, Johnny, don’t make the mistake your father made by staying in Germany. Pick yourself up, get on a plane and leave, and save your skin and that of your family; because there is a very good chance that Israel will be destroyed and the inhabitants of Israel will be wiped out totally, killed, in the next two or three weeks. Pick yourself up and GO.

I made a conscious decision not to do that. I said, I am staying. Herb Scarf was here during the crisis. When he left, about two weeks before the war, we said good-bye, and it was clear to both of us that we might never see each other again.

I am saying all this to illustrate that it is very difficult to judge a situation from the middle of it. When you’re swimming in a big lake, it’s difficult to see the shore, because you are low, you are inside it. One should not blame the German Jews or the European Jews for not leaving Europe in the thirties, because it was difficult to assess the situation.

Anyway, that was our story. We did get away in time, in 1938. We left Germany, and made our way to the United States; we got an immigration visa with some difficulty. In this passage, my parents lost all their money. They had to work extremely hard in the United States to make ends meet, but nevertheless they gave their two children, my brother and myself, a good Jewish and a good secular education. I went to Jewish parochial schools for my elementary education and also for high school. It is called a yeshiva high school, and combines Talmudic and other Jewish studies with secular studies. I have already mentioned my math teacher in high school, Joe Gansler. I also had excellent Talmud and Jewish studies teachers.



Picture 4. Bob Aumann with fiancée Esther Schlesinger, Israel, January 1955

When the State of Israel was created in 1948, I made a determination eventually to come to Israel, but that didn’t actually happen until 1956. In 1954 I met an Israeli girl, Esther Schlesinger, who was visiting the United States. We fell in love, got engaged, and got married. We had five children; the oldest, Shlomo, was killed in Lebanon in the 1982 Peace for Galilee operation. My other children are all happily married. Shlomo’s widow also remarried and she is like a daughter to us. Shlomo had two children before he was killed (actually the second one was born after he was killed). Altogether I now have seventeen grandchildren and one great-grandchild. We have a very good family relationship, do a lot of things together. One of the things we like best is skiing. Every year I go with a different part of the family. Once every four or five years, all thirty of us go together.

H: I can attest from my personal knowledge that the Aumann family is really an outstanding, warm, unusually close-knit family. It is really great to be with them.

A: My wife Esther died six years ago, of cancer, after being ill for about a year and a half. She was an extraordinary person. After elementary school she entered the Bezalel School of Art—she had a great talent for art. At Bezalel she learned silversmithing, and she also drew well. She was wonderful with her hands and also with people. When she was about fifty, she went to work for the Frankforter Center, an old-age day activities center; she ran the crafts workshop, where the elderly worked with their hands: appliqué, knitting, embroidery, carpets, and so on. This enabled Esther to combine her two favorite activities: her artistic ability, and dealing with people and helping them, each one with his individual troubles.

When she went to school, Bezalel was a rather Bohemian place. It probably still is, but at that time it was less fashionable to be Bohemian, more special. Her parents were very much opposed to this. In an Orthodox Jewish family, a young girl going to this place was really unheard of. But Esther had her own will. She was a mild-mannered person, but when she wanted something, you bet your life she got it, both with her parents and with me. She definitely did want to go to that school, and she went.






Picture 5. Bob Aumann with some of his children and grandchildren, Israel, 2001

* * *


H: There is a nice story about your decision to come to Israel in ’56.

A: In ’56 I had just finished two years of a post-doc at Princeton, and was wondering how to continue my life. As mentioned, I had made up my mind to come to Israel eventually. One of the places where I applied was the Hebrew University in Jerusalem. I also applied to other places, because one doesn’t put all one’s eggs in one basket, and got several offers. One was from Bell Telephone Laboratories in Murray Hill; one from Jerusalem; and there were others. Thinking things over very hard and agonizing over this decision, I finally decided to accept the position at Bell Labs, and told them that. We started looking around for a place to live on that very same day.

When we came home in the evening, I knew I had made the wrong decision. I had agonized over it for three weeks or more, but once it had been made, it was clear to me that it was wrong. Before it had been made, nothing was clear. Now, I realized that I wanted to go to Israel immediately, that there is no point in putting it off, no point in trying to earn some money to finance the trip to Israel; we’ll just get stuck in the United States. If we are going to go at all we should go right away. I called up the Bell Labs people and said, “I changed my mind. I said I’ll come, so I’ll come, but you should know that I’m leaving in one year.” They said, “Aumann, you’re off the hook. You don’t have to come if you don’t want to.” I said, “Okay, but now it’s June. I am not leaving until October, when the academic year in Israel starts. Could I work until October at Bell Labs?” They said, “Sure, we’ll be glad to have you.” That was very nice of them.

Those were a really good four months. John McCarthy, a computer scientist, was one of the people I got to know during that period. John Addison, a mathematician, logician, Turing machine person, was also there. One anecdote about Addison that summer is that he had written a paper about Turing machines, and wanted to issue it as a Bell Labs discussion paper. The patent office at Bell Labs gave him trouble. They wanted to know whether this so-called “improvement” on Turing machines could be patented. It took him a while to convince them that a Turing machine is not really a machine.

I am telling this long story to illustrate the difficulties with practical decision-making. The process of practical decision-making is much more complex than our models. In practical decision-making, you don’t know the right decision until after you’ve made it.



H: This, at least to my mind, is a good example of some of your views on experiments and empirics. Do you want to expand on that?

A: Yes. I have grave doubts about what’s called “behavioral economics,” but isn’t really behavioral. The term implies that that is how people actually behave, in contradistinction to what the theory says. But that’s not what behavioral economics is concerned with. On the contrary, most of behavioral economics deals with artificial laboratory setups, at best. At worst, it deals with polls, questionnaires. One type of so-called behavioral economics is when people are asked, what would you do if you were faced with such and such a situation. Then they have to imagine that they are in this situation and they have to give an answer.

H: Your example of Bell Labs versus the Hebrew University shows that you really can give the wrong answer when you are asked such a question.

A: Polls and questionnaires are worse than that; they are at a double remove from reality. In the Bell Labs case, I actually was faced with the problem of which job to take. Even then I took a decision that was not the final one, in spite of the setup being real. In “behavioral economics,” people ask, “What would you do if …”; it is not even a real setup.

Behavioral economists also do experiments with “real” decisions rewarded by monetary payoffs. But even then the monetary payoff is usually very small. More importantly, the decisions that people face are not ones that they usually take, with which they are familiar. The whole setup is artificial. It is not a decision that really affects them and to which they are used.

Let me give one example of this—the famous “probability matching” experiment. A light periodically flashes, three quarters of the time green, one quarter red, at random. The subject has to guess the color beforehand, and gets rewarded if he guesses correctly. This experiment has been repeated hundreds of times; by far the largest number of subjects guess green three quarters of the time and red one quarter of the time.

That is not optimal; you should always guess green. If you get a dollar each time you guess correctly, and you probability-match—three quarters, one quarter—then your expected payoff is five eighths of a dollar. If you guess green all the time you get an average of three quarters of a dollar. Nevertheless, people probability-match. The point is that the setting is artificial: people don’t usually sit in front of flashing lights. They don’t know how to react, so they do what they think is expected of them, which becomes probability-matching.
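
The arithmetic is easy to check. Here is a minimal Python sketch of the two strategies; the setup and payoff numbers are exactly those just described, while the variable names are of course only illustrative.

    # Expected per-round payoff in the probability-matching experiment.
    # The light is green with probability 3/4, red with probability 1/4,
    # and a correct guess pays one dollar.
    p_green = 0.75

    # Probability matching: guess green 3/4 of the time and red 1/4 of
    # the time, independently of the light.
    p_match = p_green * p_green + (1 - p_green) * (1 - p_green)

    # Always guessing green: correct exactly when the light is green.
    p_always_green = p_green

    print(p_match)         # 0.625, i.e., five eighths of a dollar
    print(p_always_green)  # 0.75,  i.e., three quarters of a dollar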

In real situations people don’t act that way. An example is driving to work in the morning. Many people have a choice of routes, and each route has a certain probability of taking less time. It is random, because one can’t know where there will be an accident, a traffic jam. Let’s say that there are two routes; one is quicker three quarters of the time and the other, one quarter of the time. Most people will settle down and take the same route every day, although some days it will be the longer one; and that is the correct solution.

In short, I have serious doubts about behavioral economics as it is practiced. Now, true behavioral economics does in fact exist; it is called empirical economics. This really is behavioral economics. In empirical economics, you go and see how people behave in real life, in situations to which they are used. Things they do every day.

There is a wonderful publication called the NBER Reporter. NBER is the National Bureau of Economic Research, an American organization. They put out a monthly newsletter of four to six pages, in which they give brief summaries of research memoranda published during that month. It is all empirical. There is nothing theoretical there. Sometimes they give theoretical background, but all these works are empirical works that say how people actually behave. It is amazing to see, in these reports, how well the actual behavior of people fits economic theory.

H: Can you give an example of that?

A: One example I remember is where there was a very strong effect of raising the tax on alcohol by a very small amount, let’s say ten percent. Now we are talking about raising the price of a glass of beer by two to two and a half percent. It had a very significant effect on the number of automobile accidents in the States. There is a tremendous amount of price elasticity in people’s behavior. Another example is how increasing the police force affects crime.

H: Let’s be more specific. Take the alcohol example. Why does it contradict the behavioral approach?

A: The conclusion of so-called behavioral economics is that people don’t behave in a rational way, that they don’t respond as expected to economic incentives. Empirical economics shows that people do respond very precisely to economic incentives.

H: If I may summarize your views on this, empirical economics is a good way of finding out what people actually decide. On the other hand, much of what is done in experimental work is artificial and people may not behave there as they do in real life.

A: Yes. Let me expand on that a little bit. The thesis that behavioral economics attacks is that people behave rationally in a conscious way—that they consciously calculate and make an optimal decision based, in each case, on rational calculations. Perhaps behavioral economists are right that that is not so. Because their experiments or polls show that people, when faced with certain kinds of decisions, do not make the rational decision. However, nobody ever claimed that; they are attacking a straw man, a dead horse. What is claimed is that economic agents behave in a way that could be described as derived from rationality considerations; not that they actually are derived that way, that they actually go through a process of optimization each time they make a decision.

* * *


H: This brings us to the matter of “rule rationality,” which you have been promoting consistently at least since the nineties.

A: Yes, it does bring us to rule rationality. The basic premise there is that people grope around. They learn what is a good way of behaving. When they make mistakes they adjust their behavior to avoid those mistakes. It is a learning process, not an explicit optimization procedure. This is actually an old idea. For example, Milton Friedman had this idea that people behave as if they were rational.

Rule rationality means that people evolve rules of behavior by which they usually act, and they optimize these rules. They don’t optimize each single decision. One very good example is the ultimatum game, an experiment performed by Werner Güth and associates in the early eighties.



H: And then replicated in many forms by other people. It is a famous experiment.

A: This experiment was done in various forms and with various parameters. Here is one form. Two subjects are offered one hundred Deutsch Marks, which in the early eighties was equivalent to 150–200 Euros of today—a highly non-negligible amount. They are offered this amount to split in whatever way they choose, as long as they agree how. If they cannot agree, then both get nothing. The subjects do not speak with each other face to face; rather, each one sits at a computer console. One is the offerer and the other, the responder. The offerer offers a split and the responder must say yes or no, once the proposed split appears on his computer screen. If he says yes, that’s the outcome. If he says no, no one gets anything.

This experiment was done separately for many pairs of people. Each pair played only once; they entered and left the building by different entrances and exits, and never got to know each other—remained entirely anonymous. The perfect equilibrium of this game is that the offerer offers a minimum amount that still gives the responder something. Let’s say a split of 99 for the offerer and 1 for the responder.



H: The idea being that the responder would not leave even one Deutsch Mark on the table by saying no.

A: That is what one might expect from rationality considerations. I say “might,” because it is not what game theory necessarily predicts; the game has many other equilibria. But rationality considerations might lead to the 99-1 split.
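
For concreteness, here is a small Python sketch of the backward-induction logic behind the 99-1 prediction, under the textbook assumption (not part of the experiment itself) that the responder accepts any strictly positive amount and that offers are in whole Marks.

    # Backward induction in a discrete ultimatum game over 100 Marks.
    # Assumption (ours): the responder accepts any strictly positive
    # amount, and offers are whole Marks.
    TOTAL = 100

    def responder_accepts(amount):
        # A payoff-maximizing responder prefers something to nothing.
        return amount > 0

    # Among all splits the responder would accept, the offerer keeps
    # the most for himself.
    accepted = [(keep, TOTAL - keep) for keep in range(TOTAL + 1)
                if responder_accepts(TOTAL - keep)]
    print(max(accepted))   # (99, 1): the perfect-equilibrium split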

In fact, what happened was that most of the offers were in the area of 65-35. Those that were considerably less—let’s say 80-20—were actually rejected. Not always, but that was the big picture. In many cases a subject was willing to walk away from as much as twenty Deutsch Marks; and the offerer usually anticipated this and therefore offered him more.

Walking away from twenty Deutsch Marks appears to be a clear violation of rationality. It is a violation—of act rationality. How does theory account for this?

The answer is that people do not maximize on an act-by-act basis. Rather, they develop rules of behavior. One good rule is, do not let other people insult you. Do not let other people kick you in the stomach. Do not be a sucker. If somebody does something like that to you, respond by kicking back. This is a good rule in situations that are not anonymous. If you get a reputation for accepting twenty or ten or one Deutsch Mark when one hundred Deutsch Marks are on the table, you will come out on the short end of many bargaining situations. Therefore, the rule of behavior is to fight back and punish the person who does this to you, and then he won’t do it again.
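
One toy way to see how such a rule feeds back into the offers can be sketched in Python. Everything here (the threshold rule, the normal distribution, the numbers) is an invented illustration, not data from the experiment: responders reject anything below a personal threshold, and an offerer who anticipates the thresholds is pushed far above the minimal offer.

    import random

    # Toy model (our construction): each responder follows the rule
    # "reject any offer below my personal threshold."
    random.seed(0)
    TOTAL = 100
    thresholds = [random.gauss(25, 8) for _ in range(10_000)]

    def expected_keep(offer):
        # Offerer's expected share when the offer is accepted only if
        # it meets the responder's threshold.
        accept_rate = sum(t <= offer for t in thresholds) / len(thresholds)
        return (TOTAL - offer) * accept_rate

    best = max(range(TOTAL + 1), key=expected_keep)
    # With these invented numbers the best offer lands in the high 30s,
    # far above the 1-Mark equilibrium offer.
    print(TOTAL - best, best)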

Of course, this does not apply in the current situation, because it is entirely anonymous. Nobody will be told that you did this. Therefore, there are no reputational effects, and this rule that you’ve developed does not apply. But you have not developed the rule consciously. You have not worked it out. You have learned it because it works in general. Therefore you apply it even in situations where, rationally speaking, it does not apply. It is very important to test economic theories in contexts that are familiar to people, in contexts in which people really engage on a regular basis. Not in artificial contexts. In artificial contexts, other things apply.

Another example of rule rationality is trying to please. It is a good idea to please the people with whom you deal. Even this can be entirely subconscious or unconscious. Most people know that voting in elections is considered a positive thing to do. So if you are asked, “Did you vote?”, there is a very strong tendency to say yes, even if you didn’t vote. Camil Fuchs, one of the important polltakers in Israel, gave a lecture at the Center for Rationality, in which he reported this: in the last election in Israel, people were asked several hours after the polls closed, did you vote? Ninety percent of the people in the sample said yes; in fact, only sixty-eight percent of the electorate voted.



H: It calls into question what we learn from polls.

A: It sheds a tremendous amount of doubt; and it shows something even more basic. Namely, that when people answer questions in a poll, they try to guess what it is that the questioner wants to hear. They give that answer rather than the true answer; and again, this is not something that they do consciously.

That is another example of rule rationality. I am not saying that people do this because there is something in it for them. They do it because they have a general rule: try to please the people to whom you are talking; usually they can help you. If you are unpleasant to them, it usually does you no good. So people subconsciously develop tools to be pleasant, and being pleasant means giving the answer that is expected.



H: What you are saying is that one should evaluate actions not on a decision-by-decision basis, but over the long run. Also, one has to take into account that we cannot make precise computations and evaluate every decision. We need to develop rules that are relatively simple and applicable in many situations. Once we take into account this cost of computation, it is clear that a rule that is relatively simple, but gives a good answer in many situations, is better than a rule that requires you to go to extreme lengths to find the right answer for every decision.

A: That’s the reason for it. You are giving the fundamental reason why people develop rules rather than optimize each act. It is simply too expensive.

H: Kahneman and Tversky say that there are a lot of heuristics that people use, and biases, and that these biases are not random, but systematic. What you are saying is, yes, systematic biases occur because if you look at the level of the rule, rules indeed are systematic; they lead to biases since they are not optimal for each individual act. Systematic biases fit rule rationality very well.

A: That’s a good way of putting it. If you look at those systematic biases carefully you may well find that they are rule optimal. In most situations that people encounter, those systematic biases are a short way of doing the right thing.

* * *


H: This connects to another area in which you are involved quite a lot lately, namely, biology and evolutionary ecology. Do you want to say something about that?

A: The connection of evolution to game theory has been one of the most profound developments of the last thirty or forty years. It is one of the major developments since the big economic contributions of the sixties, which were mainly in cooperative game theory. It actually predates the explosion of non-cooperative game theory of the eighties and nineties.

It turns out that there is a very, very strong connection between population equilibrium and Nash equilibrium—strategic equilibrium—in games. The same mathematical formulae appear in both contexts, but they have totally different interpretations. In the strategic, game-theoretic interpretation there are players and strategies, pure and mixed. In the two-player case, for every pair of strategies, each player has a payoff, and there is a strategic equilibrium. In the evolutionary context, the players are replaced by populations, the strategies by genes, the probabilities in the mixed strategies by population proportions, and the payoffs by what is called fitness, which is a propensity to have offspring. You could have a population of flowers and a population of bees. There could be a gene for having a long nectar tube in the flowers and a gene for a long proboscis in the bees. Then, when those two meet, it is good for the flower and good for the bee. The bee is able to drink the nectar and so flits from flower to flower and pollinates them.

What does that mean, “good”? It means that both the flowers and the bees will have more offspring. The situation is in equilibrium if the proportions of genes of each kind in both populations are maintained. This turns out to be formally the same as strategic equilibrium in the corresponding game.
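
The formal correspondence can be made concrete with a short Python sketch of the standard replicator dynamic for a symmetric two-strategy game. The payoff numbers below are invented for illustration: population shares play the role of mixed-strategy probabilities, and a stable rest point of the dynamic, where all surviving strategies earn equal fitness, is a Nash equilibrium of the corresponding game.

    import numpy as np

    # Symmetric 2x2 game: A[i, j] is the fitness (propensity to have
    # offspring) of an individual playing strategy i against strategy j.
    # The numbers are invented for illustration.
    A = np.array([[1.0, 3.0],
                  [2.0, 1.5]])

    def replicator_step(x, dt=0.01):
        # Strategies whose fitness exceeds the population average grow
        # in proportion to their current share.
        fitness = A @ x          # fitness of each pure strategy
        average = x @ fitness    # mean fitness in the population
        return x + dt * x * (fitness - average)

    x = np.array([0.9, 0.1])     # initial population shares
    for _ in range(20_000):
        x = replicator_step(x)

    print(x)       # converges to (0.6, 0.4) ...
    print(A @ x)   # ... where both strategies earn equal fitness: the
                   # mixed Nash equilibrium of the corresponding game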

This development has had a tremendous influence on game theory, on biology, and on economic theory. It’s a way of thinking of games that transcends biology; it’s a way of thinking of what people do as traits of behavior, which survive, or get wiped out, just like in biology. It’s not a question of conscious choice. Whereas the usual, older interpretation of Nash equilibrium is one of conscious choice, conscious maximization. This ties in with what we were saying before, about rule rationality being a better interpretation of game-theoretic concepts than act rationality.

* * *

