An interview with




H: There is the famous “Folk Theorem.” In the seventies you named it, in your survey of repeated games [42]. The name has stuck. Incidentally, the term “folk theorem” is nowadays also used in other areas for classic results: the folk theorem of evolution, of computing, and so on.

A: The original Folk Theorem is quite similar to my ’59 paper, but a good deal simpler, less deep. As you said, that became quite prominent in the later literature. I called it the Folk Theorem because its authorship is not clear, like folk music, folk songs. It was in the air in the late fifties and early sixties.
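In its standard modern form (a textbook statement, not Aumann's own wording; the notation here is assumed), the Folk Theorem for undiscounted repeated games can be written as:

```latex
% Standard statement of the Folk Theorem (textbook form; notation assumed)
Let $G$ be a finite $n$-player game with payoff function $g$, and let
\[
  v_i \;=\; \min_{s_{-i}} \max_{s_i} \, g_i(s_i, s_{-i})
\]
be player $i$'s minmax value. Write $F = \operatorname{conv}\{\, g(s) : s \in S \,\}$
for the set of feasible payoff vectors. Then the Nash equilibrium payoffs of the
infinitely repeated game $G^{\infty}$ (with limit-of-means payoffs) are exactly
\[
  \{\, x \in F : x_i \ge v_i \ \text{for all } i \,\}.
\]
```

That is, every feasible, individually rational payoff vector is sustainable in equilibrium by the threat of punishing deviators down to their minmax value.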

H: Yours was the first full formal statement and proof of something like this. Even Luce and Raiffa, in their very influential ’57 book, Games and Decisions, don’t have the Folk Theorem.

A: The first people explicitly to consider repeated non-zero-sum games of the kind treated in my ’59 paper were Luce and Raiffa. But as you say, they didn’t have the Folk Theorem. Shubik’s book Strategy and Market Structure, published in ’59, has a special case of the Folk Theorem, with a proof that has the germ of the general proof.

At that time people did not necessarily publish everything they knew—in fact, they published only a small proportion of what they knew, only really deep results or something really interesting and nontrivial in the mathematical sense of the word—which is not a good sense. Some of the things that are most important are things that a mathematician would consider trivial.



H: I remember once in class that you got stuck in the middle of a proof. You went out, and then came back, thinking deeply. Then you went out again. Finally you came back some twenty minutes later and said, “Oh, it’s trivial.”

A: Yes, I got stuck and started thinking; the students were quiet at first, but got noisier and noisier, and I couldn’t think. I went out and paced the corridors and then hit on the answer. I came back and said, this is trivial; the students burst into laughter. So “trivial” is a bad term.

Take something like the Cantor diagonal method. Nowadays it would be considered trivial, and sometimes it really is trivial. But it is extremely important; for example, Gödel’s famous incompleteness theorem is based on it.



H: “Trivial to explain” and “trivial to obtain” are different. Some of the confusion lies there. Something may be very simple to explain once you get it. On the other hand, thinking about it and getting to it may be very deep.

A: Yes, and hitting on the right formulation may be very important. The diagonal method illustrates that even within pure mathematics the trivial may be important. But certainly outside of it, there are interesting observations that are mathematically trivial—like the Folk Theorem. I knew about the Folk Theorem in the late fifties, but was too young to recognize its importance. I wanted something deeper, and that is what I did in fact publish. That’s my ’59 paper [4]. It’s a nice paper—my first published paper in game theory proper. But the Folk Theorem, although much easier, is more important. So it’s important for a person to realize what’s important. At that time I didn’t have the maturity for this.

Quite possibly, other people knew about it. People were thinking about repeated games, dynamic games, long-term interaction. There are Shapley’s stochastic games, Everett’s recursive games, the work of Gillette, and so on. I wasn’t the only person thinking about repeated games. Anybody who thinks a little about repeated games, especially if he is a mathematician, will very soon hit on the Folk Theorem. It is not deep.



H: That’s ’59; let’s move forward.

A: In the early sixties Morgenstern and Kuhn founded a consulting firm called Mathematica, based in Princeton, not to be confused with the software that goes by that name today. In ’64 they started working with the United States Arms Control and Disarmament Agency. Mike Maschler worked with them on the first project, which had to do with inspection; obviously there is a game between an inspector and an inspectee, who may want to hide what he is doing. Mike made an important contribution to that. There were other people working on that also, including Frank Anscombe. This started in ’64, and the second project, which was larger, started in ’65. It had to do with the Geneva disarmament negotiations, a series of negotiations with the Soviet Union, on arms control and disarmament. The people on this project included Kuhn, Gerard Debreu, Herb Scarf, Reinhard Selten, John Harsanyi, Jim Mayberry, Maschler, Dick Stearns (who came in a little later), and me. What struck Maschler and me was that these negotiations were taking place again and again; a good way of modeling this is a repeated game. The only thing that distinguished it from the theory of the late fifties that we discussed before is that these were repeated games of incomplete information. We did not know how many weapons the Russians held, and the Russians did not know how many weapons we held. What we—the United States—proposed to put into the agreements might influence what the Russians thought or knew that we had, and this would affect what they would do in later rounds.

H: What you do reveals something about your private information. For example, taking an action that is optimal in the short run may reveal to the other side exactly what your situation is, and then in the long run you may be worse off.

A: Right. This informational aspect is absent from the previous work, where everything was open and above board, and the issues are how one’s behavior affects future interaction. Here the question is how one’s behavior affects the other player’s knowledge. So Maschler and I, and later Stearns, developed a theory of repeated games of incomplete information. This theory was set forth in a series of research reports between ’66 and ’68, which for many years were unavailable.

H: Except to the aficionados, who were passing bootlegged copies from mimeograph machines. They were extremely hard to find.

A: Eventually they were published by MIT Press [v] in ’95, together with extensive postscripts describing what has happened since the late sixties—a tremendous amount of work. The mathematically deepest work started in the early seventies in Belgium, at CORE, and in Israel, mostly by my students and then by their students. Later it spread to France, Russia, and elsewhere. The area is still active.

H: What is the big insight?

A: It is always misleading to sum it up in a few words, but here goes: in the long run, you cannot use information without revealing it; you can use information only to the extent that you are willing to reveal it. A player with private information must choose between not making use of that information—and then he doesn’t have to reveal it—or making use of it, and then taking the consequences of the other side finding it out. That’s the big picture.

H: In addition, in a non-zero-sum situation, you may want to pass information to the other side; it may be mutually advantageous to reveal your information. The question is how to do it so that you can be trusted, or in technical terms, in a way that is incentive-compatible.

A: The bottom line remains similar. In that case you can use the information, not only if you are willing to reveal it, but also if you actually want to reveal it. It may actually have positive value to reveal the information. Then you use it and reveal it.

* * *


H: You mentioned something else and I want to pick up on that: the Milnor–Shapley paper on oceanic games. That led you to another major work, “Markets with a Continuum of Traders” [16]: modeling perfect competition by a continuum.

A: As I already told you, in ’60–’61, the Milnor–Shapley paper “Oceanic Games” caught my fancy. It treats games with an ocean—nowadays we call it a continuum—of small players, and a small number of large players, whom they called atoms. Then in the fall of ’61, at the conference at which Kissinger and Lloyd Shapley were present, Herb Scarf gave a talk about large markets. He had a countable infinity of players. Before that, in ’59, Martin Shubik had published a paper called “Edgeworth Market Games,” in which he made a connection between the core of a large market game and the competitive equilibrium. Scarf’s model somehow wasn’t very satisfactory, and Herb realized that himself; afterwards, he and Debreu proved a much more satisfactory version, in their International Economic Review 1963 paper. The bottom line was that, under certain assumptions, the core of a large economy is close to the competitive solution, the solution to which one is led from the law of supply and demand. I heard Scarf’s talk, and, as I said, the formulation was not very satisfactory. I put it together with the result of Milnor and Shapley about oceanic games, and realized that that has to be the right way of treating this situation: a continuum, not the countable infinity that Scarf was using. It took a while longer to put all this together, but eventually I did get a very general theorem with a continuum of traders. It has very few assumptions, and it is not a limit result. It simply says that the core of a large market is the same as the set of competitive outcomes. This was published in Econometrica in 1964 [16].
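The equivalence theorem Aumann describes here can be stated schematically (a standard formulation; the notation is assumed, not taken from the interview):

```latex
% Core equivalence in a continuum economy (schematic statement)
Let $(T, \mathcal{T}, \mu)$ be a non-atomic measure space of traders, each trader
$t$ having an endowment $e(t)$ and a preference relation $\succ_t$ on the
commodity space $\mathbb{R}^{\ell}_{+}$. Then, under mild assumptions
(in particular, no convexity of preferences is required),
\[
  \text{the core of the economy} \;=\; \text{the set of Walrasian (competitive) allocations}.
\]
This is an exact identity, not a limit statement about large finite economies.
```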

H: Indeed, the introduction of the continuum idea to economic theory has proved indispensable to the advancement of the discipline. In the same way as in most of the natural sciences, it enables a precise and rigorous analysis, which otherwise would have been very hard or even impossible.

A: The continuum is an approximation to the “true” situation, in which the number of traders is large but finite. The purpose of the continuous approximation is to make available the powerful and elegant methods of the branch of mathematics called “analysis,” in a situation where treatment by finite methods would be much more difficult or even hopeless—think of trying to do fluid mechanics by solving n-body problems for large n.

H: The continuum is the best way to start understanding what’s going on. Once you have that, you can do approximations and get limit results.

A: Yes, these approximations by finite markets became a hot topic in the late sixties and early seventies. The ’64 paper was followed by the Econometrica ’66 paper [23] on existence of competitive equilibria in continuum markets; in ’75 came the paper on values of such markets, also in Econometrica [32]. Then there came later papers using a continuum, by me with or without coauthors [28, 37, 38, 39, 41, 44, 52], by Werner Hildenbrand and his school, and by many, many others.

H: Before the ’75 paper, you developed, together with Shapley, the theory of values of non-atomic games [i]; this generated a huge literature. Many of your students worked on that. What’s a non-atomic game, by the way? There is a story about a talk on “Values of non-atomic games,” where a secretary thought a word was missing in the title, so it became “Values of non-atomic war games.” So, what are non-atomic games?

A: It has nothing to do with war and disarmament. On the contrary, in war you usually have two sides. Non-atomic means the exact opposite, where you have a continuum of sides, a very large number of players.

H: None of which are atoms.

A: Exactly, in the sense that I was explaining before. It is like Milnor and Shapley’s oceanic games, except that in the oceanic games there were atoms—“large” players—and in non-atomic games there are no large players at all. There are only small players. But unlike in Milnor–Shapley, the small players may be of different kinds; the ocean is not homogeneous. The basic property is that no player by himself makes any significant contribution. An example of a non-atomic game is a large economy, consisting of small consumers and small businesses only, without large corporations or government interference. Another example is an election, modeled as a situation where no individual can affect the outcome. Even the 2000 U.S. presidential election is a non-atomic game—no single voter, even in Florida, could have affected the outcome. (The people who did affect the outcome were the Supreme Court judges.) In a non-atomic game, large coalitions can affect the outcome, but individual players cannot.

Picture 3. Werner Hildenbrand with Bob Aumann, Oberwolfach, 1982

H: And values?

A: The game theory concept of value is an a priori evaluation of what a player, or group of players, can expect to get out of the game. Lloyd Shapley’s 1953 formalization is the most prominent. Sometimes, as in voting situations, value is presented as an index of power (Shapley and Shubik 1954). I have already mentioned the 1975 result about values of large economies being the same as the competitive outcomes of a market [32]. This result had several precursors, the first of which was a ’64 RAND Memorandum of Shapley.

H: Values of non-atomic games and their application in economic models led to a huge literature.

* * *


H: Another one of your well-known contributions is the concept of correlated equilibrium (J. Math. Econ. ’74 [29]). How did it come about?

A: Correlated equilibria are like mixed Nash equilibria, except that the players’ randomizations need not be independent. Frankly, I’m not really sure how this business began. It’s probably related to repeated games, and, indirectly, to Harsanyi and Selten’s equilibrium selection. These ideas were floating around in the late sixties, especially at the very intense meetings of the Mathematica ACDA team. In the Battle of the Sexes, for example, if you’re going to select one equilibrium, it has to be the mixed one, which is worse for both players than either of the two pure ones. So you say, hey, let’s toss a coin to decide on one of the two pure equilibria. Once the coin is tossed, it’s to the advantage of both players to adhere to the chosen equilibrium; the whole process, including the coin toss, is in equilibrium. This equilibrium is a lot better than the unique mixed strategy equilibrium, because it guarantees that the boy and the girl will definitely meet—either at the boxing match or at the ballet—whereas with the mixed strategy equilibrium, they may well go to different places.

With repeated games, one gets a similar result by alternating: one evening boxing, the next ballet. Of course, that way one only gets to the convex hull of the Nash equilibria.

This is pretty straightforward. The next step is less so. It is to go to three-person games, where two of the three players gang up on the third—correlate “against” him, so to speak [29, Examples 2.5 and 2.6]. This leads outside the convex hull of Nash equilibria. In writing this formally, I realized that the same definitions apply also to two-person games; also there, they may lead outside the convex hull of the Nash equilibria.
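The coin-toss equilibrium in the Battle of the Sexes can be checked numerically. The payoff numbers below are illustrative assumptions (the interview names none); the sketch compares expected payoffs under the coin-toss correlated equilibrium with those under the mixed Nash equilibrium:

```python
# Battle of the Sexes with illustrative payoffs: both prefer being together;
# the boy prefers boxing (B), the girl prefers ballet (L).
# Payoff dicts are keyed by (boy's action, girl's action).
boy = {("B", "B"): 2, ("B", "L"): 0, ("L", "B"): 0, ("L", "L"): 1}
girl = {("B", "B"): 1, ("B", "L"): 0, ("L", "B"): 0, ("L", "L"): 2}

# Coin-toss correlated equilibrium: play (B,B) or (L,L) with probability 1/2 each.
coin = {("B", "B"): 0.5, ("L", "L"): 0.5}

# Mixed Nash equilibrium of this game: the boy plays B with probability 2/3
# (making the girl indifferent), the girl plays B with probability 1/3
# (making the boy indifferent); the randomizations are independent.
mixed = {(a, b): (2/3 if a == "B" else 1/3) * (1/3 if b == "B" else 2/3)
         for a in "BL" for b in "BL"}

def expected(payoff, dist):
    """Expected payoff under a probability distribution over action pairs."""
    return sum(p * payoff[pair] for pair, p in dist.items())

print(expected(boy, coin), expected(girl, coin))    # 1.5 each: they always meet
print(expected(boy, mixed), expected(girl, mixed))  # 2/3 each: they often miss
```

With these (assumed) payoffs, the coin toss gives each player 1.5 in expectation, strictly better than the 2/3 each gets in the mixed equilibrium, exactly because the coin guarantees they meet.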

H: So, correlated equilibria arise when the players get signals that need not be independent. Talking about signals and information—how about common knowledge and the “Agreeing to Disagree” paper?

A: The original paper on correlated equilibrium also discussed “subjective equilibrium,” where different players have different probabilities for the same event. Differences in probabilities can arise from differences in information; but then, if a player knows that another player’s probability is different from his, he might wish to revise his own probability. It’s not clear whether this process of revision necessarily leads to the same probabilities. This question was raised—and left open—in [29, Section 9j]. Indeed, even the formulation of the question was murky.

I discussed this with Arrow and Frank Hahn during an IMSSS summer in the early seventies. I remember the moment vividly. We were sitting in Frank Hahn’s small office on the fourth floor of Stanford’s Encina Hall, where the economics department was located. I was trying to get my head around the problem—not its solution, but simply its formulation. Discussing it with them—describing the issue to them—somehow sharpened and clarified it. I went back to my office, sat down, and continued thinking. Suddenly the whole thing came to me in a flash—the definition of common knowledge, the characterization in terms of information partitions, and the agreement theorem: roughly, that if the probabilities of two people for an event are commonly known by both, then they must be equal. It took a couple of days more to get a coherent proof and to write it down. The proof seemed quite straightforward. The whole thing—definition, formulation, proof—came to less than a page.
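The agreement theorem admits a compact formal statement (a standard formulation; the notation is assumed here, not quoted from the interview):

```latex
% Aumann's agreement theorem (standard statement)
Let $(\Omega, p)$ be a finite probability space with common prior $p$, and let
$\mathcal{P}_1, \mathcal{P}_2$ be the information partitions of players 1 and 2.
An event $E$ is \emph{common knowledge} at $\omega$ if the member of the meet
$\mathcal{P}_1 \wedge \mathcal{P}_2$ (the finest common coarsening of the two
partitions) that contains $\omega$ is contained in $E$. If at some $\omega$ it is
common knowledge that player 1's posterior probability of an event $A$ is $q_1$
and that player 2's posterior is $q_2$, then $q_1 = q_2$.
```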

Indeed, it looked so straightforward that it seemed hardly worth publishing. I went back and told Arrow and Hahn about it. At first Arrow wouldn’t believe it, but became convinced when he saw the proof. I expressed to him my doubts about publication. He strongly urged me to publish it—so I did [34]. It became one of my two most widely cited papers.

Six or seven years later I learned that the philosopher David Lewis had defined the concept of common knowledge already in 1969, and, surprisingly, had used the same name for it. Of course, there is no question that Lewis has priority. He did not, however, have the agreement theorem.



H: The agreement theorem is surprising—and important. But your simple and elegant formalization of common knowledge is even more important. It pioneered the area known as “interactive epistemology”: knowledge about others’ knowledge. It generated a huge literature—in game theory, economics, and beyond: computer science, philosophy, logic. It enabled the rigorous analysis of very deep and complex issues, such as what is rationality, and what is needed for equilibrium. Interestingly, it led you in particular back to correlated equilibrium.

A: Yes. That’s paper [53]. The idea of common knowledge really enables the “right” formulation of correlated equilibrium. It’s not some kind of esoteric extension of Nash equilibrium. Rather, it says that if people simply respond optimally to their information—and this is commonly known—then you get correlated equilibrium. The “equilibrium” part of this is not the point. Correlated equilibrium is nothing more than just common knowledge of rationality, together with common priors.

* * *


H: Let’s talk now about the Hebrew University. You came to the Hebrew University in ’56 and have been there ever since.

A: I’ll tell you something. Mathematical game theory is a branch of applied mathematics. When I was a student, applied mathematics was looked down upon by many pure mathematicians. They stuck up their noses and looked down upon it.

H: At that time most applications were to physics.

A: Even that—hydrodynamics and that kind of thing—was looked down upon. That is not the case anymore, and hasn’t been for quite a while; but in the late fifties when I came to the Hebrew University that was still the vogue in the world of mathematics. At the Hebrew University I did not experience any kind of inferiority in that respect, nor in other respects either. Game theory was accepted as something worthwhile and important. In fact, Aryeh Dvoretzky, who was instrumental in bringing me here, and Abraham Fränkel (of Zermelo–Fränkel set theory), who was chair of the mathematics department, certainly appreciated this subject. It was one of the reasons I was brought here. Dvoretzky himself had done some work in game theory.

H: Let’s make a big jump. In 1991, the Center for Rationality was established at the Hebrew University.

A: I don’t know whether it was the brainchild of Yoram Ben-Porath or Menahem Yaari or both together. Anyway, Ben-Porath, who was the rector of the university, asked Yaari, Itamar Pitowsky, Motty Perry, and me to make a proposal for establishing a center for rationality. It wasn’t even clear what the center was to be called. Something having to do with game theory, with economics, with philosophy. We met many times. Eventually what came out was the Center for Rationality, which you, Sergiu, directed for its first eight critical years; it was you who really got it going and gave it its oomph. The Center is really unique in the whole world in that it brings together very many disciplines. Throughout the world there are several research centers in areas connected with game theory. Usually they are associated with departments of economics: the Cowles Foundation at Yale, the Center for Operations Research and Econometrics in Louvain, Belgium, the late Institute for Mathematical Studies in the Social Sciences at Stanford. The Center for Rationality at the Hebrew University is quite different, in that it is much broader. The basic idea is “rationality”: behavior that advances one’s own interests. This appears in many different contexts, represented by many academic disciplines. The Center has members from mathematics, economics, computer science, evolutionary biology, general philosophy, philosophy of science, psychology, law, statistics, the business school, and education. We should have a member from political science, but we don’t; that’s a hole in the program. We should have one from medicine too, because medicine is a field in which rational utility-maximizing behavior is very important, and not at all easy. But at this time we don’t have one. There is nothing in the world even approaching the breadth of coverage of the Center for Rationality.

It is broad but nevertheless focused. There would seem to be a contradiction between breadth and focus, but our Center has both—breadth and focus. The breadth is in the number and range of different disciplines that are represented at the Center. The focus is, in all these disciplines, on rational, self-interested behavior—or the lack of it. We take all these different disciplines, and we look at a certain segment of each one, and at how these various segments from this great number of disciplines fit together.



H: Can you give a few examples for the readers of this journal? They may be surprised to hear about some of these connections.

A: I’ll try; let’s go through some applications. In computer science we have distributed computing, in which there are many different processors. The problem is to coordinate the work of these processors, which may number in the hundreds of thousands, each doing its own work.

H: That is, how processors that work in a decentralized way reach a coordinated goal.

A: Exactly. Another application is protecting computers against hackers who are trying to break into them. This is a very grim game, just like war is a grim game, and the stakes are high; but it is a game. That’s another kind of interaction between computers and game theory.

Still another kind comes from computers that solve games, play games, and design games—like auctions—particularly on the Web. These are applications of computers to games, whereas before, we were discussing applications of games to computers.

Biology is another example where one might think that games don’t seem particularly relevant. But they are! There is a book by Richard Dawkins called The Selfish Gene. This book discusses how evolution makes organisms operate as if they were promoting their self-interest, acting rationally. What drives this is the survival of the fittest. If the genes that organisms have developed in the course of evolution are not optimal, are not doing as well as other genes, then they will not survive. There is a tremendous range of applications of game-theoretic and rationalistic reasoning in evolutionary biology.

Economics is of course the main area of application of game theory. The book by von Neumann and Morgenstern that started game theory rolling is called The Theory of Games and Economic Behavior. In economics people are assumed to act in order to maximize their utility; at least, until Tversky and Kahneman came along and said that people do not necessarily act in their self-interest. That is one way in which psychology is represented in the Center for Rationality: the study of irrationality. But the subject is still rationality. We’ll discuss Kahneman and Tversky and the new school of “behavioral economics” later. Actually, using the term “behavioral economics” is already biasing the issue. The question is whether behavior really is that way or not.

We have mentioned computer science, psychology, economics, politics. There is much political application of game theory in international relations, which we already discussed in connection with Kissinger. There are also national politics, like various electoral systems. For example, the State of Israel is struggling with that. Also, I just came back from Paris, where Michel Balinski told me about the problems of elections in American politics. There is apparently a tremendous amount of gerrymandering in American politics, and it’s becoming a really big problem. So it is not only in Israel that we are struggling with the problem of how to conduct elections.

Another aspect is forming a government coalition: if it is too small—a minimal winning coalition—it will be unstable; if too large, the prime minister will have too little influence. What is the right balance?

Law: more and more, we have law and economics, law and game theory. There are studies of how laws affect the behavior of people, the behavior of criminals, the behavior of the police. All these things are about self-interested, rational behavior.

* * *

