NASSIM TALEB & DANIEL KAHNEMAN

February 5, 2013

LIVE from the New York Public Library

www.nypl.org/live

Celeste Bartos Forum
PAUL HOLDENGRÄBER: Good evening. I would like to thank Flash Rosenberg, our artist in residence, who, together with our videographer, Jared Feldman, provides an afterlife for our LIVE from the New York Public Library events. And it is the afterlife of these conversations which intrigues me dearly, as you can see from the note we have included in your program.
My name is Paul Holdengräber, and I’m the Director of Public Programs here at the New York Public Library, known as LIVE from the New York Public Library. You have all heard me say this many times, for those of you who have come before. My goal here at the Library is simply to make the lions roar, to make a heavy institution dance, and if possible to make it levitate. I’d like to say a few words about the upcoming season. We will have Adam Phillips coming. George Saunders with Dick Cavett. Anne Carson. Supreme Court Justice Sandra Day O’Connor, William Gibson, Nathaniel Rich, Junot Díaz, Daniel Dennett, David Chang, and many, many others. We have just added two visual artists to this spring season, the ever-cool Ed Ruscha on March 6 and in May we will have Matthew Barney. I encourage you to join our e-mail list so that you know what is my fancy at the latest possible moment.
Daniel Kahneman and Nassim Taleb will be signing some books after their conversation. As usual I wish to thank our independent bookseller, 192 Books. Preceding the signing, there will be time, if the mood permits it, to take some questions, brief questions, only questions, only good questions. (laughter) No pressure. Now, I’ve said this also before. A question can be asked usually in about fifty-two seconds.
I have wanted to invite Nassim Taleb back to the New York Public Library for a LIVE program, and his most enticing new book, Antifragile: Things That Gain from Disorder, seemed like the perfect occasion. Nassim and I have contemplated many, many times having a conversation onstage here about things French, particularly André Malraux, whom we both like and feel has not gotten his share, and also we might want to talk about Michel de Montaigne or even Pascal. We may someday, Nassim, indulge ourselves, I promise you. I promise to try. But for now.
When I asked Nassim who his utmost desired interlocutor would be tonight, not his interviewer, but someone who would be in true conversation with him, he said without hesitation, as if I knew who he meant, “Danny!” (laughter) He meant Daniel Kahneman, the 2002 winner of the Nobel Prize in Economics, professor emeritus of psychology at Princeton University, and the author of Thinking, Fast and Slow. I’m so delighted that Daniel Kahneman agreed to this conversation, where I hope these two gentlemen will goad each other sufficiently.
Now, over the last five, six years I’ve asked my various guests to provide me a biography in seven words rather than reading their accomplishments, which are many in each case, to give me seven words which might define them or not at all. A haiku of sorts, or if one wants to be very modern, a tweet. Nassim Taleb provided me with this, which I think is tremendously enlightening: “A convexity metaprobability and heuristics approach to uncertainty,” “which is best explained,” and then he provided me a three-line link and I clicked on the link, hoping for some enlightenment, maybe some of you don’t need it, but upon opening the link I came upon a 395-page document, (laughter) which after the seven words mentioned, which were part of it, has the following, which I hope will help you: “Metaprobability consisting of published and unpublished academic-style papers, short technical notes, and comments.” I did not print the close to 400-page document, because I was hoping for some clarification tonight.
Daniel Kahneman sent me his seven words, but from the outset negotiated with me and asked me if I would accept five. (laughter) He had I’m sure more to offer, but he offered me five. That was a relief after the 395-page document. Here are the five words: “Endlessly amused by people’s minds.” Ladies and gentlemen, please welcome warmly to this stage Daniel Kahneman and Nassim Taleb.
(applause)
NASSIM TALEB: In the book Antifragile, I discuss something called the Lindy Effect: time is a great tester of fragility, with the following law. When you see an old gentleman next to a young man, you know, based on actuarial tables, that the young man will outlive the old gentleman, but for anything nonperishable—like an idea, a book, a movie—it’s the exact opposite. If you see an old book—we just saw the Gutenberg, a 500-year-old book, quite impressive. If you see a book that has been in print, say, 2,000 years, odds are it will be in print for another 2,000 years. Been in print for ten years, another ten years. If an idea has been in fashion for fifty years, then another fifty years. That is why we’re using this glass, this three-thousand-year-old technology.
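[Editor’s note: the rule Taleb states here has a simple mathematical form. If a nonperishable item’s survival follows a power-law (Pareto) tail, its expected remaining life is proportional to its current age. A minimal sketch, with the tail exponent alpha = 2 as an illustrative assumption, not a figure from the talk:]

```python
# Lindy Effect sketch: if survival follows a Pareto tail S(t) = (t0/t)^alpha,
# then an item already aged `age` has expected remaining life age/(alpha - 1).
# With alpha = 2 the expectation equals the current age, matching Taleb's
# "in print 2,000 years -> another 2,000 years" heuristic.
def expected_remaining_life(age: float, alpha: float = 2.0) -> float:
    if alpha <= 1:
        raise ValueError("expected remaining life is infinite for alpha <= 1")
    return age / (alpha - 1)

print(expected_remaining_life(2000))  # 2000.0: another two millennia
print(expected_remaining_life(10))    # 10.0: another decade
```

The old gentleman works the other way: for perishable things, mortality rises with age, so remaining life shrinks rather than grows.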
Danny’s ideas are forty years old. People think that Thinking, Fast and Slow is a new book, but effectively, before you published your book, I was discussing it here. As exemplified, you can predict the life of an idea, or the importance of an idea, based on its age, so therefore I should be talking about your book, not mine, given it is forty years old. This is a great way to show how time can detect fragility and take care of it. This is the introduction, about why you should be the one running the show and your ideas should be the ones—
DANIEL KAHNEMAN: Well, I’m not going to run the show because I think the focus of this conversation should be your recent book, I mean, mine is already old, I mean, it was out I think in October 2011. Yours is a lot newer, so let’s talk about what Antifragile is.
NASSIM TALEB: Okay, let me introduce the idea of antifragility with the following. I was an option trader before I met Danny. We met in 2002, 2003, and of course it changed a lot of things for me. I decided then to become a scholar, all right, right after meeting him, immediately, all right, I’m going to be a full-time scholar. It took me a couple of years to become a scholar and of course I went and I had a book that I had to revise.
Anyway, before that I was an option trader. And option traders aren’t very sophisticated people. They know two things: volatility and alcohol, all right? (laughter) So they classify things into things that like volatility and things that hate volatility. There are packages that like volatility and packages that hate volatility; they call it long gamma or short gamma. That was my specialty. When I entered the scholarly world I realized that there was no name for things that benefit from volatility. “Robust” isn’t it, you see. Things that are robust, things that are adaptable, things that are agile, resilient, all these things are not the opposite of fragile, and they’re not the equivalent of things that gain from volatility. So I figured out that what is short volatility is fragile: this glass doesn’t like volatility, because if there is an earthquake in New York—you never know with, you know, Paul—there may be earthquakes here, but if there is an earthquake, this is not going to gain from the earthquake, you see? So it does not like disorder, it does not like volatility, it doesn’t like these things.
So I figured out that the fragile is a category of object and the opposite of fragile is not robust. The opposite of fragile has to be a different category. If I’m sending a package to Siberia, “fragile,” you know, you translate into Siberian or whatever Russian they use, means “handle with care.” The opposite wouldn’t be, you know, a robust package, on which you write nothing; the opposite would be something on which you write “please mishandle.” There’s no name for that category, so I called it antifragile: what benefits from volatility has antifragility. And I realized that somehow the people who would interview me wouldn’t get it. When we shoot for something, we shoot for resilience. That’s not it. If you aim for resilience, you’re not going to do very well. So, you know, I decided to classify, and this book is about the classification of things into three categories: fragile, robust, antifragile. So Danny, there you go.
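[Editor’s note: the option trader’s long-gamma/short-gamma distinction Taleb invokes can be shown numerically. By Jensen’s inequality, a convex payoff gains on average when you add volatility around a fixed mean. A minimal sketch with a hypothetical call-style payoff; the strike of 100 and the two-point price spread are illustrative assumptions:]

```python
# A convex (long-gamma) payoff benefits from volatility: holding the average
# price fixed at 100, spreading outcomes to 90 and 110 raises the expected payoff.
def call_payoff(price: float, strike: float = 100.0) -> float:
    return max(price - strike, 0.0)

no_vol = call_payoff(100.0)                                # price never moves
with_vol = 0.5 * (call_payoff(90.0) + call_payoff(110.0))  # same mean, more spread

print(no_vol, with_vol)  # 0.0 5.0 -- volatility helped the convex side
```

The same arithmetic run on a short-gamma position (the negative of this payoff) shows the mirror image: volatility can only hurt it, which is Taleb’s “fragile” category.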
DANIEL KAHNEMAN: Well, I mean, you’re almost forcing me to define what antifragile is because you haven’t done it.
NASSIM TALEB: It gains from variability, disorder, stress, what else, stressors, harm, things that benefit from—
DANIEL KAHNEMAN: In an early chapter of the book you have a very long table with three columns: Fragile, Robust, and Antifragile, and you can pick any of the rows in that table and elaborate on it. For example, tourists and flâneur, one of your favorite words. Why are tourists—and there is something quite deep in that discussion.
NASSIM TALEB: I was an option trader. With options you like optionality, you see, you like uncertainty, you benefit from uncertainty, you like some disorder. When you’re a tourist, you’re on a track. If the bus is late, you’re in trouble. If you’re an adventurer, you benefit, you’re opportunistically taking advantage of uncertainty, so therefore you’re in the category of antifragile, and if you’re robust you don’t care. So entrepreneurs are in the category of antifragile, and people who follow, you know, a very rationalistic thing, who are put on a track, following a certain code, are fragile, because if something breaks you’re in trouble. Nothing good can happen. Things can go wrong, but they can’t get better.
So the fragile has more downside than upside. Uncertainty harms it. Take a plane ride. Inject uncertainty into a plane ride. I just came from Europe; it was supposed to be eight hours and ended up being sixteen, so I was eight hours late. Have you ever landed at JFK coming from Europe eight hours early? (laughter) So when you inject uncertainty, the travel time extends. The opposite would be a system in which, if you inject uncertainty, you get benefits, and that’s the antifragile: entrepreneurship. If you’re an adventurer, you like uncertainty.
And people, you know, couldn’t get it. And I have a way to explain it. Someone tells me, “What’s the difference with resilient?” I say there’s a big difference: if I buy insurance on my house for ten times the amount needed, I want an accident to happen. Every morning I’d wake up and say, an earthquake, no trouble at all: I get paid ten times the damages. So that’s like inverse insurance. Insurance companies, of course, are short volatility and are harmed by disorder; they are fragile. Someone who has ten times insurance would benefit from uncertainty.
DANIEL KAHNEMAN: There are many ways in which robust and antifragile sort of contrast in your world that I’ve been trying to understand. So you’re opposed to—you’re in favor of decentralization, you’re opposed to planning, you are opposed to—
NASSIM TALEB: I’m not quite opposed to planning. I’m opposed to planning like you’re on a highway with no exits: you suddenly have a problem and you can’t exit. That’s what I’m against. I’m in favor of planning if you have the optionality to exit. You see, an option benefits from uncertainty. You need options out, so there’s something technical here: a five-year plan, like a five-year option, is not as good as a series of one-year plans, which are very adaptable. You can’t take advantage of changes in the environment if you’re locked into a plan.
DANIEL KAHNEMAN: So one of the points that you make which resonates with everybody is “too big to fail,” actually a theme that you anticipated in your previous book; you anticipated the crisis in—you know, one hates to use the word prediction, but you came as close as anyone, I think, to anticipating that crisis. But you take that very far. I mean, you wouldn’t be satisfied with just breaking up the banks. You are really questioning globalization, you are questioning a great deal of what—there seems to be a logic within the modern economy of things getting bigger, of people searching for economies of scale and searching for power, and the drive for power and for economies of scale causing things to grow, and you are really, it seems to me, fundamentally opposed to it, or am I pushing you too far?
NASSIM TALEB: No, no, no, definitely. A system that gains from disorder has to have certain attributes, and a system that’s not harmed by disorder has to follow a certain structure. So let me explain, you know, the Greenspan story. The mistake we’ve made, you know, ever since the Enlightenment, but it’s very exacerbated now because we’re more capable of controlling our environment, is that we want to eliminate volatility from life. We make a categorical mistake; I call it mistaking the cat for a washing machine. I don’t know if you own a cat or a washing machine, but they’re very different. (laughter) The washing machine will never self-heal. An organism needs some amount of variability and disorder, otherwise it dies. An organism communicates with the environment via stressors. You see, you have stressors: you lift weights, your bones are going to get thicker. If you don’t do that, if you go on the space shuttle, you’re going to lose bone density. So you need stressors.
So people made the mistake of thinking that the economy is like some machine that needs constant maintenance and will never get better, something you put on a table that will be harmed by any disorder, unlike the human body, which needs a certain amount of variability. That huge mistake led us to micromanage the economy into chaos. Take forest fires. If you extinguish every single forest fire, a lot of flammable material will accumulate, and the big one will be a lot worse.
So what Mr. Greenspan did (actually he’s writing a book, he probably will come here): what Greenspan did is he micromanaged the economy, and had you given him the seasons, or nature, he would have fixed nature at 68.7 degrees year-round, no more seasons, so of course things got weaker and you had pseudostability. Danny himself, you mention in your writings that human beings are chickens: usually they claim to have a lot of courage, so they take risks they don’t know. They like to take risks they don’t understand, but when you show them variability and risk they get scared. We try to manage things, overmanage things, our health, the economy, a lot of things, into complete chaos, and we fragilize them. The first mistake is mistaking something organic for an engineered product, and of course there’s a psychology behind it, no?
DANIEL KAHNEMAN: Well, I think there is an interesting psychological issue. My sense is that people by and large prefer robustness to antifragility, and that there are deep reasons for that. So what you are advocating is not intuitive. What you are advocating fits you very well. (laughter) I mean, it’s not—but that is because, you know, you have what you call “fuck you” money.
(laughter)
NASSIM TALEB: Sorry.
DANIEL KAHNEMAN: You can afford to be antifragile and to live in that particular way. Not many people do. Not many people would want to. You like unpredictability, moderate unpredictability. Most people really don’t like it very much. They like much less unpredictability than you do. And some of your prescriptions push the boundary a lot. This is less obvious perhaps in this book than in your previous one, The Black Swan. You are very much against the standard economic profession and economic models, and models of options, and people trying to predict the future. That is, for you, the attempt to predict is a sort of arrogance, and here, I think, this is something that we certainly both agree on. Probability plays very little role in this book, it seems to me, because you don’t believe we can actually say much about the future, that we shouldn’t try about the things that really matter. So you have a system that would guide people by the outcomes, by the range of outcomes, and basically the major prescription is: limit the losses, don’t limit the gains. It’s called convexity.
NASSIM TALEB: Exactly. I have two things to say here. Look at this coffee cup. We know why it is fragile and we can measure the fragility. I can’t predict the event that would break the coffee cup, but I know, if there is an event, what will break first. The table will break after the coffee cup, or after the glass. So we can measure fragility pretty easily, and let me link it to size, and then I’ll talk about prediction and, you know, illustrate too big to fail.
I was trying to explain for a long time why too big to fail was not a good idea, why too large is not good, why empirically companies never become too large; they go bust before that, unless governments prop them up, like the banks. I figured out something from Midrash Tehillim. There’s a story of a king who had a mischievous son, and he was supposed to punish the son, and the standard punishment was to crush him with a huge stone. I don’t know if you’ve had to crush your son with a huge stone, but it’s not a very pleasant prospect, and he definitely looked for a way to get out of that. So what did he do? What do you think he did?
He crushed the big stone into small pebbles, and then had some coffee, and then pelted the son with small pebbles, all right? So you see, if a 10 percent deviation harms you more than twice as much as a 5 percent deviation, you are fragile. If a ten-pound stone harms you more than twice as much as a five-pound stone, you are fragile; it means you have acceleration of harm. So you can measure fragility simply through acceleration of harm. If harm accelerates: if I smash my car against a wall at fifty miles an hour, once, I’m going to be a lot more harmed than if I smash it fifty times at one mile per hour. Don’t try it. And definitely a lot more than if I smashed it five thousand times at one millimeter per hour, you realize. So that’s acceleration; you can figure it out.
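[Editor’s note: the stone-versus-pebbles test can be written as a check for accelerating harm: a system is fragile if a shock of size 2x hurts more than twice a shock of size x. A minimal sketch; the quadratic harm function is an illustrative assumption standing in for Taleb’s car-crash arithmetic:]

```python
# Fragility as acceleration (convexity) of harm: harm(2x) > 2 * harm(x).
# Quadratic harm models the car-crash example: one crash at 50 mph does
# far more damage than fifty crashes at 1 mph.
def harm(deviation: float) -> float:
    return deviation ** 2  # illustrative convex harm function

def is_fragile(harm_fn, x: float) -> bool:
    # True when doubling the shock more than doubles the damage.
    return harm_fn(2 * x) > 2 * harm_fn(x)

print(is_fragile(harm, 5.0))        # True: one 10-unit shock beats two 5-unit shocks
print(harm(50.0), 50 * harm(1.0))   # 2500.0 vs 50.0: one big crash vs many tiny ones
```

The same test run on a linear or concave harm function returns False, which is the robust or antifragile side of Taleb’s table.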
So from that, we can have rules of what’s fragile. We can measure fragility. And we can eliminate fragility, and we know that size brings fragility. For example, take projects. Danny and a lot of his disciples have figured out that people tend to underestimate the cost of projects. It’s chronic. Projects tend to last longer. I don’t know if you’ve had to renovate a kitchen, (laughter) but you probably will experience that, okay. It gets worse with complexity. One common friend, you know, I asked him to provide me with numbers, and when we looked at it, we realized that in the UK a hundred-million-pound project had 30 percent more cost overruns than a ten-million-pound project. I mean, size brings fragility. Which is why we don’t have that many elephants. An elephant breaks a leg very quickly compared to a mouse. I don’t know if you have a mouse, but if you play with a mouse, it doesn’t care. An elephant breaks a leg very quickly. So this is why I don’t like size.
A decentralized government makes a lot of small mistakes. It seems messy, because you see a lot of mistakes; they’re on the New York Times front page every day, all right, so it scares people. A large centralized government doesn’t seem to make mistakes, it’s smoother, but guess what, when they make mistakes, they’re big. We had two of them last decade: we had the guy who went to Iraq, about three trillion dollars so far and counting, and we had Mr. Greenspan. Two big mistakes. When you have decentralization, it multiplies the mistakes, but they’re smaller, pretty much like the pebbles, you know? They’re going to bother you, but they’re not going to kill you.
DANIEL KAHNEMAN: I want to raise some discomfiting questions. One of them, you know, goes back to something that you don’t emphasize in this book quite as much as you did in The Black Swan, but that’s your turkey example, and I think it’s important, so you tell the turkey story, and then I’ll respond to you.
NASSIM TALEB: In The Black Swan, there’s a turkey that is fed every day by a butcher, and every day this confirms to the turkey, to the turkey’s statistical department, to the turkey’s, you know, policy wonks, to the turkey’s office of management and budget, that the butcher loves turkeys, with increased statistical confidence every day. And that goes on for a long time. There is a day in November when it’s never a good idea to be a turkey, which we call T minus 2, all right, Thanksgiving minus two days, all right? So what happens? There’s going to be a surprise, a black swan, a large surprise event, but it will be a black swan for the turkey, not a black swan for the butcher. So this is the turkey story, and my whole idea with the turkey story is to explain my black swan problem, which was misinterpreted for five years: the whole idea is, let’s not be turkeys. That’s the whole point of The Black Swan. (laughter) Danny sort of, you know, has a psychological interpretation of the turkey problem.
DANIEL KAHNEMAN: I mean, I have a problem with it, because when I look at your story, I think the turkey has a pretty good life, until, you know. I think that this sort of worry-free life that the turkey enjoys until Thanksgiving is something that we aspire to. That is, people do want robustness, they do want predictability, they do dislike risk, and this is very clear in your case, the focus being on extreme events. You don’t put a lot of weight on however many days the turkey has to enjoy life without worry. You put a lot of stress on the disaster, and I think you have this as a general point about your orientation: you put a lot of stress on events that are very rare and extremely important, both the good ones and the bad ones. You gave several examples of bad ones.
And examples of good ones. You bring your own life story, where I think it is fair to say that you’re a pretty wealthy man, and you made your wealth in two brief periods of time, and most of the rest of the time you either broke even or lost some. And you had it set up so that in some sense it was predictable: a policy of waiting, of arranging things so that you left yourself open to a very large positive accident while preventing your losses from becoming very severe. That is clearly your ideal view of how to run things. Whether it is the ideal view for most people I’m not sure. That is, most people have a mechanism that makes us extremely sensitive to the small losses incurred along the way, in a way that may not be compensated by the extreme win, and the story’s similar on the other side.
NASSIM TALEB: Yeah, and effectively that explanation is why you went to Stockholm in 2002; that’s exactly prospect theory. Danny discovered the following: you’d rather make a million dollars over a year in small amounts, you get more pleasure that way. If you’re going to make money, make it in small amounts, and if you’re going to lose money or have a bad event, have it all at once, because you’d rather lose a million dollars in one day than lose a little bit of money, even a smaller total amount, okay, over two years, because by the third day it’s like Chinese torture. That’s prospect theory; that’s the reason he got the Nobel, the whole thing. And people still haven’t absorbed that point of prospect theory. When we met in 2003 I immediately found the embodiment of my idea right there in prospect theory and that equation.
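[Editor’s note: the asymmetry Taleb describes falls out of the prospect-theory value function, which is concave for gains and convex, and steeper, for losses: two separate gains feel better than one combined gain, while one combined loss hurts less than two separate losses. A sketch using the parameter estimates from Tversky and Kahneman’s 1992 paper (exponent 0.88, loss-aversion coefficient 2.25); the dollar amounts are illustrative:]

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 estimates):
# concave over gains, convex and loss-averse over losses.
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    if x >= 0:
        return x ** alpha          # diminishing pleasure from gains
    return -lam * (-x) ** alpha    # losses loom larger than gains

# Segregate gains: two $500 wins feel better than one $1,000 win.
print(value(500) + value(500) > value(1000))    # True

# Aggregate losses: one $1,000 loss hurts less than two $500 losses.
print(value(-1000) > value(-500) + value(-500))  # True
```

This is exactly the "make it in small amounts / lose it all at once" prescription: concavity rewards splitting gains, and convexity over losses rewards bundling them.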