Nassim Taleb & Daniel Kahneman, February 5, 2013




DANIEL KAHNEMAN: In a way, if we link it to prospect theory, what you're prescribing goes against the grain. That is, you are prescribing a way of being, or a way of doing things, that exposes an individual to a long series of small losses in the hope of a very large gain.
NASSIM TALEB: Not quite. What I'm doing is the opposite: trying to tell people, "Do not mortgage your future for small gains now," and they get it, 20 percent of the way. And there's another psychological thing: people tend to create a narrative that the large event will never happen. And there is another dimension. Why do we have bankers, for example? They make pennies, they make pennies, they make pennies, and when a crisis happens, they lose everything they made before. Effectively, as I was writing in The Black Swan, and I got so many hate letters from bankers for it, in 1982 the banks lost in one quarter more money than they'd made in the entire history of money-center banking until then. They repeated it in 1991, and of course in 2007, if you need more examples. And every time there's another dimension: it's not their money they're losing, it's your money. Everyone here who files that piece of paper on April 15th is subsidizing them, because when they make the money, it's theirs, and when they lose the money, it's ours; on April 15th the taxpayers sponsor it. So we have here what I call a transfer of antifragility. The banker is antifragile: he has upside and no downside. The taxpayer has downside and no upside. So you're going to have a lot of risk-hiding on top of the psychological thing.
And then, you told me, there are two effects, and I call them, one, the fools of randomness and, two, the crooks of randomness. It's a combination of the two that we see presently in society. If you removed one of them, the crooks of randomness, you would have much less of it; the other phenomenon would of course still prevail, although there is something called dread risk: people may be scared of extreme events if you present them properly, as you have shown. But the crooks of randomness are what you can definitely remedy.
DANIEL KAHNEMAN: There is something else that troubles me, and it is psychologically troubling, I think. In The Black Swan, and you repeat it to some extent in Antifragile, you take to task, very severely, people who attempt to predict in the economic domain, because they do not predict the big events. You call them names. You call them charlatans and other names.
NASSIM TALEB: They are.
(laughter)
DANIEL KAHNEMAN: And yet those people are quite popular, and their services are very much in demand. My image for this is a system of weather forecasting that does pretty well in the everyday: it does pretty well at telling you when to take an umbrella and when not to, but it completely misses hurricanes. You would reject that system altogether.
NASSIM TALEB: No. When we get on a plane, we focus on the plane crashing, not on comfort; it's not about whether we're going to have bad coffee on the plane. The risk is well defined: the plane crashing, not an uncomfortable ride. What you're talking about is different. If someone is predicting events, they have to predict the full span of events; you have to be protected. People understand this in the military, where they routinely spend eight hundred billion dollars a year to deal with extreme events, and we haven't had a big war here for some fifty or sixty years. So we have the problem of people who predict but are not harmed by their mistakes. What are they going to do? They're going to predict the regular very well, and when the extreme event happens, guess what, it was "unpredictable." Yet you're going to rely on them. And you showed in your research, which actually inspired me, the story of the Wheel of Fortune: if someone spins the Wheel of Fortune randomly in front of you, and you get a number, and then they make you estimate some variable, whatever you estimate will be correlated with what you saw on the Wheel of Fortune.
DANIEL KAHNEMAN: That's a phenomenon called anchoring. Any number that somebody puts in your mind is likely to have an effect if you consider it as a possible solution to an estimation problem; it affects how you think about the problem. So if I first ask you whether African nations are more or less than 65 percent of the UN, and then ask you what the proportion of African nations in the UN is, you are going to end up with a different estimate than if I had first asked whether they are more or less than 15 percent. You know that those numbers are wrong, but they influence you. They tell you how to think about it.
NASSIM TALEB: The conclusion is that if I produce a forecast, I'm going to harm you. I call it iatrogenics: harm done by the healer. It is no different from going to the pharmacy for a drug; if it doesn't work, it may harm you. Actually, it will harm you. That is the fact about forecasting: it is harmful. And my idea is to build a society that can sustain forecast errors, so that charlatans can forecast all they want and still not be able to harm us. That's the idea.
How do you build such a society? Less government debt, because when you have debt you have to be very precise about the future; you can't afford forecast errors, so you need less debt. You're more robust when you have less debt. Actually, to become antifragile, first you have to become robust. So: less debt, decentralization, and, the third one, elimination of moral hazard. How do you eliminate moral hazard? I call it the Bob Rubin problem. Bob Rubin made 120 million dollars from Citibank while Citibank was hiding risks, and when Citibank went bust, he didn't show up to work with his checkbook to return the money. No, we paid for it retrospectively. So here we have truck drivers paying for his bonus. We have to eliminate these three things, and if we do, we're a lot better off. Then we can sit down and look at what psychological problems we have left to monitor.
DANIEL KAHNEMAN: I think you know the problem of skin in the game, and in a way what you're saying there is more generally acceptable: incentives have to be aligned. That would not be a very controversial statement, and yet you make it in a controversial way.
NASSIM TALEB: No, no, no, it is controversial, because for a lot of people the definition of skin in the game is a bonus. They don't understand. Skin in the game is not just a bonus; skin in the game is a bonus and a malus. You have to be penalized for your mistakes, by a small amount, but nevertheless penalized. You know, they used to behead bankers when they made a mistake. And the best risk-management rule we have discovered goes back to Hammurabi. Hammurabi had a very simple rule about who the best risk manager is, or can be. If an architect or engineer builds a building, and I'm sure the architect of this building took care of its construction, and the building collapses and kills the owner, then under Hammurabi's law the architect is put to death. Why? Not because they wanted to kill architects; look, they built a lot of ziggurats. What they wanted to prevent is risk hiding, because they realized that no cop, no regulator will ever know more about the risks than the person who created them. And where are they going to hide the risks? In tail events, in rare events that happen infrequently. So that's why we have to have some disincentive, not a very large one. Even Adam Smith, while accepting the limited liability company, was not crazy about the fact that people could have no liability; he wanted people to always have some liability. So capitalism doesn't work unless you have liabilities, and here we have a class of people, bankers, who got us in trouble and paid themselves all the bonuses afterward. In 2010 they paid themselves a larger bonus pool than before the crisis. It's crazy.
DANIEL KAHNEMAN: You won't get an argument from me on this issue. (laughter) But, you know, one can argue about whether the prescriptions of Antifragile are robust, because the implications of your argument are, I think, costly. I mean, there is a cost.
NASSIM TALEB: They're necessary. They're beyond necessary. If I put a child completely deprived of stressors in a sterilized room for five years and then bring him here to visit the New York subway next door, how many minutes will he last? Obviously, if you're not exposed to stressors, you're going to be weakened. We have a society that is obsessed with eliminating small stressors, small risks, at the expense of large risks. And this manifests itself in a lot of things. The fact that we didn't have a name for "antifragile" means half of life didn't have a name. It didn't matter in the old days, because we had stressors all the time. Now we control our environment, and we control the wrong things about it: we try to make the ride comfortable, but we don't eliminate the large risks. In fact, it's the opposite.
In any field, we've been harmed by what I call the denial of antifragility. Greenspan wanted an economy with no more boom and bust, when it's natural to have boom and bust in some amount; he created the big bust. What I was discovering is something called Jensen's inequality. Jensen's inequality describes a property of nonlinear response that makes you benefit from variability. Take your food intake: if you always have the same amount of calories all the time, you're going to be injured. If you stay in a chair, no stressor, your back is going to become weaker. Things like that, and you can generalize them. So what I've done here is try to identify the blind spots that matter today but did not matter in the past, because in the past the environment was providing us with stressors.
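As a minimal formal sketch of the property being invoked here (the standard textbook statement, not something stated in the talk itself): for a convex response f to a variable dose X, Jensen's inequality reads

```latex
% Jensen's inequality: for a convex function f and a random variable X,
\[
  \mathbb{E}[f(X)] \;\ge\; f\big(\mathbb{E}[X]\big).
\]
% Read f as the response to a dose (calories, load on the spine, economic
% fluctuation) over a range where the response is convex: a variable dose X
% then beats the steady dose E[X]. Where the response is concave, the
% inequality flips and variability hurts instead.
```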
Take, for example, temperature. It's not healthy to stay at 68.7 degrees twenty-four hours a day. You need variability; we're made for variability, made for an environment with big thermal fluctuations. So you injure yourself without it, and we now effectively have a catalog of the harms you suffer from not getting variability. Likewise in the economy. Think of the restaurant business: if you didn't have bankruptcies in the restaurant business, what would happen? Tonight we're going out to dinner, all right? The quality of the meal is a direct result of the fragility of individual restaurants. A system, to be robust and antifragile, needs to recycle the fragility of its components and improve because of it. Otherwise, look at Russia, where restaurants didn't go bust. I don't know if you ate the food in Russia in the seventies, but some people are still trying to recover from the experience. (laughter)
But I have one more thing to say. A system that does not convert stressors, problems, and variability into fuel is doomed. Let me give you a system that's perfectly adapted to converting stressors into improvement: air transport. In the last seven years we have had one commercial plane crash; I'm not talking about individuals flying half drunk on weekends, I mean major airlines. Why? Every plane crash that has happened has made the next plane ride safer; it has lowered the probability of the next plane crash. Every plane crash. That's a system that is antifragile overall, where you benefit, where you never let a mistake go to waste. Now compare that to the banking system: a bank crash makes the next bank crash much more likely. That's not a very good system. And on that basis we can compare things.
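A minimal toy sketch of this contrast in Python; the initial crash probability and the update factors below are hypothetical numbers chosen only to show the two regimes, not estimates from aviation or banking data.

```python
# Toy contrast between a system that learns from each failure and one
# that compounds failures. All numbers are hypothetical.
import random

def count_failures(p0: float, factor: float, steps: int = 1000, seed: int = 0) -> int:
    """Run `steps` trials; after each failure, multiply the failure
    probability by `factor` (< 1 means each failure makes the next one
    less likely, > 1 means each failure makes the next one more likely)."""
    rng = random.Random(seed)
    p, failures = p0, 0
    for _ in range(steps):
        if rng.random() < p:
            failures += 1
            p = min(1.0, p * factor)  # update the risk after each failure
    return failures

# Aviation-like regime: every crash lowers the probability of the next.
print("learning system:   ", count_failures(p0=0.05, factor=0.5))
# Banking-like regime: every bust raises the probability of the next.
print("compounding system:", count_failures(p0=0.05, factor=1.5))
```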
DANIEL KAHNEMAN: I keep going back to the same point: this is not really what people want to do. Many of us in New York go almost directly from heating to air conditioning and vice versa, because we do like the constant temperature. So you are making a point which I think is true and deep, that in some situations we are made for variability; that is, we are designed by evolution to be able to cope with stressors and indeed, a point you made absolutely correctly, to benefit from stressors. But we are also designed to try to avoid stressors.
NASSIM TALEB: It's identical with randomness. We're made to hate randomness, because the environment was giving us randomness, and hating it prevented us from dying, prevented us from encountering the big, large-scale event. So we are made to hate all kinds of randomness; we're not fine-tuned for the subtlety that some randomness, in small amounts, is good for us. We simply register that randomness is bad. It's the same way with stressors: we think stressors are bad, when in fact big stressors are bad and small stressors are beneficial. This is the nonlinearity that we don't capture intellectually.
DANIEL KAHNEMAN: Well, the psychology of it is the following: we're actually relatively more sensitive to small losses than to big losses, and to small harms than to big harms. That is, we have a limited ability to feel pain, so we feel a lot of pain for very little harm, and then it doesn't get worse proportionally. So in a very real sense we're designed against what you want.
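Kahneman's point has a standard formalization in the prospect theory value function: concave for gains, convex for losses, and steeper for losses. A sketch, using the median parameter estimates from Tversky and Kahneman (1992) purely as illustration:

```latex
% Prospect theory value function (Tversky & Kahneman, 1992):
\[
  v(x) =
  \begin{cases}
    x^{\alpha}, & x \ge 0,\\[2pt]
    -\lambda\,(-x)^{\beta}, & x < 0,
  \end{cases}
  \qquad \alpha \approx \beta \approx 0.88,\ \lambda \approx 2.25.
\]
% Diminishing sensitivity (exponents below 1) means ten separate losses
% of 10 hurt more in total than one loss of 100, which is why a strategy
% of many small losses and rare large gains feels so unattractive.
```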
NASSIM TALEB: Except that, guess what saved us from this? Religion. I mean, I'm Greek Orthodox, all right, not practicing, though sometimes practicing for dietary reasons. (laughter) Think about it: religions force you to fast, force you to have variability in your food. We have the forty days of Lent, plus every Wednesday and Friday with no animal products; you're vegan. You're vegan so many days. Why? To keep you from having protein all the time, because we're part lion and part cow. The lion in us gets protein at a random frequency, whereas the cow in us eats salad without dressing every day, all the time; so you see the forager and the hunter. If you're made to get protein episodically, intermittently, and instead you get it all the time, you may be harmed. Religions evolved to prevent that by keeping us from eating protein seven days a week, and the fast of Ramadan has the very same purpose. All these rituals were there to help us cope. Religion is like someone packaging a story, giving you a story to get you to do something else, and that was a good thing. So we have had mechanisms to correct for the diseases of abundance. We live in a world today where a lot more people are dying of overnutrition than of undernutrition. There are supposedly seven hundred million underfed people, but of these, the number who are really ill is very small.
DANIEL KAHNEMAN: I want to change the subject (laughter) and tell the story of what I have learned from you, and it's going to be only part of it, because I've learned a fair amount. Nassim really changed the way I think about the world in quite significant ways, by making me realize the fundamental unpredictability of the truly important things in our lives. This Black Swan idea, of rare, extreme events dominating what actually happens in our lives, is a profoundly important insight, and uniquely original, I think; it certainly had a very large impact on how I think about uncertainty. Then there is the skepticism about professional predictions, the fundamental skepticism. And there are really two personalities here. When you read Nassim's books you'll encounter several characters, and two of them, I think, are part of you: Nero Tulip and Fat Tony. Fat Tony is quite an interesting character. He is a trader, he is a cynic, and he is fundamentally irreverent. Nero Tulip is the intellectual, and he is very much that part of you. Both of them are in you.
NASSIM TALEB: This is a psychologist at work on me.
DANIEL KAHNEMAN: He is also irreverent, and he also doesn't take nonsense, but he's very interested in ideas, whereas Fat Tony is not particularly interested in ideas at all. The skepticism of both of them, of Fat Tony and of Nero Tulip, was very instructive for me. I mean, you are rude to economists, frankly, and you are more rude than you need to be, in my opinion. But there is something really refreshing and very instructive in seeing a free spirit, and in those two kinds of approaches: the approach of the trader, which you keep emphasizing, and the approach of a scholar, a really self-taught scholar. You don't really respect academics very much.
NASSIM TALEB: I do, I do.
DANIEL KAHNEMAN: Some of them. (laughter) Very selectively.
NASSIM TALEB: I’m an academic now. I fake it.
DANIEL KAHNEMAN: So that was up to The Black Swan. And then I remember we had a conversation in which I said to him, "Okay, you're raising fundamental questions and you're focusing our attention on the specter of unpredictability. What are we going to do about it? What are you going to do about it?" And Antifragile, I think, is to some extent an attempt to answer that question; not that it became a question only because I raised it.
NASSIM TALEB: Danny forced a subtitle on me. He said, "I don't know what Nassim's next book is going to be called, but the subtitle will be How to Live in a World We Don't Understand." So I had to get to work, and I had to spend three and a half years locked up trying to work out the antifragile, the nonfragile, and it became the subtitle of my UK edition. In the UK they want stranger subtitles; (laughter) in the U.S. they want precise subtitles. The idea was, I said, "If you know that there is unpredictability in some domains, and you can identify the domains where there's unpredictability, you're done." He said, "No, no, you took the horse to water, and now you have to make it drink." So I had to come up with precise rules, and this book is a little more precise.
DANIEL KAHNEMAN: So that's what emerges, and what emerges is fairly surprising, actually. Antifragile is a set of prescriptions, and the point of the prescriptions is that once you have achieved antifragility, you really don't need to predict. I think that is very fundamental: you don't need to predict in detail. This is why probability doesn't feature very much in this book; it features in the more technical discussions, but fundamentally this is about outcomes. It is about making losses small and allowing for full-out gains. That is your recipe: avoiding the major crises so that you don't have to anticipate them, so that you don't have to try to predict them, because in fact they're unpredictable. And the distinction between robustness and antifragility is, in some sense, really about giving up on the possibility of prediction.
NASSIM TALEB: There's something technical I want to mention here. There's a domain that's purely antifragile, and it's entrepreneurship. I was calling for a National Entrepreneur Day. Why? Because, as you were saying, losing a little bit of money all the time in exchange for big gains isn't part of human nature; except in California, it is, where it's respectable to fail. You fail early, you fail seven times before your big thing, so collectively you have thousands of people failing for every one who succeeds, and the person succeeding has probably also failed seven or eight times before. So they deal with failure, but they have a small downside. How did they do it? One, they made it respectable, and I want to make it more respectable.
But there's a mathematical property, quite shocking, that came out of options, that I realized and that I call the Philosopher's Stone, and nobody is getting the following: trial and error isn't really trial and error. Trial and error is trial with small error, where you have small error and big upside. What is antifragile has small losses and big gains, and trial and error has to have small losses and big gains. If you model it, you realize that to outperform trial and error intellectually you would need at least a thousand IQ points, which, I mean, I'm sure you're close, but nobody gets close to a fraction of that. So you see my main idea when I say that you'd rather be antifragile than intelligent any time.
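A minimal sketch of this payoff structure in Python; the 1 percent hit rate and the payoff sizes are hypothetical, chosen only to illustrate that capped losses plus open-ended gains give a positive expectation without any forecasting skill.

```python
# "Trial with small error": each attempt risks a small, capped loss,
# while the rare success has a large, open-ended payoff. All numbers
# are hypothetical, picked only to illustrate the asymmetry.
import random

def trial_and_error(n_trials: int = 10_000, seed: int = 42) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < 0.01:   # rare success: roughly 1 in 100 tries
            total += 150.0        # large gain (uncapped in principle)
        else:
            total -= 1.0          # small, bounded loss per failed try
    return total

# Expected value per trial: 0.01 * 150 - 0.99 * 1 = +0.51, so blind
# tinkering comes out ahead; no prediction of which trial wins is needed.
print(trial_and_error())
```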
And you look at the data and you realize that all the big gains we have had, in almost any field, except the nuclear, and in medicine except for the AZT drugs, came from trial and error by people who didn't have much of a clue about the process. Trying. You try, you discover, and you're rational enough to know that what you found is better than what we had before. And this is where the Fat Tony story comes in. Like talking to a shrink, I realized there was a Fat Tony in me; Fat Tony is in a sense a self-character. I didn't know I was Fat Tony, but now I realize that I am Fat Tony, half the day now. Things come out, you know, in these conversations. Watch out when you talk publicly with a psychologist next time. (laughter)
So, Fat Tony. I got the idea of Fat Tony from Nietzsche; I don't know if you've heard of the notion of creative destruction, but it's in Nietzsche. Nietzsche had Dionysus, the Dionysian in us, and he has the Apollonian. The Apollonian likes order, knowledge, serenity, harmony and, of course, predictability, things like that; and then there's that dark force that's hard to understand, the Dionysian in us. And Nietzsche found that the balance got disrupted with Socrates, so Fat Tony goes to argue with Socrates. So there are two poles. Fat Tony doesn't believe in knowledge; he believes in tricks, no theory, but he's doing: you keep trying and keep trying until something works, and you get rich, and then you go have lunch. This is why he's called Fat Tony; he has a lot of meals.
So he was arguing with Socrates, and he was able to express the sentence that Nietzsche really understood. He said, "The mistake people tend to make is to think that whatever you don't understand is stupid." That was Fat Tony saying that the unintelligible is not necessarily unintelligent, and antifragility is harvesting the unintelligible, harvesting what we don't understand, and this is what was done. Take the Industrial Revolution, take California and Silicon Valley, take the discoveries in medicine: it is harvesting the unintelligible, with small errors and big gains, and doing it on an industrial scale. And the problem is education. The only thing I don't like about academia is this: had we put Bill Gates through the entire college experience, we wouldn't have had Microsoft. The Industrial Revolution happened with people who weren't really academics, and once we got there, we wanted the state to create invention from the top down, not the bottom up. That's the problem. Education inhibits risk-taking; that's my only point. It disrupts that balance.
DANIEL KAHNEMAN: Well, when you read Antifragile, and many of you, I hope, have read it already, what you see is that many of the concepts we normally admire are questioned in the book. Even if you don't completely buy the argument, because I think the argument is quite extreme, it is bound to lead you to questions: questions about the relative value of theory and practice, as you identified, and questions about the value of planning versus trial and error. We normally favor theory, we favor general understanding, we favor deductive reasoning over induction; that is the way our values are. These are questioned in this book. We normally tend toward large size, and this thing that you oppose is really built in: enterprises tend to grow, and they try to grow in large part, it turns out, because of hubris. You have leaders who seek power, and they seek power by growing. The market wants organizations to grow. The value of stocks is not determined by some stable input; it is really determined by the anticipation that the firm will grow, even when it is already gigantic, like Apple or Microsoft. So all of these topics, which we normally accept and consider fairly obviously the way to go in the modern world, are really questioned in Antifragile. That makes the book worth taking a look at, even if it sometimes makes you quite uncomfortable. Nassim has that way: he sometimes makes his friends squirm, and it remains worthwhile. I think we should open it up.