MBA Masterclass The Psychology of Losing: A Snippet from ‘Managerial Decision-Making’
with George Wu
Get a taste of how Booth students learn about the psychology of decision-making and the concept of loss aversion—the notion that losses are more painful than gains are attractive.
- February 17, 2021
- MBA Masterclass
Kara Northcutt: All right. Well, we will go ahead and get started. My name is Kara Northcutt. I'm a Senior Director of Employer Engagement and Admissions at Chicago Booth, and on behalf of our executive, evening, weekend and full-time admissions teams, we are thrilled to welcome you to this evening's Masterclass, The Psychology of Losing, with Professor George Wu. This session is a snippet of Professor Wu's very popular class, Managerial Decision Making. So throughout George's presentation, feel free to submit any questions you would like to ask him in the Q&A, and then after his presentation, we'll moderate an open-conversation Q&A. So again, feel free to submit questions throughout, and we'll pose those to George and he'll respond to them verbally at the end, after his presentation. So today you will get to experience two key components of the Chicago Approach, which is our unique approach to business education. Our data-driven and evidence-based approach helps you learn how to ask the right questions and think more strategically and analytically, which really helps prepare you and gives you the confidence to take on the business challenges you're facing today, as well as in the future. And I think you'll find from today's session, a lot of this is also very handy and convenient in your personal life as well. So it's not all about business decisions. It could also help in your personal and outside-of-work world. Another key advantage of Booth's program is our supportive and collaborative community. At Booth, you join a group of students, administrators, faculty and alumni that are really there for you throughout the entire experience. We all really wanna make sure you have the best possible experience in the MBA. So an integral part of that community, obviously, is our faculty. The same faculty teach across all of our MBA programs at Booth, and they truly also become a part of your network. 
So today I'm very excited to introduce you to Professor George Wu. He is the John P. and Lillian A. Gould Professor of Behavioral Science. George studies the psychology of decision making, goal setting and motivation, and cognitive biases in bargaining and negotiation. His research has been published widely in a number of journals in economics, management science and psychology, including Cognitive Psychology; The Journal of Personality and Social Psychology; and the Quarterly Journal of Economics, just to name a few. Professor Wu was the inaugural faculty director of our Harry L. Davis Center for Leadership, and he also currently serves as a faculty director for our Civic Scholars Program. George is committed to diversity, inclusion, social justice and in fact, he was honored as the 2020 recipient of the Chicago Urban League's Humanitarian Award. And I have personally had the pleasure of working very closely with George over my 12 years at Booth -- and I can tell you firsthand, I can really speak to his kindness, how approachable he is, his passion, and his commitment to making sure you have a wonderful experience at Booth, as well as really having an impact on the greater community beyond. So with that, it's my pleasure to turn it over to Professor George Wu. Thank you, George.
George Wu: All right. Thank you, Kara. Thanks for the nice words, and I've had the pleasure of working with Kara for 12 years in turn, and that's been great. And so it's great to be here and to talk to all of you. A couple things before we get started. One is, I'm gonna give you a little bit of a snippet of a class that I used to teach in the full-time and part-time MBA programs, which is Managerial Decision Making, and I'll tell you a little bit about that class. I currently teach it as part of the Executive MBA program, but, you know, this'll give you a sense of at least how the class goes. I'll talk a little bit at the very end about how the particular -- let's just say 45 minutes or so that we're gonna have together -- is a little bit different than the classroom. And we'll also talk a little bit about, in some sense, what I think is unique to our approach -- and of course every faculty member is a little bit different and so on. So before we get going, let me note two things. One is that I should read the titles carefully. I wrote "model" instead of "master," so that was meant to just make sure that you were paying attention. And the second thing is I should also learn to save my PowerPoint. So at about 4:48, I discovered that I had not actually saved the talk, but I was able to recreate it. So -- a lesson for you all. OK, so let me get going. I think most of you know a fair amount about the curriculum. Let me just tell you a couple things on where this class fits in. So there are some required classes as part of the curriculum -- economics, statistics and accounting. And then the idea is that there is, in some sense, a kind of platter of courses. These are what we call Functions, Management and the Business Environment. These are breadth requirements, and the idea is that out of this menu of items, you choose six lines. And one of the lines is about the management of decisions. 
And I think this shows a little bit about how, you know, we really want you to think about things from different perspectives. Decision making is something that we all do. Obviously, it is critically important in organizations; it's critically important in all of our lives. There are many different approaches to decision making, and the idea here is that we're gonna give you a choice of different perspectives there. And these are not mutually exclusive choices. In fact, many students choose more than one of these. Some people choose all three. So one approach is to think about modeling of decisions, so there are lots of different analytical approaches to decision modeling, simulation, what's called linear programming, lots of things of that sort. There's an approach called managerial accounting, which is to kind of understand the incentives in organizations, and also the kinds of numbers that help you to understand how to make better decisions in organizations. And the idea of this course is to think about the psychology of decision making. So let me tell you a little bit more about that, and I'm gonna take a little bit out of my syllabus. So this is verbatim from my syllabus, to give you an idea of at least how I think about it: "The aim of this course is, quite simply, to make you a better decision maker. When I've asked previous students about their ability to make decisions, many have expressed a lack of self-confidence. Others have indicated that the process of making a decision, particularly an important one, is painful. There's no single source of discomfort. Some students complain that the process of deliberating is unpleasant and stressful. Others talk about second-guessing their decisions or suffering regret. Others talk about being indecisive, paralyzed and unsure of how to proceed." So Managerial Decision Making takes a systematic approach to improving your decision making. 
The course is organized around an important distinction in decision making between descriptive, normative and prescriptive concerns. So the idea of normative is, this is how we would like our decisions to be governed. So, if you could kind of program yourself to follow a bunch of rules, these would be the rules. And that's important because obviously if you wanna be a good decision maker, you have to know what you wanna try to optimize, or try to maximize, or whatever you wanna say. Now the reason why we teach a course in psychology is that we don't naturally do that. If we naturally did the things that we always wanted to do, or the things that we should do, then you wouldn't need a course in decision making, 'cause you'd do that naturally. On the contrary, we actually don't do that. We're smart; all of you are smart; all of you, on the whole, make good decisions. But we make decisions in a systematically biased way, and that's an important idea: that our mistakes are systematic, and if we can understand how they are systematic, maybe we can improve our decision making. And so that gets to the third category, which is prescriptive -- and the way I describe it is, the idea is that you or your organization wanna be here, and you and your organization are kind of naturally here. How do you design processes, individual or organizational, to help your organization naturally be more like the organization that you would like it to be? In other words, you want your organization to be making great decisions; how do you get your organization to do that? Now the interesting thing is that when I started teaching decision making, it was kind of a novel thing. I haven't been at Booth since that particular time, but I've been here since 1997. But even then, when I started, decision making as something to teach in business school curriculums was kind of unique. 
And it's a little different now -- and I'll tell you a little bit of why it's different -- but actually we have an even longer tradition than that. So in 1977, the Graduate School of Business -- the precursor name to Booth -- established what's called the Center for Decision Research. And these are the two founders: Hillel Einhorn
-- who unfortunately passed away early -- and Robin Hogarth, who is now an emeritus professor in Spain. Together they founded the Center for Decision Research. And the idea here is that this has been a source of lots of pathbreaking research over the years, and a lot of that pathbreaking research is funneled into the teaching of our classes, and I'll say a little bit more about that. Now here's a quote from an article about decision making that was written a short time after the center was founded. "Decision making is fascinating," Einhorn said; this is Hilly Einhorn. "Everyone does it, and they do it all the time. You do it so often and automatically that most of the time you don't think about it. A line I like to use when I teach business executives is 'You don't think about how you think.' Then after they have thought about that, I say 'Now, how did you think about that?'" So I think one thing that's interesting about decisions is that, as this quote says, we do it all the time. We have to make decisions whether or not we're good at it, whether or not we have training. Most of us actually don't have much training in making decisions. We have training in making certain kinds of decisions, so you might be a marketing person or a finance person or an operations person, and you have training in making those kinds of decisions. But very few of us have training, writ large, in making decisions systematically. And few of us have thought very systematically about how we should make decisions. This course is meant to fill that gap, so to speak. Now, more recently, in 2017, many of you know that one of my colleagues, Richard Thaler, won the Nobel Prize in economics. And he won the Nobel Prize in economics because he basically brought psychology to economics, and that is a field that is now widely known as behavioral economics. 
But until Thaler basically came into this particular space, there was very little psychology in economics, and there was no field of behavioral economics. So these kinds of traditions you see very much in the school, and you'll actually see this in the little snippet from my class. So here is the outline of what we do. We start with kind of an introduction to decision making. We talk about these perspectives. We talk about how people make judgments. So the idea here is that when you're hiring somebody, you're trying to make sense of whether that person's gonna be a good fit in your organization, whether they're likely to stay, whether they're gonna be a good team player, whether they're gonna be able to acquire these skills, whatever. Those are judgments that you are trying to make. You're trying to make predictions about stuff that happens in the future. The reason, or one of the reasons, decisions are hard is because we don't know precisely what will happen, and it turns out that when we make these kinds of judgments, we don't do a very good job. So we have a few weeks in terms of helping you to improve your ideas about how to make these judgments by understanding the biases. And I won't go through all these things, but in week six, we talk about what I call risky decision making, with a specific application to thinking about finance. All right. So -- excuse me -- so here is, at least, a stylized question that you might think of when we think about risk. And the idea is that in business all over, because there's uncertainty -- because you can't predict perfectly what's gonna happen -- there's risk. And what that means is that you boil down choices to upside and downside. Upside has some likelihood of happening, downside has some likelihood, and it could be more complex than that, but you're choosing between that and the status quo. You're choosing between one risky option or another risky option. 
This is true whether you're talking about an entrepreneurial venture, whether you're talking about investment, whether you're talking about hiring an employee. All kinds of decisions have this basic aspect to them. Now, we're gonna kind of look at these things in a stripped-down representation right now. But you get the idea. So think about the following sort of abstract choice: you can choose to take a prospect in which you have a 70 percent chance -- a more-likely-than-not chance -- of winning a pretty good amount of money, like $50,000. But there is a sizable, less-than-half chance of losing a pretty good amount of money. With numbers like these -- $25,000 -- some of you would say yes and some of you would say no. And the question is: How do you think systematically about these things? Now, there are actually gonna be two questions. There's a question about how you should think about them. Right? And you can look at them intuitively and get some sense of what you should do. But the real question is how should you do it and how do you do it? So I'm gonna share with you what we call the Normative Theory, and this is a theory that many of you have heard, because if you've taken a microeconomics class, this is probably something that is in most, at least intermediate, micro classes -- actually, probably quite a lot of basic microeconomics classes -- which I think a lot of you have taken. And the idea is very simple and very intuitive, which is that what you do is you assign a utility to an outcome. And I'll talk a little bit about how that utility comes about. And you essentially weight that utility by the probability: you multiply the probabilities times the utilities, you add them up, and you choose the option with the highest expected utility. So that, I think, intuitively makes sense, and the idea of utility is that these outcomes have some value. So $50,000 is a lot of money. 
Maybe it's a lot more money if you're relatively poor; maybe it's a lot less money if you're somebody like Jeff Bezos or somebody like that. But there's some value to that, and there's some value or some decrease in value to losing $25,000. And the idea is that there is some increase and decrease in utility. You're trying to figure that out. And the sort of classic idea is that
-- and this is an idea that most of you have heard before -- there is something that is called decreasing marginal utility. And the idea here is that the first dollar is the most beneficial. Right? Because you don't really have much money, or you don't have any money, or whatever, so you really desperately need the first dollar. A dollar, once you have $1,000, has a little less utility. A dollar after you have $100,000 has even less utility. A dollar after you have $1 million, even less. So that gives rise to this blue curve. This is what is oftentimes called concave. This function is concave: it tends to have a steep slope at the start and then flatten out. Something in the middle is what we call risk-neutral. So if the first dollar to you, if you have zero dollars, is the same as a dollar to you if you have $100 or $1,000 or whatever, then the idea is your utility function is linear. If for some reason you had the opposite pattern
-- and here's an example of when you might have the opposite pattern -- let's suppose that you needed $100,000 for a down payment. The first dollar doesn't really get you there, because it leaves you a long way away from the finish line. But as you get close to $100,000, the value of an incremental dollar goes up. So the idea of this is that your utility function could have decreasing marginal utility, it could be linear, or it could be increasing -- and if it's decreasing, then you're risk-averse. If it's linear, you're risk-neutral: you're willing to make those decisions based on expected value. And if it's increasing, then you're risk-seeking. Now, that's the Normative Theory. Economists accept it as the Normative Theory. Most people, if they thought about it a little bit, would think that that's a reasonable way to make decisions. It turns out that's not at all how people make decisions. And what we're gonna do is focus on one aspect of how people make decisions differently, and it's called Prospect Theory. Now, I showed you a picture of Richard Thaler before. This is another hero of the field, though not a University of Chicago faculty member. It's Danny Kahneman. Daniel Kahneman was a professor at Princeton, retired maybe 10 years ago, and is a psychologist. And one of the things he says is, "I'm a psychologist who has never taken a class in economics in my life." And he won the Nobel Prize in the early 2000s for Prospect Theory and some other ideas. Now: Not taking a class in economics is not a good recipe for getting a Nobel Prize, if you haven't figured that out -- not a good recipe at all. But the idea is that Danny Kahneman won a Nobel Prize in economics not because he tried to change economics, but because he came up with an account of how people make risky decisions that is influential in economics, as well as lots of other fields -- including psychology, management, the law, medicine and so on. 
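The expected-utility recipe and the three risk attitudes above can be sketched in a few lines of code. Everything here is illustrative: the square-root, identity and squared utility functions are standard textbook stand-ins for concave, linear and convex utility, not anything specific from the course.

```python
import math

# Expected utility: weight each outcome's utility by its probability and
# sum (the normative rule described above), then pick the option with the
# highest expected utility.

def expected_utility(prospect, u):
    """prospect: list of (probability, dollar outcome) pairs."""
    return sum(p * u(x) for p, x in prospect)

coin_flip = [(0.5, 0), (0.5, 100)]  # 50/50 shot at $0 or $100; expected value $50

concave = math.sqrt        # decreasing marginal utility
linear = lambda x: x       # constant marginal utility
convex = lambda x: x ** 2  # increasing marginal utility

# Concave utility -> risk-averse: $50 for sure beats the gamble.
assert concave(50) > expected_utility(coin_flip, concave)   # 7.07 > 5.0
# Linear utility -> risk-neutral: indifferent; decide by expected value.
assert linear(50) == expected_utility(coin_flip, linear)    # 50 == 50
# Convex utility -> risk-seeking: the gamble beats $50 for sure.
assert convex(50) < expected_utility(coin_flip, convex)     # 2500 < 5000
```

Under the linear (risk-neutral) utility, the earlier prospect of a 70 percent chance at +$50,000 against a 30 percent chance of -$25,000 has expected utility 0.7 × 50,000 − 0.3 × 25,000 = $27,500, so a risk-neutral decision maker takes it over doing nothing; a sufficiently risk-averse one may not.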
And what we're going to do is -- what's amazing about this particular theory is that it's both very deep and very intuitive. And what I mean by it being very deep is that when you really understand it, and you can understand it more, you see it everywhere. But ultimately, the psychology that underlies this is something that we all feel at a very basic level -- not all, but mostly feel at a very basic level. So here's the idea. Remember what the normative rule is: we take the probabilities, multiply them by the utilities, and sum them up. We do something that's similar here. I won't say much about one of the pieces, which is that instead of taking the probability as given, what we do is we actually distort probabilities. And what we do is we tend to overweight small probabilities. So when we have, like, a 1 percent risk, we treat a 1 percent risk like a 15 percent risk. And we treat a 99 percent chance of something happening as something like 90 or 85 percent. In other words, things that are possible get inflated and things that are almost certain get deflated. OK? And I won't talk anymore about that, but that's something that's called the decision weighting function. And if we had a longer time, I'd spend probably about half an hour talking about that and talking about applications of it. The second part is what's called the value function. And the value function is really what we're gonna talk about today, and I'm gonna give you a bunch of examples. So here's how most of the rest of this will work. And this is actually, as I suggested, how I explain it in class. I'll talk a little bit about how the sort of Masterclass
-- I think those are the words Kara used -- Masterclass is different from how we would do it in person. But the idea is that what I'm gonna do is give you 10 different phenomena that are quite different in terms of what they do. Some of them are stylized examples, some of them are real-world examples, and lots of different kinds of things. And what's amazing about these 10 examples is that they all can be explained by one basic idea. And what I'll do is, at the end, I'll tell you what that basic idea is. Now the other thing I'll ask you to think about, and I would certainly do this with my students, is I say, "I leave it as an exercise for you to figure out exactly how these 10 phenomena can be explained by Prospect Theory." OK? In other words, your job is not just to have the intuition but maybe to explain that intuition at a deeper level. All right. The first demonstration is what's called the Endowment Effect. And this is actually one of the first experiments in behavioral economics that my colleague Richard Thaler did. He did it with Kahneman and also another economist named Jack Knetsch. And here's how it would work -- and we actually do this. You know, if we weren't on Zoom, we would probably do this in a physical classroom. What would happen is that when you come into the class, you're sitting in a particular seat, and I put a coffee mug in front of half of you, or a key chain or something like that, right? So that's a University of Chicago coffee mug. Some of you like that, some of you have plenty of those things, some of you really love it, whatever. There's lots of difference in valuation of that coffee mug. Now importantly, the idea here is half of you have this coffee mug and half of you don't. And it's been randomly assigned, so the fact that you sat in chair A2 means that you got a coffee mug, and if you sat in chair A3, you don't have one and the person next to you does. 
Now what we do is we ask everybody to essentially give a valuation for that. And in particular, what they do is they give what's called a selling and a buying price. So the people who have one of these things give a dollar amount, which is essentially the amount that they would need to get in order to sell it, and the buyers give the amount that they're willing to pay in order to get one of these things. And what you see here is something kind of remarkable -- which is that the value on the seller's side, the selling price, is $5.25, and the price that the buyers are willing to pay is $2.50. So that's a factor of 2.1 difference. Now again, the thing that's kind of remarkable about this is that the valuation of this mug more or less changed by a factor of two, depending on whether this mug was in front of you and was yours to take home, or whether it was next to you and you had a possibility of getting it. And one of the things is that I do this many times a year, in corporate settings or with students, or whatever, and it always works, in the sense that sellers always insist on a higher dollar amount than buyers are willing to pay. So that's what's called the Endowment Effect, and you can think about whether you can figure out what you think the story is. OK, second example, which is that there are many, many, many, many different kinds of choices in our life
-- maybe this is most of the choices in our life -- in which we're choosing between a status quo, something we have right now, and something that is different. So you have a job: You can continue with that job, or you can take a new job. You have a house: You could move to a new house. You have an organization, and it has a healthcare plan: You can change the healthcare plan. There are lots and lots of different kinds of things that involve a decision between the status quo -- that's what's gonna happen if you do nothing -- and some other choice. And I think one question you might ask yourself is, do we choose the status quo the right amount? Do we maybe choose it too often, too little, just the right amount? And obviously the right answer should be just the right amount, but I'm gonna suggest that there is something called a "status quo bias," which means that we choose the status quo way more often than we should. Now this is what's oftentimes called a natural experiment. And the idea here is that something in the real world is almost like what somebody like me would do in a laboratory. And what's happened is, just by some remarkable coincidence, the things in two situations are really very much controlled -- not perfectly controlled, but very controlled -- and what we can observe is that there might be something that's different across these two natural conditions. And so the two conditions in this so-called natural experiment are two states of the U.S. And I'm looking at you all -- not everybody is from the U.S. But there are two states in the Northeast, Pennsylvania and New Jersey, that are next to each other. They're not perfectly similar, but they're similar in many, many different ways. And what happens is that until the early '90s, they have different regimes for auto insurance. OK? So people have to get auto insurance in order to drive a car, and what they have in Pennsylvania is a very expensive auto policy. 
But if you get in an accident, you have lots of rights to sue other people and maybe recover some of the damages. In New Jersey, you pay a little money, less money, your premiums are lower. But in return for those lower premiums, you have this restricted right to sue. OK? So there is an expensive policy in Pennsylvania that has some benefits and there's a cheaper policy in New Jersey, which has fewer benefits. What's happened as a result of deregulation is that people in Pennsylvania now can look at and choose what people have in New Jersey, and people in New Jersey could now pick what people have in Pennsylvania. In other words, people in Pennsylvania could trade down, and people in New Jersey can trade up. And you would think that, like, roughly speaking,
-- and almost all economic theories say this -- that there's some attractiveness to each of these two options. If the people in the states are pretty close in terms of what they make in income and other kinds of things, then the market share of these two policies should be the same. And it turns out that the percentage of people choosing the expensive policy, the full right to sue, is 75 percent in Pennsylvania, which is where they started. But only 20 percent of people in New Jersey are willing to basically pay more for this policy. In other words, a different way of looking at this is that 75 or 80 percent of people stuck with the status quo. So that's kind of a typical number: 75 or 80 percent of people stick with the status quo. I'll actually return to this example to indicate why Prospect Theory very simply and very nicely explains this. OK: The third example, this is a stylized example. So what happens is you survey adults. You ask them about some hypothetical choices. And the hypothetical choices involve salaries -- hypothetical salary profiles over the next three years -- which are different. And so the idea is that you might imagine in situation one, somebody can get 80, 90 and 100 thousand. In situation two, they'll get 90, 90 and 90. In situation three, they will get 100, 90, and then 80. And for the point of this thought experiment, the idea here is your choice doesn't affect anything else. Right? This is just a matter of how you're gonna get the money. So it can be increasing: 80, 90, 100. It can be flat: 90, 90, 90. Or decreasing: 100, 90, 80. Now simple economics says that by far the best choice should be to get as much money early on as you can. Because of course if you get money early on, then what you could do is reproduce any other wage profile by putting the money in the bank, getting some interest, and being better off. Now this was more compelling in 1989, when there was higher interest. 
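The put-the-money-in-the-bank argument is easy to check with a short sketch. The 5 percent interest rate below is an assumed number purely for illustration; the ranking holds for any positive rate.

```python
# Future value of each salary profile at the end of year 3, with each
# payment banked as it arrives. The 5% rate is an illustrative assumption;
# the ranking (front-loaded beats flat beats back-loaded) holds for any
# positive interest rate.

def future_value(payments, rate):
    total = 0.0
    for pay in payments:
        total = total * (1 + rate) + pay  # grow the balance, then deposit
    return total

r = 0.05
increasing = [80_000, 90_000, 100_000]
flat       = [90_000, 90_000,  90_000]
decreasing = [100_000, 90_000, 80_000]

# Front-loaded pay dominates: $284,750 > $283,725 > $282,700 at 5%.
assert future_value(decreasing, r) > future_value(flat, r) > future_value(increasing, r)
```

With each paycheck banked as it arrives, the front-loaded 100, 90, 80 profile ends year three worth the most, which is exactly the dominance argument that, as the talk notes, most people nonetheless resist.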
But even, you know, you can get some interest even today if you have tens of thousands of dollars. Now contrary to that particular idea, people overwhelmingly like 80, 90, 100 over 90, 90, 90 or 100, 90, 80. And even when they're told, "By the way, you're kind of being a dope, because you could reproduce one of these other things and have a little bit of money left over," very few people change their mind. OK. Fourth example is, people set goals. And it is now February 18th; some of you still have your New Year's resolution goals. If you do, you're maybe the exception. I have what I'm calling a pandemic running goal. I will not share it with you. But in my mind, the anniversary of the pandemic will be March 14th. That's when the world shut down for me. But the idea is that people set goals for themselves, and organizations also set goals for their employees. And the amazing thing is that goals, of course, work. Right? And actually, for the most part, the more ambitious the goal is, the more that people produce. Not universally, but on average, higher, more ambitious goals improve performance. OK. Fifth example -- and we're gonna actually get you to do things, and let's see if this works -- but let me explain the idea here. So what I'm gonna do is, you all have last names; hopefully you all know what your last name is, and you also know whether your last name is A through M or N through Z. If your last name is A through M, look at the question at the top. If your last name is N through Z, look at the question at the bottom. What I will show you in a second is a poll that looks like this. OK? And the idea here is -- I will take this off in a second -- but if you're A through M, you should choose one of the first two choices, either the sure thing or the risky option. And if you're N through Z, you should choose either the third or the fourth. And here's the idea. Here are the choices. You can read them. 
Only read the one that pertains to you, and let us launch a poll on a webinar. I've never done this before, so let's see how it works. There we go. There are many -- now I can see, there are 576 of you. That's kind of an amazing thing. All right. This is fun! Technology. All right. I'm gonna wait 'til I get to 450 and I will close it out. All right. So I'm gonna share these results with you and I'll do a little bit of math in my head. What you see here at the top is there are actually more people A through M than there are N through Z. But about 60 percent of people chose the sure thing at the top. So 34 out of 61 -- a little less than 60, but let's call it 58 percent. And on the bottom one, 38 percent of people were N through Z. And in this case fewer -- about a third of people -- chose the sure thing. So let me show you what typical choices are. A little bit different than here, but what you see here is something that's interesting, which is that in both this particular example here and with other data, many more people choose the sure thing in the question at the top, and many more people choose the risky option on the question at the bottom. Now if you examine this a little bit: if you read the question at the top, you should look at the question at the bottom, and if you read the question at the bottom, you should read the question at the top. And I think that some of you are now figuring out something. You're saying a little bit of "Aha, aha." And the "aha" here is that these are actually more or less the same thing. They're not exactly the same thing. Where they are more or less the same thing is that both choices amount to $1500 for sure, or $1000 or $2000 with the flip of a coin. The only difference is that one is described as a so-called gain. So you started with $1000 and you're looking at positive outcomes. That is A through M. The one at the bottom is described as a loss. You can either lose $500 or lose $1000 with 50 percent chance. 
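That the two framings describe the same choice can be checked mechanically: fold the upfront amount into each outcome and compare the final-wealth distributions. The $1,000 and $2,000 upfront amounts below follow the description above; the code itself is just an illustrative sketch.

```python
# A-M framing: given $1,000 upfront, then gain $500 for sure, or a 50/50
# gain of $0 or $1,000. N-Z framing: given $2,000 upfront, then lose $500
# for sure, or a 50/50 loss of $0 or $1,000. Same final wealth either way.

def final_wealth(upfront, prospect):
    """Fold the upfront payment into each (probability, change) outcome."""
    return sorted((p, upfront + change) for p, change in prospect)

gain_sure  = final_wealth(1_000, [(1.0, 500)])
gain_risky = final_wealth(1_000, [(0.5, 0), (0.5, 1_000)])
loss_sure  = final_wealth(2_000, [(1.0, -500)])
loss_risky = final_wealth(2_000, [(0.5, 0), (0.5, -1_000)])

# Identical final-wealth distributions in both frames:
assert gain_sure == loss_sure     # $1,500 for sure either way
assert gain_risky == loss_risky   # 50/50 between $1,000 and $2,000 either way
```

Only the reference point differs: the top frame describes the outcomes as gains from $1,000, the bottom frame as losses from $2,000, and that description alone flips most people's choice.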
And the amazing thing about this example, the incredible thing about this example, is that if you give the question at the top to people, most people confidently say the sure thing is the right thing to do. If you give the same choice, the one on the bottom, to another group, they confidently say that taking the gamble is the right thing to do. Only when they see these things side by side do they realize there is something inconsistent -- or let me say something stronger, maybe incoherent -- about choosing both of these things. And it's inconsistent or incoherent because at the end of the day, $1500 is $1500. And I'll describe it a little differently: Nobody asked, when they were choosing this, whether I was gonna give you $2000 and ask you for $500 in change, or give you 15 $100 bills or whatever, right? And of course you don't ask, because that doesn't matter. Maybe you don't actually have $500 in change, but that doesn't matter. That's the idea. OK. Sixth example is that there are things that you do when you buy consumer products that involve what you might call "Improvements or Tradeoffs." Now: So I have this iPhone; those are my kids. I actually just got this iPhone but, you know, let's suppose it will break; I have children, maybe my children will break it. Anyway: The idea is I could get a different kind of iPhone, a better iPhone. Of course iPhones get better over the years. Or I could get a different kind of phone. And when you get a new phone within the same category, that involves improvements. So you know, phones for the most part get strictly better over the years. Whereas if you trade from an iPhone to, you know, an Android phone or Samsung phone or whatever, that involves some good things and some bad things. Let's call that tradeoffs. So let's explore that idea. And this is a study -- and the way the study is done is that you come to the lab to do something else.
And then you're told at the very end, "I'm gonna give you some reward." And one group of subjects gets a reward, which is a nice free dinner at a nice restaurant plus a Stanford calendar
-- these questions all make more sense pre-COVID, when you can enjoy this -- or a professional photo, an 8 by 10. For those of you who are metric people, that's basically a sheet of paper, plus a Stanford calendar. Now this study is done at Stanford. The Stanford calendar is there on purpose, because nobody really wants a Stanford calendar. But what you're gonna do is you're gonna give people a chance to exchange what they get for something else. So what you can now get is two free dinners at the same nice restaurant, or a set of professional photos that includes a large photo, two medium photos, and three small photos. Now notice here that whether you started with a dinner or a photo, you look at this differently. If you started with a dinner, then one of them is an improvement: getting two free dinners is better than one, if you think of the Stanford calendars as not really being anything. The other is a tradeoff: you give up a dinner and you get some photos. Whereas if you start with the photos, the improvement is the photos, and the tradeoff is the dinner, because you give up your photos and you get a dinner. And here's what happens, which is that if you look at the people who choose the restaurant, there's about a 30 percent difference between people who started with a dinner and people who started with photos. In other words, people seem to prefer improvements to tradeoffs. And you can maybe try to figure out why that is. OK. Seventh example -- and seven and eight are two very different kinds of examples. So "Putting Behavior," this is a study that was done by Maurice Schweitzer, who is a buddy of mine at Wharton, and Devin Pope, who is also a buddy of mine and one of my colleagues. And some of you know a lot about golf, some of you know very little about golf. And what I will do is I will describe a couple of institutional facts about golf. So one is that there is something that is called the PGA Tour.
So the PGA Tour is the Professional Golf Association Tour, mostly tournaments in North America. These are widely regarded as the best golfers in the world. This is a study of 2.5 million putts. And a putt is a shot from a very short distance from the hole, and it is something that happens on the green. There's a particular instrument called a putter, and this is what it looks like. All right? So those are a couple of PGA golfers who are in the data set. Now it turns out that one other thing that you may or may not know about golf is that every hole corresponds to some par number: either three, four or five. So three is a short hole, four is a medium hole and five is a long hole. And the idea is, par is what you largely would expect to get if you were a professional golfer. If you do better than you expect, or than they expect, you get what's called a birdie. So that's something that's called under par, and you could think of that psychologically as a gain. Or you could get par, or you could get something that's called a bogey or double bogey. Those are things that are over par. And here's the question to think about. It turns out that an eight-foot putt -- something like a 2.4 meter putt for those who are metric -- is a putt that is roughly a 50-50 putt for a professional golfer. And the question is, does the probability that a professional golfer makes that putt change depending on whether that putt is for a birdie or for par, holding everything else constant? And it turns out that it does. And what you see here is, for every distance from the hole,
-- this is in inches, all the way to I guess 16 feet, which is about five meters -- you can see that they are more likely to make the putt if they're putting for par than if they're putting for a birdie. Now it turns out that one of the things that seems to go on is that PGA golfers tend to be much more risk-averse when they're putting for a birdie. And one way that you see that
-- and this is actually something that you probably need to know something about golf to appreciate -- is they're much more likely to leave the birdie putt short than the par putt. And leaving it short is the conservative thing to do. So they are 3.6 percent less likely to make a birdie putt than a par putt. And if you think about what the implications of that are
-- this is based on 2009 numbers, so the numbers are higher now because people get paid more -- if PGA golfers putted the same way for birdies that they did for par, the top 20 golfers would each make about $1 million more. OK: Eighth example. This is something of mine; it involves me, my colleague Devin Pope, and two accounting scholars from USC. This involves marathons. So I'm guessing that more than a few of you have engaged in some endurance activity. Some of you have run marathons; some of you have run half-marathons and other kinds of things. So this is a study of about 10 million marathon runners. Actually, that's the published paper; I'm gonna show data that involves more than that. And there's a couple of necessary facts that you might know -- these are not surprising facts -- which is that most runners choose goals. Do you have a goal for a finishing time? For finishing a marathon? And these goals actually tend to be round numbers. So 28 percent of runners select hour round-number goals, like four hours. Many people who don't do that choose half-hour goals or 10-minute goals. And here is a histogram of 16 million marathon times. So if you ran a marathon between 1970 and 2018, you are in this particular picture. So what you see here is not a typical distribution. In fact, the distribution that you typically would see, the so-called bell curve, looks like that red line. Instead, if you're having a hard time figuring out what's going on, notice that there are big spikes here at the hour marks -- like three hours -- and there are less pronounced spikes at the half-hour marks. And let me take those off to show you: there are about 50 percent more runners who run 2:59-something-something than three-hours-something-something. OK? So there are 50 percent more runners who finish just below three hours than just above, slightly less as a proportion for four hours and so on. And this is true for everything.
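The bunching statistic behind the "just below three hours" claim is simple to compute. Here is a sketch using made-up finishing times (the real study used millions of recorded finishes); the function and the ten-minute window are my own illustration:

```python
# Compare how many finishers land just below a round-number goal versus
# just above it. The times below are invented for illustration only.

def bunching_ratio(times_min, goal_min, window_min=10):
    """Ratio of finishers in [goal - window, goal) to those in [goal, goal + window)."""
    below = sum(1 for t in times_min if goal_min - window_min <= t < goal_min)
    above = sum(1 for t in times_min if goal_min <= t < goal_min + window_min)
    return below / above

# Hypothetical finishing times in minutes, clustered so more runners
# squeak in under 4:00 (240 minutes) than finish just over it.
times = [232, 235, 237, 238, 239, 239, 241, 243, 246, 249]
print(bunching_ratio(times, goal_min=240))  # 6 just below vs 4 just above -> 1.5
```

A ratio of 1.5 corresponds to "50 percent more runners just below the mark than just above," the pattern the lecture describes at three hours.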
So you know, we have campuses all over the world. We have a campus in Chicago. It's true for the Chicago Marathon, which is one of the five biggest marathons in the world. It's true for the London Marathon, which is the second biggest marathon in the world. It's true for the Hong Kong Marathon, which is a little weird because they chop off times at six hours; Warsaw, lots of different kinds of things. And let me show you a couple of things that are really interesting about that. Which is that I'm gonna show you a plot of how fast people run over the last, basically, two kilometers of the race. So a marathon is 42.195 kilometers -- 26 miles and 385 yards for those of you who are English-inclined. The last 5 percent of the race -- OK, the last split you get before you finish -- is usually at 40 kilometers. So about a mile, 1.4 miles, before the finish line. What I plot on the x-axis is the simple extrapolation of what your finishing time would be if you keep the same pace that you had for the first 40 kilometers and run the last two kilometers. And here's how much people on average slow over the last two kilometers. They slow quite a lot. They're tired. Now not everybody, but on average, people run about 6 percent slower over the last two kilometers than they did in the previous 40. That's maybe interesting. But what's interesting -- and more interesting -- is how much they slow as a function of proximity to this magic number. Now I'm gonna show you a bunch of things where it looks just like average. So if you're on pace to run 3:47 or 3:52 or 4:03, it looks the same. Here's the thing that's amazing: What you see here is that the people who are on pace to run something slightly south of four hours, they have a big incentive -- not to run faster, but to not slow down as much as other people. Those who are just north of four hours are pretty demotivated. They're probably not gonna make the goal, and as they get further and further from the goal, they get more and more demotivated.
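The x-axis variable just described -- the extrapolated finishing time from the 40-kilometer split -- is a one-line calculation. A sketch, with a hypothetical split:

```python
# Project a marathon finishing time by assuming the runner holds their
# first-40km pace over the remaining 2.195 km. The example split is
# hypothetical.

MARATHON_KM = 42.195

def projected_finish(time_at_40km_min):
    """Extrapolate finishing time (in minutes) from the 40 km split at constant pace."""
    return time_at_40km_min * MARATHON_KM / 40

# A runner through 40 km in 225 minutes (3:45 elapsed) projects to just
# under four hours:
print(round(projected_finish(225), 1))  # 237.3 minutes, about 3:57
```

In the plot, runners projecting just under a round number (like this one) are the ones who slow down the least at the end.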
So that's the pattern around four hours. Let me just show you one other thing. This is what the pattern looks like over the whole -- all, well not all the marathoners -- or almost all the marathoners. And here's the amazing thing about this, is that you could almost read off the clock by seeing where this sawtooth pattern happens. OK that is number eight. We have two more. Nine: Tax Sheltering. So in the U.S.
-- not always true in other countries -- we file income taxes around April 15th. And what happens is that a lot of people declare deductions. And you declare a deduction when it's advantageous to do so. Some of it involves things like finding charitable contributions and so on. Now most people are pretty disorganized. I'm disorganized. I'm a little less disorganized electronically than I used to be when everything was on paper. But whether it is electronic or paper, what people do is they usually are bad filers, and they throw what you might call the receipts
-- in the old days, they went into a shoebox -- and now they show up in emails and folders on the computer. And what happens is that if you wanna find deductions that you could legitimately take, it's kind of a pain. I have to hunt through a bunch of emails to find things. I need to talk to my wife to ask her about things, whatever. Now here's the idea. We're gonna show you a bunch of U.S. tax filers, 230,000 or so of these people. All these people have some tax liability, meaning that they could get a refund or they could owe the government some money. And what I'm gonna show you is the balance due. Now one of the things here is you can make adjustments, as I said, in lots of different ways. Some of these are, let's just say, ethical. Some of them may not be -- I'm not gonna ask, I'm not gonna get into whether these things are OK or not. Here's what the distribution looks like. Now, you know, there are lots of kind of weird things in this thing. Notice where zero is. In fact, there is a line there at zero, which looks like it may be a copying error or some stray thing -- and that's this: there is a lot of what we're gonna call excess mass at zero. And here are some statistics about this. There are about 136 percent more filers who are just above $0. And notice what $0 is. It's the difference between owing the government money or getting a few pennies or dollars back from the government. This is true for all filers of all types: wealthy, poor. It's actually strongest for the wealthy, who may actually have more discretion on charitable contributions and things like that. OK: The last thing is a finance example called the Disposition Effect. And what happens is you may own stocks. Some of those stocks are good, some of those stocks are bad. So here's a thought experiment to think about. You have some money in stocks; you have one stock in which you bought a bunch of shares for $60,000. It's now trading at $40,000.
You have a bunch of shares that you bought at $20,000, which are also trading at $40,000. You need $40,000 to put down as a down payment on your house. You could sell half of each of the stocks, or you could sell what we'll call the winner, the one which went from 20 to 40, or the loser, the one which went from 60 to 40. And here's what the data look like. Over the entire year, what people do is they are about 50 percent more likely to sell the winners rather than the losers. Those of you who know something about this know that it's actually better to sell the losers, because there are tax benefits that you could get. And in fact one of the interesting things is the only time this pattern reverses is at the end of the year. And it reverses at the end of the year because people are very, very attentive to what their tax liabilities are. OK: So people are much more reluctant to sell losers rather than winners. So, many examples: We have the Endowment Effect; Status Quo Bias; Preference for Increasing Income Sequences; goals work. People are risk-seeking when things are losses, risk-averse when they're gains. People like Improvements versus Tradeoffs. PGA golfers are much more likely to make par putts than birdie putts. Marathon finishing times bunch right below round numbers. Tax balances tend to bunch around $0. Investors sell winners rather than losers. OK: Actually, one more example. Let me see if I can do this. We're gonna do one more example. Would you take a gamble that offered a 50 percent chance of winning $1,000 and a 50 percent chance of losing $500? So here you go. What would you do? You are more risk-taking than my MBA students, so that's interesting. With my MBA students, this is exactly 50-50. Not always exactly; it's maybe slightly more than 50-50. For you, you like to take this gamble. Now notice here, this is obviously an attractive gamble in expected value.
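The arithmetic behind that last gamble is worth making explicit. In plain expected value it is attractive, but if losses are weighted about two to one -- the average loss-aversion ratio the lecture cites -- it nets out to exactly zero, which is consistent with MBA students splitting roughly 50-50 on it. A sketch (the function names are mine):

```python
# Evaluate the 50-50 gamble from the talk: win $1,000 or lose $500.
# First in plain expected value, then with losses weighted twice as
# heavily as gains.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def loss_averse_value(outcomes, lam=2.0):
    """Same expectation, but each loss is multiplied by lam (here 2-to-1)."""
    return sum(p * (x if x >= 0 else lam * x) for p, x in outcomes)

gamble = [(0.5, 1000), (0.5, -500)]

print(expected_value(gamble))     # 250.0 -> attractive in expected value
print(loss_averse_value(gamble))  # 0.0 -> a loss-averse chooser is indifferent
```

The piecewise-linear value function here is a simplification; it ignores the diminishing sensitivity the lecture discusses, but it is enough to show why a two-to-one loss weight makes this particular gamble a toss-up.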
But it's a gamble in which you may lose a fairly large amount of money and you're compensated with some upside. All right, so let me take this off. Let me tell you how you explain all these things and then we'll take some questions. This little picture on the right explains all these things. And let me tell you what the properties of this are -- which is that outcomes are defined in relative terms, relative to a reference point. Sometimes that reference point's a status quo. Sometimes a reference point is what somebody else makes. Sometimes the reference point is your expectations. So let me give you an example: You get a $10,000 bonus. You were expecting to get a $15,000 bonus. You were expecting to get a $5,000 bonus. A $10,000 bonus is seen as a $5,000 loss if you were expecting to get more, a $5,000 gain if you were expecting to get less. OK? Same outcome objectively seen as something that is positive for one person and a negative for another. Second idea is, people are what we call loss-averse. So many of these examples are ones where people weigh losses more than they weigh gains. In other words, losses are more painful than gains are good. Now the kind of magic number to think about is two to one. And over many, many studies, on average, losses are twice as painful as gains are good. Now you saw that in the very first example with the so-called Endowment Effect. The selling price for a mug, $5.25, was twice as big as the buying price that people are willing to pay. OK: So let me give you a kind of thought experiment that I always share. Imagine two things happen to you one day. You get a surprise birthday check from your eccentric uncle, who actually doesn't have any idea when your birthday is and just sort of magically mailed you a check for $100 on some day. So that's pretty nice. 
And you also discover that you left your car on the street one day when you shouldn't have -- and there's a big fat parking ticket that says you owe Chicago or Minneapolis or New York or Sydney or Beijing $100. OK, so two things happened today: a $100 gain, a $100 loss. Do you feel neutral today? And the answer is, probably not. Most people feel that the $100 parking ticket is way more painful than the $100 birthday check is nice. And in fact, the way I like to say it is that for most people to be neutral for the day, they need $200 from their crazy eccentric uncle to compensate for the $100 loss. Now one more thing: I described a couple of examples, Improvements versus Tradeoffs and Status Quo Bias. One of the characteristics of those things is that any change involves what I'm gonna call Upgrades and Downgrades. So it involves things that are gonna be better for you and things that are gonna be worse. Now if it involves just upgrades, then it's an easy choice. Almost nothing involves just upgrades, so it's only hard if it involves some tradeoffs. So Upgrades and Downgrades and Loss Aversion explain quite simply why we don't make changes very often. Because in order for a change to be attractive to us, there have to be two times as many upgrades as downgrades. And you know what? For a change to have two times as many benefits as it does costs, that's a pretty extraordinary change. The last idea is there is so-called diminishing sensitivity away from this reference point. In other words, just like you saw diminishing marginal utility for gains -- this thing flattens out for gains -- it also flattens out for losses. So let me give you a kind of thought experiment about how that works. I teach negotiation, and I'm gonna interject a stylized negotiation for you. Maybe not a stylized one, maybe one that you've done. You're buying a car, you negotiate, blah, blah, blah, blah, blah.
You have agreed to buy this Toyota for $32,200. Now of course if I'm a car dealer, I'm gonna sell you some more stuff if I can. And let's suppose there is an upgrade in electronics. And I can describe that upgrade in electronics in one of two ways: I can say that this upgrade in electronics is gonna cost you $720, or I can say that the upgrade in electronics is gonna change the price of your car from $32,200 to $32,920. Now both of those are true. There's nothing fraudulent about either one of those. There's nothing that's misleading, per se. But they're very different psychologically -- $720 in isolation feels like a lot of money; $720 when it's embedded in a price tag of $32,200 kind of disappears. So that's the idea of diminishing sensitivity. When we add things to big numbers, we are much less sensitive. When those numbers are in isolation, they turn out to be quite impactful. So a few things. I ask my MBAs at this time to do what I call off-line processing. So identify a situation in which you or your organization suffered from Status Quo Bias. And then these are what I call "connect the dots" -- oh no, this one is "do the math" -- how does Prospect Theory explain all of these things? All right, so I have taken a little bit more than an hour, so I want to give you an opportunity to ask some questions. Let me just say very briefly how this is different. So if we were in the real classroom, this would be much more interactive. Obviously there's a limit on what we can do in this kind of webinar format with 556 people. The lecture also would be followed by case discussions. And what I try to do is give you the flavor of these things by giving you many different examples. But the idea also -- part of what makes all this stuff powerful -- is your ability to recognize these concepts so-called in the wild, in real situations. So we use a business example in which some of these things were kind of embedded.
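Going back to the car example for a moment: the diminishing-sensitivity point can be sketched numerically. Square root stands in here for a generic concave value function -- the lecture doesn't commit to a functional form -- so the exact numbers are only illustrative:

```python
# Diminishing sensitivity: the same $720 feels large in isolation but
# almost disappears when added to a $32,200 price tag. sqrt() is an
# arbitrary concave stand-in for the value function.
import math

def felt(x):
    """Illustrative concave 'felt' value of a dollar amount."""
    return math.sqrt(x)

in_isolation = felt(720) - felt(0)
embedded = felt(32_200 + 720) - felt(32_200)

print(round(in_isolation, 2))  # 26.83
print(round(embedded, 2))      # 2.0
```

Under this (made-up) curve, the same $720 has more than ten times the psychological impact when quoted on its own than when folded into the car's price, which is the dealer's framing trick.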
And part of your job is to recognize, and part of your job is to apply, and part of your job is to figure out how you could use these to your benefit. Now some of these examples I think are interesting because they pit psychology against economics. And the idea is that -- I'll say in the last slide -- Booth is this place where we share disciplinary perspectives. And I love economics; and I love statistics; and I love accounting; and I love psychology; and they aren't strong or weak or better or worse -- well, psychology's the best -- but they're not better or worse, per se. They're different perspectives. And one of the things is, having you learn a bunch of different perspectives but also adjudicate between these different perspectives, I think, is really part of the education. And the last thing is that students would take some of these materials and, off-line, they'd do write-ups that might get them to apply these concepts to a stylized business example. And here's an example: You are the HR director at a large consulting firm and you're trying to use these concepts to improve recruiting and retention at the firm. What are you gonna do? OK: last idea. So Kara said something a little bit about what we do here at Booth, and, you know, I think this is an amazing education that we have here. But it's also not an education that I think is for everybody. And so I think it's important to recognize what we do and also recognize that it's OK if this is not exactly something for you. Booth faculty are world-class researchers, and I think that's an important thing. And you know, it's sometimes not quite obvious to students why it's important that people do research. I think it's important that people do research partly because their minds are actively thinking about these ideas. So I think about decision making -- I shared with you some of the things that I've done; I actually study goals and how reference points work in goals all the time -- and I think about that all the time.
And I think because I think about that and related ideas, the course is fresh. Every time I teach the course it's different. And my thinking evolves, and one thing is that students get state-of-the-art thinking. But they also get the passion of a faculty who really, really, really wanna think about nothing other than these ideas. The second thing is -- you'll hear this a lot -- discipline-based education. And let me unpack that a little bit. The way I like to think of it is that we're really teaching you to understand basic ideas -- but not just any basic ideas -- basic ideas that are relevant. I always try to think, like: I have this kind of contract with students, which I think is great -- which is that students are willing to indulge me in hearing me talk about basic ideas, if I've done the thinking to know that those basic ideas are things that will make them better managers. OK? And that is a pretty amazing thing. And that's not something that exists in other schools. And I think, as Kara said at the very beginning, it really sets our students apart. Our alums have this confidence to address really hard -- and not just hard, but novel -- problems. Ones that you haven't seen before, and therefore there's not something to lean on in order to help you understand. And I think that's where having these kinds of basic principles and understanding them deeply really makes a big difference. So we have a few more minutes, so thank you to all and good luck wherever you are. Stay safe. I will take some questions.
Kara: Great. Thank you, George. And I can, I'll read some of the questions to you and feel free to scroll down the Q&A if you wanna see some yourself, but thank you. That was a great presentation. So I'll just read these verbatim: "How does something like the status quo affect a firm's desire to hire a consulting firm to modify their processes? Is it higher due to needing an external factor to justify change, or lower due to a desire to not change fundamentally?"
George: That's a great question. You know, the answer is I don't completely know. I think, so one thing is that presumably somebody wants to hire a consultant, right? And that might be a board, that might be managers. And I think that sometimes they need external cover; they need justification; they need evidence; they need somebody else to administer the hard medicine. In class we often talk about
-- this is something that Americans know about -- so-called Obamacare. So put aside the politics of this and just think about this as a change like we talked about. This is a big change, and it involves good things and bad things. And one of the things is that it should be clear to you that the way you would choose a policy for the government would likely be very different if you're changing from situation one to situation two -- or if you've never had anything in place before and you can just choose number one or number two. And that's what Status Quo Bias is. Now that's a bias in a lot of cases, because there really are things that are better for your organization, and things like that. And I think that astute question points out that sometimes what people need is people to sort of argue for it. They need people to be persuasive. They need people to create cover for them. So, great question.
Kara: Great. Thank you. So Flavio asked, he said, "I'm loving this class, thank you." Two questions and I'll do them one at a time: "Is it true that losing effects, or excuse me, losing affects people twice as much as winning on average?"
George: I think there are two things that are important. One is how much the prospect of losing -- you know, basically, let's call it the anticipation of a bad outcome -- how much that affects your choice. And then second is actually how it feels. So I think both things are true, is that people are really afraid of bad outcomes. Now this is especially true in organizations. We didn't really talk about this very much, but one of the things that you might say is that most of you know that people probably don't take enough risk in organizations. And one of the reasons that they don't take enough risk in organizations is because they're gonna get yelled at if there's failure. And that's unfortunate. If you're gonna get yelled at when there's failure, then you don't do stuff. Now one of the things also to think about is, sometimes the fear of failure -- is that exaggerated? For sure. And one thing that I think some organizations do is they recognize this fear of losses being very painful. Losses are gonna be painful. But maybe they're not quite as painful as you think. Maybe you've psyched yourself out of taking on risk. And maybe one of the things for you to learn is that it ain't so bad to take a loss once in a while. Or that there are good things that happen with failure. There's learning and things like that. And so I think there is this bias that, for sure, losses loom larger than gains, both in anticipation and in reality. But there's also a bias that they probably loom larger in our imagination than they loom when we actually experience them.
Kara: And then the second one zooms out a little bit: "What could we expect from the application of behavioral psychology and economic studies in the coming years?" So, like, any areas you're seeing, are researching, any applications?
George: Yeah, it's a great question. I think the place where it really is being used the most is in policy. And I think that you see this inside organizations. You can see this in governments. You can see this in developing countries. You see this all over the place. And one of the ways to think about this is that, you know, we do things all the time as organizations. We try things, we run copy, we try to figure out whether A is better than B or better than C or whatever. People send out emails, they try to be persuasive. But you all know what this is like. You know, Kara says, "Should we use this word?" And I say, "No, I think we should use this word." "Don't you think this would be more effective?" "No, I think this -- but I get your point." I'm not really sure which one we should do. And historically those things get won in some ways by whoever's the stronger and most persuasive, or the bossiest. And I think what happens now, both in organizations and governments and stuff like that, is people just say, like, "Let's take good guesses about what might work and let's just run experiments." So people know about this idea of A-B testing or whatever. Sometimes these are called randomized controlled trials. You just run these things. And sometimes you get what you expect and sometimes you don't. And in some sense I think the insight here is that organizations now have the capability of running experiments nimbly, instead of just the horsepower to try to figure out what's going on. I know what I would bet on, which is that it's really hard to predict what's going on. You know, I've studied this stuff for 20 years. I'm always saying, "Wow, that's not what I would've expected, but I'm glad to see the data." And I'm probably better at it than you are, because I have many years of behavioral science, but I'm not as good as I think I should be, because the real world's complicated.
Kara: Yeah, yeah, it sure is. So there are a couple versions of this question: "When an organization is afraid of risks or losses, how can a good decision maker approach that similar question? How do effective leaders push past the fear of loss?" So any advice when you're up against risk-averse organizations or individuals for that matter?
George: I always tell people not to conflate the two. And here's one insight: People think of good decisions and good outcomes as being synonymous. So if something good happened, then it must be because you did something right. If there's a bad outcome, it must be that you did something wrong. And one implication of that is that people are afraid of taking risk. And one way I like to think about it is that there are two basic asymmetries in organizational life. There is an asymmetry between failure and success, and you all know what that is, and we talked more about that. And there's a second asymmetry between errors of action and errors of inaction. Right? You get blamed for errors of action; you seldom get blamed for sitting around and doing nothing. And if you put those two together, what it suggests is that unless the deck is really stacked in your favor for a good outcome, you're gonna sit on your hands. So seven out of eight outcomes are good? Not good enough. Nine out of 10? OK, maybe. And so what that suggests is that managers, when they blame their employees for failures, are contributing to this problem. They are contributing to this problem. Now a second thing is... how do you get around this? Well, part of it is -- and I've seen organizations do this -- they reify the situations that are bad outcomes but good decisions. So: this outcome is bad, but I want you to do that again. I don't want you to get a bad outcome, but I want you to do what you did to make that decision. Because you did all the right things in making the decision; you just got unlucky. If we keep on doing that, luck will be on our side more often than not. OK: We got unlucky there. Let's always remember that. When we reify the good outcomes, people are risk-averse. When we reify the good processes, that's a very different thing.
Reifying the good processes that come with bad outcomes is very hard to do, but I think it's something that some organizations have done.
Kara: Great, thanks. Could you expand a little bit more on what we mean at Booth by discipline-based thinking, and why is that meaningful to our students at Booth?
George: Good, good question. So, you know, we have a lot of so-called disciplines. The idea is that there are ways, or defaults, that people use to organize the world. A lot of people are economists. People who teach strategy are economists. People who do marketing are economists. People who do finance are economists. They use the lens of economics to understand problems. Now, when I say they use that lens, it doesn't mean they use only that lens -- the school is also an incredibly interdisciplinary place, and people in their own research use different lenses. But the idea is that we're gonna show you a perspective for understanding things. So, for example: If you wanna change what people do in organizations, here are two very, very different ways of thinking about it. We can change people through norms: We just make something the expected social behavior, and you can all understand why you do lots of things because of what other people do. Or I can get people to do things because I'm gonna reward them for doing those things. Those are completely different perspectives. Now, both perspectives are right. Both give you insight into some things. But it can't be that norms and incentives are each the right thing to do in every situation. Part of it is to understand what kinds of situations make one better than the other. So that's the idea: There's a point of view, a strong lens, for understanding business problems of many, many different types. But there's also this courage to ask yourself, "Is that the only lens I should be using for thinking about this situation?"
Kara: Great, we'll squeeze in one more. We've got three minutes left, and there are a few versions of this as well: "When there are decision-making biases, do you see differences across industries? For example, a capital-intensive industry versus a low-capital industry, like a software company: Is there any research on that?"
George: You know, there's research, and it's hard to characterize. One way I would say it is this: The clearer the data, the less room there is for psychology. Now, there's a big caveat on that, because we know that most data are pretty ambiguous. But in a lot of situations, if there's a lot of data, then we can answer questions and there's little room for psychology. Now think about an area many of you are interested in, which is entrepreneurship. Right? Entrepreneurship is by definition something where there ain't very much data. You're doing something new and novel, and so it's all about judgment. It's all about making sense of stuff where, you know, my view about things is gonna be different than Kara's, let's just say that. That's one way to think about it: In areas like that, smart people -- not because they have less or more knowledge, just because they think about things differently -- are gonna come to different opinions. That's just natural. And there have gotta be ways that are better, and there have gotta be ways that are worse. But business is by definition subjective. Subjective doesn't mean bad; it just means that smart people are gonna reach different conclusions. And you want your conclusions to be as good on average as possible.
Kara: Yeah, that's a fair point. Yeah, I appreciate that. Yeah, great, thank you. That's helpful. We're right at 7:30 so I wanna give a quick shout out to my wonderful colleague Kim Epps, who has been organizing all these events for myself and George, so thank you, Kim. And George, thank you so much. It's been a pleasure. I miss seeing you around.
George: I miss you too, Kara.
Kara: It's nice to see you on screen for a while.
George: It's great to see you.
Kara: It's been a pleasure -- and everyone on the session today, thank you so much for spending your time with us, whether it's morning, evening. Hope you have a great day or night, and feel free to follow up with the admissions teams with any questions at all. We're more than happy to connect. So thanks again, and we'll close out from there. Have a great night or great day.
George: Yeah, thanks for coming and everyone stay safe.
Kara: Yeah, absolutely. All right, thanks. Bye everybody.