Lecture 7: Risk Preferences I


Description: In this video, Prof. Schilbach describes how economics looks at risk preferences, that is, choices involving risk. Specifically, he covers the topics of risk aversion, expected utility, absurd implications, and small- vs. large-scale risk aversion.

Instructor: Prof. Frank Schilbach

 

[SQUEAKING]

[RUSTLING]

[CLICKING]

 

FRANK SCHILBACH: All right. Welcome to lectures seven and eight. We're going to talk about risk preferences. Hello? We're going to talk about risk preferences-- in particular, sort of from the perspective of expected utility, which is sort of the classical way for economics to view this. This is going to take one and a half, perhaps two lectures. Broadly speaking, we're going to look at what economics usually assumes: how does economics think about risk preferences, about choices involving risk?

How do we think about sort of measuring risk preferences? What are some of the implications, and what are some of the limits of risk preferences in terms of what we can explain and what we cannot explain? And then in the following lectures we're going to have an alternative model of reference-dependent preferences, where essentially we're going to relax or change some of the underlying assumptions about how to model preferences related to risk.

Problem set two will be posted soon-- later this week. I'm sure you can't wait for this. A reminder, late submissions will not be accepted. This is always like a commitment problem, because people come up with all sorts of good excuses why they submitted the problem set late, and then I kind of feel bad about it. So I'm hereby committing to not accepting any late submissions for whatever reason, unless you have sort of a medical excuse or an official excuse note. So no "my PDF was corrupted" and all sorts of other good reasons. I was a student once as well, and I know students are very creative.

We will post previous problem sets, midterms, and finals for you to practice overall. I think of the problem sets less as a way of testing you, or testing whether you can do it or not, but rather as a way for you to practice things and check whether you have understood the material, and the past problem sets, midterms, and finals will sort of help you with that. As usual, please ask questions on Piazza or come to office hours.

So what we're going to talk about is, broadly speaking, risk aversion. How do economists think about choices involving risk? Then I'll outline the very simple or basic model-- the main workhorse model of economics for thinking about choices involving risk-- which is the expected utility model. We're going to then think about how we measure risk preferences, the underlying preference parameters that are embedded in this model. That's going to lead us to some absurd implications, in particular a discrepancy in how people tend to think about small-scale and large-scale risk aversion.

What I mean by that is essentially small gambles that involve a few dollars versus really large-scale choices that involve thousands of dollars. And what I'm going to show you is very similar, to some degree, to how we thought about time preferences, where the typical exponential discounting model cannot explain both the short-run and the long-run time preference decisions that people make-- as a calibration matter, that's really very hard to do. Similarly, the expected utility model has problems with reconciling small-scale and large-scale choices that people make. I'll tell you that in more detail.

But in short, the summary is: if people are risk averse when it comes to small gambles, that implies that they're absurdly risk averse when it comes to large gambles, if you take that model seriously. And that's just not true in reality. And so then we sort of think about how to relax those kinds of assumptions. OK. So first let's think about what kinds of choices and decisions do in fact involve risk and uncertainty. What examples do we have in your life? What is risky or what would involve uncertainty? Yes?

AUDIENCE: Whether or not [INAUDIBLE].

FRANK SCHILBACH: Right. Whether or not to get education. Whether or not to study and so on. Because the reward often is like uncertain. Right? You might get a job. You might not get a job. You might do really well in college. You might not. And so on and so forth. So the costs-- you might like it, you might not like it, and so on. It's not clear. So the costs and benefits are uncertain. There might be a recession when you graduate. And so on and so forth. So the returns and costs are both uncertain. Yes?

AUDIENCE: Making large purchases, like buying [INAUDIBLE]. Because you don't know [INAUDIBLE].

FRANK SCHILBACH: Yeah, exactly. You might buy a house or you might think about buying or renting more broadly. And there it depends a lot on essentially what's happening to the housing market. If the housing market goes up or down, the choice to buy a house versus renting is very different. Right? So if the housing market goes up, most likely you should probably buy a house. If the housing market actually happens to tank, the [INAUDIBLE] [? at ?] [? least ?] it's a bad idea to do so. Yes?

AUDIENCE: [INAUDIBLE] [? renters ?] insurance.

FRANK SCHILBACH: Right. So the two choices that we have so far were essentially choices involving risk where you sort of essentially decide to do something where the outcomes are uncertain or risky in some way.

Now, what you're saying, in some sense, is that a bunch of other choices, potentially, are risk mitigation strategies. So you have, essentially, certain risks in your life. And you have choices where you could say, I could buy insurance. Or I could sort of make other choices that mitigate or reduce the risk that I'm exposed to. And one sort of canonical example of that is purchasing any type of insurance but, in particular, renter's insurance, as you mentioned. Yes.

Yeah?

AUDIENCE: For farmers, the crop they choose to grow.

FRANK SCHILBACH: Yes. So essentially-- in development, or development economics in particular-- there are lots of issues around the production choices that people make. This could be what crops to grow, whether you should buy a machine, whether you should start a business, and so on. So there are all sorts of choices in terms of what business to go into, what kinds of specifics, what product to sell if you have a business.

And in the farmer case, what crops should you grow? Should you buy fertilizer? Should you use other inputs and so on and so forth. Should you do [INAUDIBLE] there's a bunch of different other-- should you intercrop? There's a bunch of different other choices that you could make that essentially involve risks, because the outcome, essentially, is uncertain, because essentially, the season might be good or bad.

In the farmer's case, if you, for example, purchase fertilizer or a certain-- for example, you could purchase or use drought-resistant crops. Of course, if there's no drought, then that's not really that helpful. But if there's a drought, that's really high return to do. Yes? Yes?

AUDIENCE: Creating an investment portfolio.

FRANK SCHILBACH: Creating a--

AUDIENCE: Investment portfolio.

FRANK SCHILBACH: Yeah, so investing your money in a different-- in the stock market or in other sorts of bonds or the like-- how should you invest the money? That's kind of related, to some degree, to the renting versus buying a house. You can think of buying a house as one asset, one potentially very illiquid asset that you could buy. Similarly, you could buy stocks. You could buy bonds. You could keep just cash, and so on. And there, the return depends on things that are out of your hands, which essentially is just what the stock market might do.

So I think once you think about choices involving risks, essentially almost any choice in your life actually is risky to some degree or uncertain, ranging from going to college, doing problem sets, studying for exams. Which exams should you study for? Which questions are people going to ask?

Health decisions-- should you invest in your health or not? For a lot of diseases that people might get, it's often very uncertain. So even if you're smoking a lot, not every smoker falls sick or the like. It's just that the risk of cancer and other diseases increases. Then there are financial investments, dating choices, and so on and so forth-- even friendship choices are, in some sense, risky. If you want, riding a bicycle-- that's a very sort of small choice-- wearing a helmet versus not.

Essentially, almost anything in your life-- if you think about what the outcomes are, what the different choices are that you have, and what the outcomes associated with those choices are-- almost all of those choices are associated with uncertainty, in the sense that you're not quite sure: is the return going to be high or low? Then, as I said before, in addition to that, there are risk mitigating strategies, in the sense that you have a lot of choices in a lot of issues that are associated with risk. And now you can choose to reduce your exposure to risk by purchasing insurance or, for example, also by avoiding certain behaviors, right?

So if you're worried about being robbed, for example, in a certain part of town, you might choose to go through that part of town. And that's a risky thing to do. Or you can have risk mitigating strategies where you just don't leave your house, or just go in different ways, or just never go into certain areas, which essentially are ways to protect you, to reduce your risk exposure overall. Any questions?

OK, so now let me sort of just tell you three broad, stylized facts that we're going to try and explain or try to tackle. And that's kind of what economics is trying to do. So the first question is-- the first thing that comes to mind when you think about economics and modeling choices involving risk is risk aversion. How do we think about risk aversion? Or what is risk aversion? Why are people averse to risk? Yeah.

AUDIENCE: Risk aversion is the tendency to avoid bets that are more risky, even though they will still ostensibly benefit you the same.

FRANK SCHILBACH: Mhm. So one-- you said, essentially, that if there are certain bets that involve risk, where in expectation you're going to do pretty well-- perhaps better than with some safe outcome-- people tend to avoid those kinds of bets. And that we might call risk aversion. So that's exactly right. But now, why are people doing that? What are the underlying reasons for that? Yes.

AUDIENCE: Potentially involved is loss aversion, which in the readings, I believe, some studies by [INAUDIBLE] and others mentioned that, showed that we had aversions specifically to losing money or utility rather than risk in and of itself. But that might be wrong.

FRANK SCHILBACH: Right. So one part would be to say, people might sort of lose or gain money in certain gambles. And what you're saying is, essentially, people might not treat the losses the same as they treat the gains. And then you might sort of decline certain gambles or certain risks. You're just worried about losing out, and you put a lot of value on that. That's exactly right. We're going to talk about this next week in a lot more detail. But what are some other reasons why people might not want to engage in risk? Yes.

AUDIENCE: I'd rather be definitely OK than either [? to ?] [? the ?] [? point ?] between super successful or die.

FRANK SCHILBACH: Mhm. And why is that?

AUDIENCE: [INAUDIBLE] diminishing marginal utility, [? maybe. ?]

FRANK SCHILBACH: Right. So one part is-- and that's exactly how economists think about this-- is diminishing marginal utility. That's to say, if you're really poor, getting your first dollar has really high value to you-- the reason being that now, essentially, otherwise you would starve, or you can't eat and so on. The value of whatever you purchase with that dollar is really high, because you just have nothing otherwise.

Now, if I give you a million dollars and then another dollar, then essentially the additional dollar that I give you after the million is just not doing very much. So the marginal utility of that dollar is low. And that's sort of diminishing marginal utility of wealth. That's exactly how economists talk about this. We're going to get back to that in a lot of detail. Yeah?

AUDIENCE: I think if you look a little more at extreme examples, [INAUDIBLE] if I lose so much money, I won't be able to afford my expenses or something. So it doesn't matter what is [? created ?] [? by ?] [INAUDIBLE] lose a lot [INAUDIBLE]

FRANK SCHILBACH: Right. So there might be sort of certain minimum standards, in some sense, that people have over their outcomes. Or you say, for example, you really want to have a place to sleep or you want to have some meal. And essentially, if you're below that threshold, essentially your marginal utility anywhere below that is really high, because you really want to get over that threshold. And so you might avoid certain choices or investments or the like. If there's even a small chance of getting below your threshold, you might be really averse to that. But what are some more perhaps psychological reasons why people don't like risks? Yeah?

AUDIENCE: [INAUDIBLE] [? potential ?] [INAUDIBLE]

FRANK SCHILBACH: Mhm. So that's right. That's what people do. We're going to also talk about that. But why do people not like the risk? Or some people like, actually, risk. But what is the issue about having a lot of exposure? Why do people purchase insurance, for example? Yes?

AUDIENCE: Some amount of uncertainty [INAUDIBLE] [? risk ?] [? aversion, ?] [INAUDIBLE] the higher the uncertainty of the outcome, [INAUDIBLE]

FRANK SCHILBACH: Right. So there's one part that's what you said, which is there's diminishing marginal utility of wealth. And this is exactly how we're going to model this and how economists think about this. However, there's other things involved, which is just things like anxiety or uncertainty. People might just not-- suppose a storm is coming up. I might purchase insurance. And if I purchase insurance, like flood insurance, or the like for a house, I might just feel much-- I might sleep better at night and so on and so forth.

I might just feel better about concerns-- or like health insurance, for example, might lower anxiety and so on, because people are just not worried constantly about like falling ill or like disasters happening. And that's beyond potentially any diminishing marginal utility of wealth. That sort of anxiety, stress, worries, and so on. Any other reasons? Yeah?

AUDIENCE: I think even beyond people wanting to not be stressed and anxious, there's also the element of, if you're buying insurance to smooth your consumption between different states of the world, it makes it a lot easier to plan for states after [INAUDIBLE] If you're--

FRANK SCHILBACH: Well, that's interesting.

AUDIENCE: [INAUDIBLE]

FRANK SCHILBACH: Yes. That's exactly right. So in some sense, if you have insurance, if you reduce risks in the states of the world, you can sort of, essentially, exclude a bunch of bad states of the world. For example, suppose you buy health insurance, suppose you have flood insurance, and so on. A lot of bad things that might happen you might be able to deal with. And you don't have to necessarily have a contingency plan of what if I fall sick and then go bankrupt, and all sorts of other bad things happen.

Or what if there's a flood and I can't pay the bills and then have to leave my house, and so on. So essentially, insurance makes planning easier, in part because it's just sort of easier in the mind. People feel more comfortable. And in part, it's actually computationally just easier to do.

So I added some reasons here. There's a bunch of different reasons. I think it's important to understand that economists have modeled risk aversion as diminishing marginal utility of wealth. That's a very simple way of doing this. It captures a lot of things, but perhaps not all of them.

And so here's some reasons that I mentioned. Sort of contingent planning becomes harder if you have risk. That's what Maya just said.

People are worried or stressed or anxious when they have lots of uncertainty. People might feel regret over missed opportunities. That is, if I offer you some insurance right now, you might say, well, it's actually unlikely that anything bad happens. But in case something really bad happens and you had the chance to actually avoid it, then you might feel particularly bad, not just because the outcome is bad, but because I offered it to you and you didn't accept it.

There's sort of disappointment relative to expectations. This is essentially getting into territory of losses and gains. When people have certain expectations, they have an expectation to have a certain income and the like. Now, if bad stuff happens, they fall below those expectations and perceive those outcomes then as losses compared to the status quo or the expectations that they had.

Again, economists think about this as diminishing marginal utility of wealth. That's a very simple way of modeling this. And we're going to see kind of what the limitations of those are.

Now, I'm going to show you the three stylized facts about the world. And then we're going to discuss kind of how to perhaps model that. So the first one is essentially very simple.

People are risk averse in various ways. And one sort of basic fact is that lots of people buy insurance. People are willing to pay to purchase insurance that gives them, in expectation, less money than they pay, right?

So fair insurance, as economists would model it or think about it, is insurance that pays you the expected value. So if I pay a premium for an insurance, there's a probability of something bad happening and then a loss, or some insurance payment that I get in case that bad thing happens. Fair insurance would mean the premium is essentially the same as the expected loss, which is the probability of the loss occurring times the actual payment that I get from the insurance.
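
Written out, with p standing for the probability of the loss and L for the payment the insurance makes (just shorthand here, not the lecture's notation), a fair premium satisfies

\[ \text{premium} = p \cdot L . \]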

Of course, the insurance industry wants to make money. So the insurance industry will not offer fair insurance, but less than fair insurance. Essentially, there's a price for purchasing insurance. Lots of people are willing to pay for insurance. People are willing to pay money to get insurance to reduce the risk or the exposure to risks that they have.

There's social security in various ways where essentially people sort of insure themselves or society insures them for old age or not being able to take care of themselves. You could argue that's perhaps also due to present bias or other paternalistic reasons. But surely like society, in some sense, is helping people insure themselves against potential states of the world where they might be in need.

There are various other-- sorry-- institutions, including extended families, informal insurance in developing countries, sharecropping, and so on and so forth. What's sharecropping? Yes.

AUDIENCE: When you grow multiple crops at the same time.

FRANK SCHILBACH: No. That's intercropping or the like, though it might be referred to that way as well. Yeah.

AUDIENCE: It's when the person rents out their land for you work on and you get all the benefits from the land, but [? they ?] [INAUDIBLE] [? rent ?] [INAUDIBLE].

FRANK SCHILBACH: Right, exactly. So it's essentially these kinds of arrangements, where often somebody has land and rents out the land. And then you have to pay them something back and can keep some of the output, and so on. And often that's essentially a way of reducing risk.

So there are various sorts of institutions. But broadly speaking, we think, in many situations, people don't like risk. And they look for ways to reduce their exposure to risk.

There are also these informal insurance arrangements, which are things like people get together and sort of help each other. Whenever something bad happens, one person is then being helped by everybody else. And that's sort of replacing formal insurance schemes, particularly in developing countries.

Second, risk reduction has its price. That is to say people are willing to take on risk if the return is high enough. So another way to put this is, if you want to purchase insurance, usually you have to pay for it. The insurance industry makes a lot of money.

Put differently, people are willing to take on some risk if things get cheaper, right? For example, if you think about buying a car, you could buy a super safe car with all sorts of safety features. Not everybody does that.

The reason is that cheaper cars are less safe. So you might just be willing to take on some risk in some situations for some price, right?

When you think about starting a restaurant, restaurants essentially fail all the time. Yet people are always sort of willing to do it. Presumably, the reason is because if you actually succeed, you're going to make quite a bit of money.

So if there's a high expected return, people are willing to take on some risk. People put in money into the stock market, so they increase their risk exposure. The reason why they do that is because, in expectation, they make quite a bit of money. But of course, that often entails risk.

So one way to think about the entire sort of finance industry is risk intermediation. Essentially, there are some businesses that have a lot of risk. And that risk is sort of offloaded to investors. And the investors accept that risk. They say, I'm willing to take on that risk, but only for a good return.

So I'm not going to take on risk from you if I'm not, in expectation, making money. But often, essentially, there's a trade-off-- and if you've taken finance classes and so on, this is kind of obvious-- there's a trade-off between risk and expected return.

In what situations are people actually willing to take on risk for its own sake or just-- so I told you people are risk averse. And that's true in most situations. But where are people actually willing to take on risks? Yes.

AUDIENCE: [INAUDIBLE] casino?

FRANK SCHILBACH: Right. Lots of people actually go to casinos. And here, the expected return is actually a negative. You know, you're going to lose money on average. Unless you're sort of a smart MIT student who can count cards in poker or something, you're going to lose money essentially.

So there must be some form of some preference or some desire to take on risk in some situations because you cannot just-- this is not like investing in the stock market where, on average, you're going to get money. If you go to the casino, on average you're going to lose money. Now, you could say it's so much fun and so on. There could also be addiction. There could be also self-control issues and so on.

Or there could be something about people's beliefs or preferences that induce them to take on risk. But notice that's different from what we discussed before. That's not really consistent with risk aversion because people choose to increase the risk that they're exposed to. Any other-- yeah.

AUDIENCE: Buying lottery tickets.

FRANK SCHILBACH: Exactly. People are buying lots of lottery tickets. Similarly, they're doing lots of sports betting. And again, what you're doing here, to be very clear, is that on average you're going to lose money, and you're going to increase the risk, or the exposure to risk, that you face.

And there are some questions about why people are doing this. We're going to get back to this. We're not going to talk about this today, but I sort of want to flag it. While people are risk averse in nearly all of the important choices in the world that you encounter, there are some choices where people are, in fact, exposing themselves to risk-- over and above exposure to risks that actually have high expected value. Any questions on this?

OK. So now, we're going to talk about the expected utility and sort of how do economists think about how should we behave. And that's a normative model, how the economists think about how people should behave when it comes to choices involving risk. So what is expected utility?

What does the model assume? Or what is it about? Yeah.

AUDIENCE: It's the utility of each state of the world multiplied by the probability of that state of the world occurring.

FRANK SCHILBACH: Right, exactly. So the assumption here is that there are different states of the world. Good things and bad things may happen.

You might get a job. You might not get a job. You might get a good grade, bad grade, and so on. You can sort of partition the world or anything that's going to happen in the future into different states of the world.

We can associate a utility, so an outcome, and an associated utility with that state of the world, right? If you get a good job, you get a high income. And if you don't get a job, you get a low income or whatever or unemployment insurance or whatever. And there are certain utilities associated with these outcomes.

Expected utility now essentially says, OK, for each of these states of the world, there's a probability of that state of the world happening. And I'm going to take essentially the weighted average-- weighting each state by its associated probability and using the associated utility for each state. And the expectation of those utilities is the expected utility.

And that's very complicated. I'm going to sort of get [? to where ?] [? that ?] [? is ?] said in more words than necessary. Let me sort of get back to this.

So first, let's think about expected monetary value. So suppose there's a gamble-- and this is very much simplified. There's a gamble; I'll call it G, over two states of the world. State 1 occurs with probability p and yields monetary payoff x. State 2 occurs with probability 1 minus p and yields monetary payoff y.

Now, the expected monetary value-- and this is not expected utility; this is expected monetary value, how much money you get in expectation-- is essentially just the weighted average of those two payoffs, weighted by the probabilities. So that's p times x plus 1 minus p times y. That's just the expected value of how much money you're going to get from this gamble.

Now, this is just the definition of a fair gamble. And this is something economists tend to use a lot. A fair gamble is one with a price equal to its expected monetary value, right?

So if I ask you would you like to take this gamble, you're going to ask kind of, is this a fair gamble? Essentially, that's just asking about is it paying the expected value of this in terms of money.

I put monetary in parentheses because I could also provide you a fair gamble of apples. And then it would just be the expected value of apples. That would be also a fair gamble potentially.

Now, what's the expected utility of this gamble? Well, now you need to have a utility function over these outcomes. So if my utility function is u, then u of x is kind of how much utility I get in the state of actually getting x, and similarly for y.

Call the outcomes x i, where i is the state of the world. So essentially, now, it's the weighted average of p i times u of x i-- so in this case, p times u of x plus 1 minus p times u of y. Any questions on this so far?
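
As a quick illustration of those two objects, here is a minimal sketch in Python; the gamble, probabilities, and payoffs are just placeholder numbers, and log utility is used only as an example of a concave utility function.

```python
import math

def expected_monetary_value(p, x, y):
    """EMV of a gamble paying x with probability p and y with probability 1 - p."""
    return p * x + (1 - p) * y

def expected_utility(p, x, y, u):
    """Expected utility: probability-weighted average of the utilities of the outcomes."""
    return p * u(x) + (1 - p) * u(y)

# Example: a 50-50 gamble over $50,000 and $100,000, evaluated with log utility.
p, x, y = 0.5, 50_000, 100_000
print(expected_monetary_value(p, x, y))     # 75000.0
print(expected_utility(p, x, y, math.log))  # about 11.17, the average of the two log utilities
```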

And so when you now think about evaluating gambles using expected utility, if your utility function is linear, then you're going to essentially decide the same way as if you were just evaluating the expected monetary value, right? And if not-- if the utility function is concave or convex-- then people are potentially risk averse or risk loving. Now, the history of expected monetary value is that the first theory people wrote down was a model of how people thought people should behave.

It was a normative model that people wrote down at some point, saying rational people should essentially maximize monetary payouts-- which is to say, if the expected monetary value of a certain gamble is high, you should accept it. Or if it's higher than its price, then you should accept it, otherwise not.

Now, it turns out that, in practice, that's not descriptively accurate. That's just not how people behave in the world. And the reason is that people are risk averse in most situations.

Economists now use the expected monetary value as the definition of risk neutrality. If somebody is risk neutral-- if somebody doesn't care at all about risk in some situation-- that person essentially just maximizes the expected monetary value, right? So a decision maker is-- and this is a definition-- risk neutral if, for any lottery G, she is indifferent between G and getting the expected monetary value of G for sure. And so the decision maker is risk neutral if the utility function is linear, OK?

Essentially, no matter how much money you get, there's no diminishing marginal utility of money. Now, what's risk aversion then? A decision maker is risk averse if, for any lottery G, she prefers getting the expected monetary value of G for sure rather than taking G. And the person is risk loving if the person would rather have the lottery than the expected monetary value for sure.

These are just definitions. That's just the way economists think about risk-- definitions of how we think about risk aversion and risk lovingness, if you want.

Now, let me give you just a very simple example. Suppose a person with wealth, $10,000 is offered a gamble. The gamble is you can gain $500 with a 50% chance and lose $400 with a 50% chance. Will you accept this gamble? How do we do this now?

Suppose I'm just maximizing the expected monetary value. What am I going to do? Yeah.

AUDIENCE: You take [INAUDIBLE] something [INAUDIBLE]?

FRANK SCHILBACH: Right. So what I'm going to do is just look at the expected monetary value of accepting the lottery, which is 0.5-- the probability of a loss-- times 9,600, which is 10,000 minus the 400 that I lose, plus 0.5 times 10,500, which is 10,000 plus the 500 that I gain. That gives me 10,050.

If I reject the lottery, I'm just where I am before. Now, the risk neutral decision maker will reject the gamble, in fact, irrespective of the initial wealth. Because, essentially, everything is linear, so you just drop out the wealth. You can just look at what's the expected value regardless of how much money the person has.

AUDIENCE: So you mean [INAUDIBLE] [? accepting ?] [INAUDIBLE]?

FRANK SCHILBACH: Yes, sorry. That's a typo. Yes, sorry. That's a typo. Yes, thank you.

Yeah. So exactly, the expected monetary value is higher than the status quo, so you accept the gamble. And that doesn't depend on the initial wealth. OK. So now, what's the expected utility maximizer do? And how does an expected utility maximizer think about this? Yes?
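
To check that arithmetic, a minimal sketch with the numbers from the example:

```python
# Gamble: gain $500 or lose $400, each with 50% chance, starting from wealth of $10,000.
wealth = 10_000
emv_accept = 0.5 * (wealth + 500) + 0.5 * (wealth - 400)
print(emv_accept)           # 10050.0
print(emv_accept > wealth)  # True: a risk-neutral decision maker accepts the gamble,
                            # and the $50 expected gain is the same at any initial wealth
```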

AUDIENCE: In their calculation, rather than weighting the 9,600 and the 10,500, they'll weight the utility value.

FRANK SCHILBACH: Exactly. So now, we need the utility function. What's the utility of 9,600, the utility of 10,500, and the utility of 10,000? Now, will she accept the gamble?

Well, now, it depends essentially on the utility function. What's the shape of that utility function look like? In particular, it depends on the concavity of the utility function.

So what do I mean by the concavity of the utility function? This is a concave function. What do I mean by that? What's the definition? Yes.

AUDIENCE: [INAUDIBLE]

FRANK SCHILBACH: Right. So one definition is that the second derivative is negative. That's exactly right. That's true if the function is twice differentiable.

We have a slightly different definition that's slightly more general, because it doesn't depend on differentiability. But essentially, it's the following definition: the utility of a convex combination of two outcomes-- I'm going to tell you about this in a second-- is larger than the convex combination of the utilities of those outcomes. What do I mean by that?

Suppose you have an outcome x and a utility associated with that, u of x. Suppose you have an outcome y and a utility u of y associated with that. And now suppose you have a convex combination of x and y, which is just-- p being a probability-- p times x plus 1 minus p times y. That's in between x and y. So I take a weighted average of x and y with weights that add up to 1.

Now, I have the utility associated with that convex combination, which is u of p times x plus 1 minus p times y. That's just the utility associated with that point. Now, I draw a line between those two points on the graph and look at the convex combination of the utilities: what's p times u of x plus 1 minus p times u of y?

That's essentially the weighted average of the utilities associated with x and y. Now, the question is, is the average of the utility of the average higher or lower than the average of the utilities? Can somebody explain this in words of what I just said? Or what do I mean? Can somebody repeat this? Yes?

AUDIENCE: I mean, technically, we're just trying to see if you take two points in the [INAUDIBLE] and draw a line between them, would the line be [? below ?] [? the curve? ?]

FRANK SCHILBACH: Right. Is the line above or below the curve? In this case, the line is below the curve. Now, what that means is, if I give you two outcomes and I say, you could have x or y, or would you rather have some weighted average of x and y? Now, the question is, what's the utility associated with this average of x and y? That's the thing that you see here on the upper left: u of p times x plus 1 minus p times y.

That's essentially the utility of the weighted average. Is that larger or smaller than the weighted average of the utility, which is the thing that I show you here below? And what you see is, in this case-- and this is because the line is exactly as I say-- it's below the utility function.

If the line is below the utility function, that means essentially that the utility of the weighted average is higher than the weighted average of the utilities, which means essentially the function is concave. And that means essentially that the person is risk averse, as we call it. You'd rather have the average than the spread of the two outcomes. Any questions on this or comments?
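
Here is a small numerical illustration of that comparison, assuming, just for the example, a square-root utility function and two arbitrary outcomes:

```python
import math

u = math.sqrt                # a concave utility function, used here only as an illustration
x, y, p = 100.0, 400.0, 0.5  # two outcomes and the mixing probability

utility_of_average = u(p * x + (1 - p) * y)       # u(250), about 15.8
average_of_utilities = p * u(x) + (1 - p) * u(y)  # 0.5 * 10 + 0.5 * 20 = 15.0

# For a concave u, the utility of the average exceeds the average of the utilities:
# the line between the two points lies below the curve, so the sure average is preferred.
print(utility_of_average > average_of_utilities)  # True
```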

OK. So you can look at this in detail but essentially it's a simple definition. So now, expected utility says essentially the following. It says a risk averse person rejects all fair gambles. And again, fair gambles are gambles that pay you the expected monetary value.

And the reason why that person does that is because the expected utility is lower than the utility of the expected monetary value. Essentially, as you just said, it's because the straight line is below the utility function. And that's essentially exactly the same definition here. Therefore, a risk averse person who has a concave utility function rejects all fair gambles.

Now, what does expected utility theory then say? Well, it says risky options are valued by doing three things. One is you have to define utility over final outcomes. And this is sort of getting back to what you were saying earlier.

People might be worried about losses or gains or the like. We're assuming all of this away. We're just saying, there are final outcomes-- how much money you have, what grades you have, how many kids you have, and so on. These are final outcomes, things over which sort of an absolute value is defined.

There's a utility associated with those outcomes. It's not about you expected more money or less money or the like. That's completely irrelevant. We're just looking at final outcomes.

How much money do you end up actually getting? And we're associating some utility with that. That's assumption number one, or that's the first thing one does.

Second, you weight the utility of each outcome by its probability. Essentially, we know what the probabilities are for each of those outcomes. We're going to take the weighted average of these utilities.

And then we sort of add them up. And by adding them up, essentially we can evaluate all sorts of lotteries. And then we just compare those lotteries either with some fixed amount of money or with other lotteries that we might get.

Now, there's two key implicit assumptions that are really important. One is only final outcomes matter. It doesn't matter what you expected in advance. It doesn't matter what you thought you might get. And it doesn't matter what you had yesterday.

All of these things are completely irrelevant in the simplest form of expected utility, unless you sort of have information or the like. Only final outcomes matter. And then there is linearity in probabilities.

That's to say we put weight on the different states of the world according to the probabilities of those states. So if something is twice as likely, you put exactly twice as much weight on it in your evaluation of the outcomes. It cannot be that this weighting is non-linear in certain ways. There's linearity in probabilities. Any questions on this?

So now, let me get back to what I said previously. What are we assuming away here? What are the things that are not in here? Yeah.

AUDIENCE: [INAUDIBLE]

FRANK SCHILBACH: Exactly. So essentially, all the other things that we said previously we are assuming away, for example, things like anxiety over certain outcomes, worries, stress, and so on. I'm also assuming away regret. I'm assuming away the gains and losses.

So essentially, anything we said before is just stripped away and simplified, in some sense, and we're saying we can explain a lot of behaviors just using the concavity of the utility function. Now, I have one example for you. And I encourage you, for any of these kinds of assumptions or functions or things that you see in economics, to sort of look out in the world at what people are actually doing and try to see whether that's actually compatible with the behavior that we see in the world. And here's sort of one example.

So I actually don't think this is irrational behavior. And it's actually a good example of what we might confuse with irrational behavior. I guess what we see is that expected utility has a lot of trouble explaining this behavior, right?

Because essentially, you spent like $5 on those lotteries. I'm offering you $10. So you get twice as many tickets.

So your probability of winning will be twice as high. Presumably, you prefer winning over losing. So, therefore, you should obviously take that deal unless there are some transaction costs or the like.

People are not doing that. The main reason that's mentioned here is regret. Now, what the person was saying here is these are perhaps irrational decisions.

I actually don't think that's right. Essentially, it's just we cannot rationalize the decision that we see with expected utility in the sense that it looks like the person behaves in irrational ways, but the person may just have regret aversion, something that essentially is not in the utility function. We sort of modeled it in the wrong way. And sort of, by not capturing this, we might miss certain behaviors.

Now, we're going to talk a lot about-- not about lotteries right now. We're going to get back to this a little bit in terms of why people engage in risk. But I just want to be clear on what we're assuming here. We're assuming a lot of stuff away, and I want you to be aware of that.

There's another question which actually the question or the video did not sort of try to tackle, which is why are people playing these lotteries in the first place. Why engage in the lotteries in the first place? In some sense, that wasn't clear either.

Again, we're going to get back to that. But the point of the video was to show you that, in some sense, these are a bunch of assumptions that are in the expected utility model. Not all of these assumptions are right. And you know, we want to be sort of aware of that.

But let me sort of just summarize what I just told you. And this, in some sense, is recap of 14.01 if you want, which I think in part you also discussed in recitation. Or there's like a handout from 14.01 that you can look at to study it in more details.

So many important economic choices involve risk, and people are risk averse in many contexts. The expected utility model is the workhorse model of economics for studying risk. And the way it's done is, essentially, one takes the weighted average of utilities from final outcomes. That's what matters for assessing risky options.

OK. And so now we're going to see, taking that model very seriously, what can be explained? And what are perhaps the limits of doing so? In this model, risk aversion comes solely and exclusively from the concavity of the utility function. There's no other reason to avoid risk; if you avoid risk, then essentially your utility function is concave.

OK. So now, how do we measure risk? And that's, again, sort of definitions that economists use. When you think about risk aversion, how do you measure this?

Well, you measure it essentially through the concavity of the utility function, which is, as you were saying earlier, it's coming from the second derivative of the utility function. There's two main measures that economists use. There's sort of the absolute or the coefficient of absolute risk aversion. We call that r.

It's essentially taking the second derivative, which tends to be negative. So we take the negative of that. We scale it by the first derivative.

That's essentially to make it insensitive to multiplying the utility function by a constant. Presumably, that doesn't change anything about your choices, so your risk aversion should not change. And, therefore, we sort of have to normalize, which is why we divide by the first derivative.

A second version of that, or an alternative version, is the coefficient of relative risk aversion, which we call gamma. Gamma is x, the wealth outcome that we look at, times r, the coefficient of absolute risk aversion. It's the elasticity of the slope of the utility function, which I've written out here.

And sort of one very nice property of this-- and again, that's a definition; there's not much to argue with. This is just how economists measure this. One nice property is that, if you look at portfolio models or the like, one implication of constant relative risk aversion-- and I'm going to show you such a function in a bit-- is that people with constant relative risk aversion invest a constant share of their wealth in risky assets, regardless of their level of wealth.

That's a main sort of result from finance or portfolio models. In some sense, that's sort of irrelevant for you. These are just definitions, in the sense that, if you wanted to measure risk aversion, this is what economists have mostly used, OK?
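
As a sketch of how those definitions work, here is a CRRA utility function (which comes up on the next slide) together with a numerical check that its coefficient of relative risk aversion really is constant in wealth; the finite-difference derivatives are just one way to approximate u' and u''.

```python
import math

def crra_utility(x, gamma):
    """CRRA utility: u(x) = x**(1 - gamma) / (1 - gamma), or log(x) when gamma = 1."""
    return math.log(x) if gamma == 1 else x ** (1 - gamma) / (1 - gamma)

def relative_risk_aversion(u, x):
    """Numerical coefficient of relative risk aversion: gamma(x) = -x * u''(x) / u'(x)."""
    h = 1e-3 * x                                     # finite-difference step, scaled to x
    u1 = (u(x + h) - u(x - h)) / (2 * h)             # first derivative (central difference)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2   # second derivative
    return -x * u2 / u1

# For a CRRA function, the coefficient is (up to small numerical error) the same at
# every wealth level, which is exactly the constant-relative-risk-aversion property.
for wealth in [1_000, 10_000, 100_000]:
    print(round(relative_risk_aversion(lambda z: crra_utility(z, gamma=2), wealth), 2))  # 2.0
```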

Now, if I give you this definition, how would you actually measure this? If you wanted to know my risk aversion, how would you do that? And so let me give you actually a utility function here.

So let me give you actually two utility functions. Here's one example of a Constant Absolute Risk Aversion, CARA, function. And here's a Constant Relative Risk Aversion, CRRA, function-- this is what we mostly use.

So that's just the definition of a function that has the property of constant relative risk aversion. You can sort of verify that. We're going to focus here on CRRA functions, which are what economists mostly use. So now, if I told you this is my utility function-- my utility function looks like this-- how would you estimate my risk aversion?

AUDIENCE: [? I ?] can give you two gambles and then [INAUDIBLE]?

FRANK SCHILBACH: Right. So you could give me essentially choices between outcomes that have some uncertainty or different risks involved. And then I'll make the choices. If I say I prefer one over the other, can you tell what my gamma is?

AUDIENCE: No.

FRANK SCHILBACH: What can you tell me? Or what can you say?

AUDIENCE: I guess you can say the [? extent, ?] but maybe not [INAUDIBLE] value [INAUDIBLE].

FRANK SCHILBACH: Yeah. You can put some bounds on it, right? So I'm going to show you this in a second. But essentially, if I say I prefer one option over the other, you're going to have gamma on the left-hand side, gamma on the right-hand side, and give you some equations, essentially some inequality.

And then if you solve that equation, you're going to get some bound on gamma that sort of essentially tells you below or above. Or my risk aversion must be below or above a certain number. What else could we do? Yes?

AUDIENCE: [? Ask ?] [? them ?] [? when they're ?] [INAUDIBLE]?

FRANK SCHILBACH: Exactly. And that's what's called the certainty equivalent. So the simplest way of doing that is to say, here's a lottery between some gains or losses, or two gains, with certain probabilities. There's some risk involved. And then we could ask you, what's the amount that makes you indifferent between receiving that amount for sure and the gamble that I'm offering you?

Now, that's called the certainty equivalent. Essentially, for a given gamble, it's the amount of money that, if you get it for sure, makes you exactly indifferent between that amount for sure and the gamble, which is uncertain, right? So if we then had the certainty equivalent, you could essentially just back out what my gamma is by solving for the gamma in that equation.

Let me actually show you that. There's another thing that we could do. What else could we do? So we said, OK, we could give you choices between different gambles, or I could ask you for a certainty equivalent.

Now, these are all kind of lab ways of doing this. But if you looked in the real world, if you try to sort of figure out in the real world how are people making these choices or choices in the world, what kinds of choices could you observe to figure out what people's gamma is? Yes.

AUDIENCE: Could you sell them insurance or an option to mitigate their risk and figure out how much they value that mitigation?

FRANK SCHILBACH: Exactly. That's exactly right. And that's exactly what we're going to discuss and what people have done.

Now, it's a little bit tricky because usually, if I just ask you what insurance you have chosen, it's hard to figure out what your gamma actually is, because I don't know what options you had, right? So what I need is essentially a choice set. Suppose I'm selling you insurance. In particular, what I'm going to show you, I think next time, is Justin Sydnor's paper, where people-- these are customers of a certain home insurance company-- have choices between different deductibles, right?

And now I can essentially say, if you choose a high deductible versus a low deductible, implicitly you're choosing the risk exposure that you have, for a price. So in Sydnor's case, there are four different options offered to people. So he essentially has both the choice set-- the choices that people were offered; in this case, I guess they were offered four choices-- and the actual choice that they made.

Again, that's not going to give you an exact gamma in terms of pinning it down exactly what it is because there's four different inequalities that you get from these choices. But you can actually bound, as it turns out, people's risk aversion pretty well using those kinds of choices. Exactly.

So what we have here is certainty equivalents, choices from gambles, and insurance choices. So let's start with the certainty equivalent. Suppose your wealth will equal either $50,000 or $100,000, each with probability 50%.

Suppose essentially there's lots of risk in your life. Either it's 50,000 or 100,000-- starting tomorrow you're going to find out-- and the chance of each is 50%. Now, of course, that's hypothetical, but let's suppose that for a second.

Your expected wealth then, of course, is 75,000. Now, what guaranteed amount-- the certainty equivalent, W CE-- do you find equally desirable? If I could make all of your risk go away and just say I'm giving you a fixed amount, what amount would you choose?

Now, when you do that, if you give me an amount W CE, a certainty equivalent, that gives me essentially an equation: the utility from the certainty equivalent, by definition-- since you just told me that-- must be the same as the weighted average of the utility of 50,000 and the utility of 100,000, with probability 50% each, right? And once you do that, you get essentially some nonlinear equation that depends on gamma, which you can solve. Perhaps not in closed form, but you can essentially figure out what the answer is in Mathematica or the like, OK?

Now, as it turns out, now you can solve for this. And the implied values of gamma I've written down for here. So if you tell me here 70,000, gamma is 1.

If you tell me 66,000, it's 2. 58,000, it's 5. 53,000, it's 10. 51,000, it's 30.
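
As a sketch of that calculation: with a CRRA utility function you can go the other way and compute the certainty equivalent implied by a given gamma, then compare it with the figures above (the exact outputs differ a bit from the rounded slide numbers).

```python
import math

def crra_u(x, gamma):
    return math.log(x) if gamma == 1 else x ** (1 - gamma) / (1 - gamma)

def crra_u_inverse(value, gamma):
    """Wealth level whose CRRA utility equals the given value."""
    return math.exp(value) if gamma == 1 else ((1 - gamma) * value) ** (1 / (1 - gamma))

def certainty_equivalent(gamma, low=50_000, high=100_000, p=0.5):
    """Sure wealth with the same utility as the 50-50 lottery over low and high."""
    expected_u = p * crra_u(low, gamma) + (1 - p) * crra_u(high, gamma)
    return crra_u_inverse(expected_u, gamma)

for gamma in [1, 2, 5, 10, 30]:
    print(gamma, round(certainty_equivalent(gamma)))
# e.g. gamma = 1 gives a certainty equivalent of about 70,700; gamma = 30 gives about 51,200
```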

Who would say anything below 10? Yes, no? What would you say?

AUDIENCE: By below, you mean less than 10?

FRANK SCHILBACH: So value of gamma less than 10, yes.

AUDIENCE: Yeah, for sure.

FRANK SCHILBACH: Yes, for sure. That seems very reasonable. If you think about value of 30, if you had a value of 30, you probably would not leave the house ever in some sense. You would not come to class or something because you're worried about some stuff falling on your head or the like.

Because, again, let me show you what the lottery was. The lottery was between 50,000 and 100,000, with a 50% chance each. If you tell me you're indifferent between that and about 51,000 for sure, you're essentially valuing this small increment coming from 50,000 to 51,209.

That's $1,209. You value that a lot compared to the 50% chance of actually getting $100,000. So we sort of think that, when looking at these large scale choices, economists often assume, I think, that people's gamma is somewhere between 0 and 2, OK?

So somewhere maybe 70,000, maybe even lower than that, maybe 66,000, these are sort of reasonable choices that we think people are making or you see people making in their lives. Anything above that seems like it's just not right because, in some sense, that's not how people behave in the real world. People are comfortable with at least some risk in their life when you look at them.

So the broad lesson is that choices involving large scale risk suggests that gamma can't be too large, OK? Now, second we can say-- OK, so those are large scale choices. Now, we're going to look at sort of small scale choices using small gambles as [? Deckson ?] was just alluding to.

So here's a choice involving a small scale gamble. What if you had a 50-50 bet to win $11 and lose $10? Who would take that bet? Who would not take it?

OK. So suppose-- to follow up on those questions-- now, since your utility is not necessarily linear, we need to know what your wealth is. Suppose it's 20,000, but you could choose all sorts of other numbers.

And you turn down a 50-50 bet to win-- this is 110 and lose 100. You could do this for 11 and 10 as well. What can we learn now about your gamma?

And this is what I was saying earlier. Now, if you turn down this bet, it must be that the utility of having 20,000-- which is the status quo if you turn down the bet-- is larger than 0.5 times the utility of 20,000 plus 110, plus 0.5 times the utility of 20,000 minus 100. And again, I can then plug in the utility function and essentially solve for gamma.

Now, if you solve for gamma-- and some of the next problem set is doing some of that-- rejecting this bet implies that gamma is larger than about 18. Now, 18 is actually not that large. But surely, 18 is larger than 2.
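
A quick numerical check of that bound, as a sketch: with CRRA utility and wealth of $20,000, bisect on gamma to find where the decision maker is exactly indifferent about the 50-50 win-$110/lose-$100 bet.

```python
import math

def crra_u(x, gamma):
    return math.log(x) if gamma == 1 else x ** (1 - gamma) / (1 - gamma)

def gain_from_bet(gamma, w=20_000, win=110, lose=100):
    """Expected utility of taking the 50-50 bet minus the utility of rejecting it."""
    return 0.5 * crra_u(w + win, gamma) + 0.5 * crra_u(w - lose, gamma) - crra_u(w, gamma)

lo, hi = 1.0, 100.0            # the gain is positive at low gamma and negative at high gamma
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gain_from_bet(mid) > 0 else (lo, mid)
print(round(0.5 * (lo + hi)))  # about 18: rejecting the bet implies a gamma of roughly 18 or more
```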

And we just sort of agreed earlier that gamma should be somewhere below 10, presumably somewhere around 2 or maybe 1. So what we get here now is: when you look at large-scale choices, it looks like people's gamma is somewhere between 0 and 2, perhaps below 5 or something, but surely not above 10. When you look at small-scale choices that seem pretty reasonable-- and many of you seemed to agree that you might not want to take certain bets; maybe you're credit constrained or the like--

But in any case, it looks like people's gamma is really large. OK. And so, now, the question is, how do we sort of reconcile this? How do we put these things together?

Now, Matthew Rabin has written a paper on this, and it's not just sort of an intuitive argument-- this is a paper in Econometrica from 2000. He proves that, when people reject small-scale gambles, that implies crazy stuff for large-scale choices, essentially stuff that just seems completely implausible. And essentially, he proves, under minimal assumptions, that this doesn't make a lot of sense.

Now, what I mean by this, and what we learn from this, is that essentially the marginal utility of money must decrease extremely rapidly if you take this model seriously. He does this under essentially no assumptions about the utility function. So it's not just some special case that he doctored together with some special assumptions about the utility function.

The only thing, in fact, that he's assuming is that the utility function is weakly concave. OK. And so here's the example that was also in your reading. Suppose there's Johnny, who is a risk-averse expected utility maximizer whose utility function has a second derivative smaller than or equal to 0, meaning that essentially his utility function is weakly concave.

Suppose that person turns down a 50-50 gamble of losing $10 and gaining $11 for any level of wealth. That assumption at the end, for any level of wealth, is kind of important, but actually not that important. You can sort of relax that as well. For our purposes, we can sort of mostly ignore it.

Now, what's the biggest Y such that we know Johnny will turn down a 50-50, lose 100, win Y bet? So here are the answers. And what's the correct answer?

AUDIENCE: G.

FRANK SCHILBACH: G. And why is that? Or can somebody explain what's going on? Yes.

AUDIENCE: Is it because he will reject this bet for any level of wealth, so that kind of implies that he's not able to accept any level of risk?

FRANK SCHILBACH: No, no, no. I think that's just because, for the iteration forward in the proof of the thing, he needs to sort make that argument. But in fact, that's not essential. There's some restrictions to that.

You can prove the same thing maybe not as stark in terms of a result, but this is just because he's iterating forward. He needs to sort of prove the sequence of utilities that derives. But it's actually not necessarily central. Yes.

AUDIENCE: I think the paper argued that Johnny [INAUDIBLE] implied that the [INAUDIBLE] [? utility ?] [INAUDIBLE] very [? rapidly decreasing, ?] [? which means that ?] between that he will [INAUDIBLE].

FRANK SCHILBACH: Yes. So what's happening here is-- let's start very simply. Let's start with Johnny's first choice, which says he rejects the bet. That means essentially that on the right-hand side is the utility of the status quo, just the utility of w. On the left-hand side is a 50% chance of winning $11 and a 50% chance of losing $10.

Now, you can sort of multiply this by 2 and rearrange, which gives you the second line. Essentially, that says that the increase in utility going from w to w plus 11 is smaller than the increase in utility going from w minus 10 to w. OK, that's just the left-hand side and the right-hand side. I'm just rearranging terms.

So what that means is that, again, on the left-hand side, how much does the utility increase if I go from w to w plus 11? Essentially, if I add $11 starting from w, he values that gain by at most 10/11 of the gain on the right-- so each dollar that he gets on the left-hand side is valued at most 10/11 as much as the dollars between w minus 10 and w, right? So if you have $10 on the right and $11 on the left, and you prefer the right-hand side over the left-hand side, each dollar must be valued more on the right-hand side.

Put differently, the dollars on the left-hand side are worth at most 10/11 of the dollars on the right-hand side, OK? So just to repeat: in terms of the gamble, we're adding $11 on the left-hand side and we're subtracting $10 on the right-hand side.

Now, since the right-hand side is larger, each dollar on the right-hand side must be worth more, at least 11/10 as much as the dollars on the left-hand side. Or, put differently, each dollar on the left-hand side is worth at most 10/11 of each dollar on the right-hand side.

You can think about this a bit, but trust me, it is correct. Now, there's diminishing marginal utility. Concavity says that the marginal dollar at w minus 10 is at least as valuable as the marginal dollar at w.

That's just the assumption of concavity, which says there's diminishing marginal utility: the lower your wealth, the weakly larger your marginal utility. So the marginal utility at w minus 10 is at least as large as the marginal utility at w, which in turn is at least as large as the marginal utility at w plus 11.

Now, taken together, that means Johnny values a dollar at w plus 11 at most 10/11 as much as he values a dollar at w minus 10. In other words, going from w minus 10 to w plus 11, the marginal dollar is worth at most 10/11 as much for every $21 by which his wealth increases.
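Restating that verbal argument in symbols, with nothing new assumed: dividing the rearranged inequality by 11 gives

\frac{u(w+11) - u(w)}{11} \le \frac{10}{11} \cdot \frac{u(w) - u(w-10)}{10},

and since marginal utility is weakly decreasing,

u'(w+11) \le \frac{u(w+11) - u(w)}{11}
\qquad\text{and}\qquad
\frac{u(w) - u(w-10)}{10} \le u'(w-10),

so u'(w+11) \le \frac{10}{11} u'(w-10). Repeating the argument starting from wealth w+21, w+42, and so on gives u'(w - 10 + 21k) \le (10/11)^k \, u'(w-10).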

So I think, given some of the confused faces I see, we might do some of this in recitation. But this is simple algebra using only the minimal assumptions I made. Now, you can do the same thing as if the person were $21 richer.

So now I'm doing the same thing, just adding $21 on each side, and I get the same kind of statement. It says that he values each dollar that he gets at w plus 32 at most (10/11) squared, which is roughly 5/6, as much as he values the dollars at w minus 10.

So what I'm doing is iterating forward. I know the utility function is concave, and from your first choice I know that marginal utility is declining at this 10/11 rate as wealth goes up.

So now, taking this forward: for every $21, you value each dollar at most 10/11 as much. So if you go up $21 plus $21 is $42, plus $21 is $63, and so on, your marginal utility must be declining very, very rapidly. Once you have a lot more money, you essentially don't care at all about any marginal dollar that you get.

So you can work this out. If the person were $42 richer, he'd care about each dollar at most 5/6 as much. If $420 richer, about 3/20 as much. And if he were $840 richer, only about 2/100 as much.
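A quick numerical check of those factors (my arithmetic, not from the slides): every $21 of additional wealth multiplies the marginal-utility bound by another factor of 10/11.

# Bound on marginal utility after becoming `extra` dollars richer,
# applying the 10/11 factor once per $21 step.
for extra in (42, 420, 840):
    print(extra, round((10 / 11) ** (extra / 21), 3))
# 42 0.826   (roughly 5/6)
# 420 0.149  (roughly 3/20)
# 840 0.022  (roughly 2/100)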

Essentially, that's to say-- and this is exactly as you were saying-- the marginal utility plummets for substantial changes in lifetime wealth. You value an additional dollar less than 2% as much once you are $900 richer than you are right now. That doesn't feel right, but it's a simple implication of what was just assumed.

There's no magic here. This is very simple algebra using very minimal assumptions. But essentially it's saying that, if this person rejects the gamble we just described, it follows-- and there's a more careful proof in the paper.

But it follows that, if you give the person $900 more, the person values each additional dollar only about 2% as much as a dollar at his current wealth. And then you look at these consequences, which you can read about in the Rabin and Thaler paper or the original Rabin paper if you like.

Essentially, you get these absurd conclusions. The left-hand side shows bets that an expected utility maximizer always turns down; if he does, it follows that he also turns down the bets on the right-hand side.

And we think, you know, for example, if I told you losing $10 or gaining $11, that seems like a reasonable thing to reject perhaps. That seems like a thing that one might do. At least, you guys were saying that you might do that.

Well, if that's the case, then you would also have to reject a gamble of losing $100 and gaining an infinite amount of dollars. And that seems obviously absurd. So that can't really be true.

Now, what's going on here is that the utility function in expected utility has trouble reconciling people's small-scale choices and large-scale choices. It's similar to what we discussed for exponential discounting: we only have one parameter here, gamma, for both gains and losses and for all sorts of scales.

And that single parameter is just not able to fit people's choices in sensible ways. People seem to have small-scale risk aversion in some sense, and yet people do not seem to be incredibly risk averse over large amounts of money.
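For reference, assuming the gamma here is the coefficient of relative risk aversion in the usual CRRA specification (my notation; the earlier slides may write it slightly differently), that one-parameter family is

u(x) = \frac{x^{1-\gamma}}{1-\gamma} \quad (\gamma \neq 1), \qquad u(x) = \ln x \quad (\gamma = 1),

and that single \gamma has to govern curvature over $10 gambles and over lifetime wealth alike.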

And so the expected utility model cannot match both of those things. That's essentially what the Rabin paper shows. We'll go through this more slowly in recitation.

But in some sense, the important part here is to understand the intuition. The intuition is that, if there's curvature over very small stakes, there must be lots of curvature going forward over large scales. And that's just not plausible.

Because, essentially, then people would just not value really, really large amounts. And we know that people do value money at least to some extent. Any questions on this overall? Yeah.

AUDIENCE: [INAUDIBLE] I guess [INAUDIBLE] [? mimicked ?] the [INAUDIBLE]. I guess, does the [INAUDIBLE] hyperbolic model essentially almost [? work ?] [? here ?] where [INAUDIBLE] to [INAUDIBLE] singular [INAUDIBLE] if someone tried to do the long-term, short-term thing, [INAUDIBLE].

FRANK SCHILBACH: Yeah. So what we're going to do is-- notice that, here, he was not assuming anything about the utility function. The only thing he, Rabin, was assuming is that the person is an expected utility maximizer and that the utility function is weakly, not even strictly, concave.

Now, what that means is that you can't just change the functional form; this is a general proof for any utility function that you use. So what you have to do now is either say, well, some other assumption of expected utility is wrong, in terms of how the probabilities are weighted or something like that, and that can explain the phenomenon.

It could be something about [INAUDIBLE] aversion and so on. These seem kind of unlikely. The most likely thing-- and this is what I'm going to talk about next week-- is Kahneman-Tversky's sort of loss aversion framework where you say, if you put different weight on gains versus losses, then essentially you have two parameters.

You have one parameter for your risk aversion over gains, and you have another parameter that steers how you feel about losses compared to gains. So once you do that, you have another degree of freedom, and you can potentially explain a lot of these behaviors.
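As a preview of next week, a standard textbook version of that two-parameter value function (the usual Kahneman-Tversky form; not necessarily exactly what we'll write down in class) is

v(x) = x^{\alpha} \ \text{for } x \ge 0, \qquad v(x) = -\lambda (-x)^{\alpha} \ \text{for } x < 0,

where \alpha captures curvature over gains and \lambda > 1 captures how much more losses hurt than equal-sized gains feel good.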

So that's kind of the equivalent here, exactly. But the difference is that the fix does not come through the utility function, because the problem is actually not the utility function. As I said, there's no functional-form assumption here that could even be changed, because the result is general for any utility function you assume, be it any of the ones that I just showed you previously. OK.

So then the last piece-- we'll get started on this and then finish next time-- is, as your classmate was saying, about insurance choices. So how do we do that? This is a very nice paper by Justin Sydnor, who uses real-world insurance choices.

And what's very nice about it is this: you might say, well, college students making lab choices give you funky answers, and you might not believe that those are really predictive of anything in the real world. So we really want real-world choices that people make in their lives.

You might also worry about demand effects and about people behaving a little bit funny in experiments. So let's find real-world choices that people have made and see whether we can estimate gamma using those.

And so what Justin Sydnor has is data from a large home insurance provider: a random sample of 50,000 standard policies.

Importantly, he has both the options that people had-- the choice set, four different choices for each person-- and the choices that people actually made. Plus, he has the claims that people filed after that.

He also knows which customers are new, which matters a little bit because you might say new customers are confused and maybe the old ones are the ones who get it right. Now, the key part here is the deductible that people were choosing. What is a deductible? Yes.

AUDIENCE: How much you'll pay out of pocket before the insurance kicks in?

FRANK SCHILBACH: Exactly. So this is the expense paid out of pocket before the insurer starts paying anything. That is, if you had some small amount of damage to your house, you would have to pay for that yourself. If you have a large amount of damage, you still pay that small amount, and then the insurer pays the rest.

Deductibles are usually used to deter large numbers of small claims, because the insurance company just doesn't want to pay out for every $50 of damage that you might have; the administrative costs would be too high. Now, what Sydnor has is choices over menus of four deductibles.

And again, as I said, he has both each individual's choice set and their preferred option. If you only had the preferred options, it would be very hard to figure out what's going on, because you can't say what the counterfactual is. You don't know what else he or she could have chosen.

So you want the different options and then the chosen outcome. Then you can say, OK, since you had these different options available and picked this one, it must be that you preferred it to the others, and that tells us whether you're risk averse or not.

So here's what these data look like. On the left are the deductibles. OK. This is, again, the amount that you have to pay out of pocket before the insurer kicks in and pays for you.

Then there is the premium, which is how much you have to pay every year regardless of whether anything happens. And there's the premium relative to the $1,000-deductible policy, that is, how much more money you have to pay for a lower deductible.

So for example, choice one has a premium of $504. Choice two has a premium of $588. So that's $84 higher. You pay essentially $84 in premium every year to reduce your deductible from $1,000 to $500.
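A quick back-of-the-envelope check (my arithmetic, not on the slide): a risk-neutral buyer should pay the extra $84 only if the annual claim probability p makes the expected saving at least that large, that is,

p \times 500 \ge 84 \quad\Longleftrightarrow\quad p \ge 0.168,

whereas, as we'll see in a moment, claim rates in these data are below 5%, so the expected benefit is only on the order of $20 to $25 a year.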

OK. So essentially, you can reduce your deductible at a price. The price relative to the $1,000 policy is in the third column, and you see the deductible on the left-hand side. So this person, policyholder-- yeah.

AUDIENCE: If you're comparing their choices, couldn't it be that they're just a more risky person and not necessarily risk averse?

FRANK SCHILBACH: Correct. That's exactly right. There's unobservable risk that people might have. I'll get back to that. But essentially, what you see is that a lot of people make very similar choices, and on average, people's risk is relatively low.

So to answer your question: I need to know your claim rate, your probability of actually having any damages. And it turns out claim rates are extremely low in this sample, on the order of 5% or below. I think it's even lower than that.

So it cannot be that everybody is a high risk person. It can be potentially that everybody thinks they're a high risk person. But, you know, then there's a different mistake going on.

So what he's assuming, essentially, is that there may be some high-risk people and some low-risk people. But he sets that aside and says, look, on average, claim rates are really low. It could be that there's some fraction of really high-risk people.

But since claim rates are only something like 5%, it can't be that everybody is a high-risk person. So there must be some people who behave as if they're either really risk averse or as if they think they're really high-risk. And this is where the old and new customers are helpful.

Because there are some customers who have been with this company for 10 or 15 years, and by then you should have a pretty good sense of what kind of risk you are. But that's a great question. I'll get back to that.

OK, so policyholder one: their home was built in 1966, with an insured value of $180,000. The menu available for this policy in that year was the following.

And then you also see the choice on the right-hand side, the policy that was chosen. So that person chose, for example, a deductible of $200 at a price of $157 relative to the $1,000-deductible policy.

Policyholder two similarly chose the first option. Notice that the prices have changed a little bit, in part because the company has already taken some covariates and some risk into account. The company prices policies depending on your home value, and maybe the area, and so on and so forth. But since Sydnor has all that information, he can take that into account.

Now, what can we say about risk aversion? How can we estimate risk aversion using these choices? Yes.

AUDIENCE: If you know your house is [INAUDIBLE] and your [INAUDIBLE] problems [INAUDIBLE] so then you don't have to pay [INAUDIBLE].

FRANK SCHILBACH: Yes. So what you need to know is essentially a number of different things. You need to know the deductibles, the premiums for each option, the claim probabilities, and the wealth levels. I'll talk about this in more detail. But essentially, what you can do is, for each of these options, you can write down an indirect utility of wealth function.

You can write down the expected utility of each option. Then, from the fact that you preferred option one to option two, you can put bounds on your gamma. And so what Sydnor does is estimate gamma using those choices. I'm going to go over that in a lot more detail, and more slowly, next time.
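To make that concrete, here is a minimal sketch (my own illustration, not Sydnor's estimation code) of how a single observed deductible choice bounds gamma. It assumes CRRA utility over final wealth, at most one claim per year, the $504 and $588 premiums from the example menu above, a guessed 4% claim probability, and, purely for illustration, the $180,000 insured home value standing in for wealth; the last two are assumptions, not data.

import math

def crra(x, gamma, scale):
    # CRRA utility, rescaled by the constant `scale` (a positive monotone
    # transformation, so choices are unchanged) to avoid floating-point
    # overflow at very large gamma.
    z = x / scale
    if abs(gamma - 1.0) < 1e-9:
        return math.log(z)
    return z ** (1.0 - gamma) / (1.0 - gamma)

def expected_utility(wealth, premium, deductible, claim_prob, gamma):
    # Pay the premium for sure; pay the deductible as well if a claim occurs.
    no_claim = crra(wealth - premium, gamma, wealth)
    claim = crra(wealth - premium - deductible, gamma, wealth)
    return (1.0 - claim_prob) * no_claim + claim_prob * claim

def gamma_lower_bound(wealth, chosen, baseline, claim_prob,
                      lo=0.0, hi=5000.0, iters=100):
    # Smallest CRRA coefficient at which the chosen (lower-deductible,
    # higher-premium) policy is weakly preferred to the baseline policy.
    # `chosen` and `baseline` are (premium, deductible) pairs.
    def gap(g):
        return (expected_utility(wealth, chosen[0], chosen[1], claim_prob, g)
                - expected_utility(wealth, baseline[0], baseline[1], claim_prob, g))
    if gap(lo) >= 0:
        return lo               # even a risk-neutral agent prefers the chosen policy
    if gap(hi) < 0:
        return float("inf")     # not rationalizable with gamma below `hi`
    for _ in range(iters):      # bisect on the sign change of the utility gap
        mid = 0.5 * (lo + hi)
        if gap(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return hi

# The $588 premium / $500 deductible option versus the $504 / $1,000 option.
print(gamma_lower_bound(wealth=180_000, chosen=(588, 500),
                        baseline=(504, 1000), claim_prob=0.04))

With these illustrative inputs, the implied lower bound on gamma comes out on the order of several hundred, which is exactly the kind of implausibly large relative risk aversion that the Rabin argument leads you to expect from small-scale choices.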