Chapter 10 — Behavioral Economics
Why People Don't Always Act the Way the Model Predicts

Learning Objectives
- Identify five common cognitive biases (loss aversion, present bias, anchoring, status quo bias, framing) with real examples.
- Apply prospect theory to a decision and explain how it differs from expected utility theory.
- Distinguish a 'nudge' from a mandate and identify the ethical questions nudging raises.
- Apply the behavioral lens to a standard economic prediction and evaluate whether the prediction holds.
The supply-and-demand model from Chapter 5 makes an assumption you may not have noticed at the time: that buyers and sellers are rational. Each buyer knows what they want, knows what they're willing to pay, and chooses the option that maximizes their welfare. Each seller knows their costs, knows the market price, and produces the quantity that maximizes profit. Nobody makes mistakes. Nobody is fooled by marketing. Nobody fails to save for retirement. Nobody buys something they later regret. Nobody pays for a gym membership they don't use. The model is built on a foundation of perfect rationality.
This assumption is wrong. It is wrong in ways that have been documented for decades, that affect every market in the economy, and that explain a lot of behavior the standard model finds puzzling. Real humans exhibit systematic departures from the rational-actor model — not random noise, but consistent patterns of behavior that the model cannot explain and cannot accommodate without modification. The field that studies these departures is behavioral economics, and it has gone from a fringe specialty in the 1970s to a Nobel-Prize-winning core area of modern economics (Kahneman in 2002, Thaler in 2017, Banerjee/Duflo/Kremer in 2019 partly for behavioral applications).
This chapter is going to give you a behavioral lens. The lens is not a replacement for the standard model — it is a complement. The standard model is still the right starting point for almost any market analysis. But after this chapter, when the standard model makes a prediction, you should ask: "Does this prediction hold for actual humans, or are there behavioral departures that change the picture?" Sometimes the prediction holds, and the standard model is fine. Sometimes it doesn't, and behavioral thinking gives you a better understanding of what's actually happening.
The chapter has six sections. Section 1 introduces bounded rationality and the limits of the rational-actor model. Section 2 covers prospect theory and loss aversion (Kahneman and Tversky's foundational contribution). Section 3 covers present bias and time inconsistency (why people don't save enough). Section 4 covers anchoring, status quo bias, and framing. Section 5 introduces nudges and choice architecture (Thaler and Sunstein's contribution). Section 6 closes with how to use the behavioral lens in subsequent chapters.
10.1 Bounded rationality
The classical model of rational choice assumes that people:
- Have complete and consistent preferences
- Know what they want and what each option offers
- Calculate optimal trade-offs accurately
- Are not fooled by irrelevant information
- Don't make systematic mistakes
This is sometimes called the Homo economicus model — economic man, the perfectly rational decision-maker who shows up in textbooks and disappears from real life. As a model of how humans should decide, it has some appeal. As a description of how humans actually decide, it's been falsified in hundreds of experiments and observational studies over the last fifty years.
Herbert Simon (Nobel Prize 1978) was one of the first economists to take the gap seriously. Simon argued that humans are boundedly rational — they want to make good decisions but have limited cognitive capacity, limited time, limited information, and limited attention. Faced with too many options, they fall back on rules of thumb (heuristics) that produce reasonable decisions most of the time but fail in predictable ways some of the time.
This is not a critique of human intelligence. It's a recognition that any decision-maker — human, animal, machine — has cognitive constraints, and the optimal strategy under cognitive constraints is not the same as the optimal strategy without them. You can't compute every option in the world for every decision; you have to use shortcuts. The shortcuts are mostly good. They are also predictably wrong in some situations.
These shortcuts are the subject matter of behavioral economics. Each one has been documented in laboratory experiments, in observational data, and in real-world markets, and each one matters for how markets actually work. We will spend the rest of the chapter going through the five most important ones.
10.2 Prospect theory and loss aversion
The most important behavioral discovery in economics is prospect theory, developed by Daniel Kahneman and Amos Tversky in a 1979 paper that has become one of the most-cited works in social science. The 1979 paper documented something that the standard rational-choice model could not explain: people care more about losses than they care about equivalent gains.
The standard model says that a $100 gain and a $100 loss should carry equal weight in your decision-making. They are mathematical inverses. You might accept a 50-50 chance of gaining $100 or losing $100 (if you're risk-neutral), or decline it (if you're risk-averse), but for stakes this small the standard model allows no large, systematic asymmetry between how the gain and the loss are weighed.
Kahneman and Tversky's experiments showed that this is not how humans actually decide. Faced with a 50-50 gamble — gain $100 or lose $100 — most people refuse, even though the expected value is zero. Faced with a 50-50 gamble — gain $200 or lose $100 — most people still refuse, even though the expected value is now positive. To get most people to accept a 50-50 gamble, the gain has to be roughly twice the loss. The asymmetry is large, consistent across experiments, and impossible to derive from the standard model.
Kahneman and Tversky called this loss aversion. The basic claim: losses hurt about twice as much as equivalent gains feel good. The asymmetry is built into how human beings process gains and losses, and it shows up in almost every decision involving risk.
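The asymmetry can be made concrete with the prospect-theory value function. Below is a minimal numeric sketch, assuming the commonly cited Tversky-Kahneman parameter estimates (loss-aversion coefficient λ ≈ 2.25, curvature α = β ≈ 0.88) and omitting the probability-weighting component that full prospect theory also includes; the specific numbers are illustrative, not taken from the text.

```python
# Prospect-theory value function: v(x) = x^alpha for gains,
# v(x) = -lam * (-x)^beta for losses (reference point at 0).
# Parameters are Tversky-Kahneman's commonly cited estimates.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def gamble_value(outcomes):
    """Prospect value of a gamble given (probability, payoff) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# 50-50 gain $100 / lose $100: negative prospect value, so rejected.
print(round(gamble_value([(0.5, 100), (0.5, -100)]), 1))  # ≈ -36.0 → reject
# 50-50 gain $200 / lose $100: still negative, still rejected.
print(round(gamble_value([(0.5, 200), (0.5, -100)]), 1))  # ≈ -11.8 → reject
# 50-50 gain $300 / lose $100: finally positive, so accepted.
print(round(gamble_value([(0.5, 300), (0.5, -100)]), 1))  # ≈ 10.9 → accept
```

With these parameters the break-even gain is a bit above twice the loss (the mild curvature of the value function pushes it past 2:1), which matches the experimental finding that the gain must be roughly double the loss before most people accept.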
Why does loss aversion matter?
The implications are everywhere.
1. Endowment effect. People value things they already own more than they would value the same things if they didn't own them. In one famous experiment, half of a class of students were given coffee mugs and asked how much they would sell them for. The other half were not given mugs and asked how much they would pay to buy one. The "sellers" demanded about twice what the "buyers" were willing to pay — for the same mug. Standard rational choice says the two prices should be equal (the mug is worth what it's worth). Loss aversion says the sellers don't want to "lose" their mug, so they value it more than they would have if they didn't have it.
This shows up in housing markets, where homeowners are reluctant to sell at prices below what they paid. In labor markets, where workers refuse pay cuts but accept smaller-than-inflation raises. In investment, where investors hold losing stocks too long because they don't want to "realize" the loss. The endowment effect is one of the most-replicated findings in behavioral economics.
2. Status quo bias. People prefer the current state of affairs even when alternatives would be better. We will see this in §10.4.
3. Reference points. People judge outcomes relative to a reference point — usually their current state or recent past. A 5% raise feels good if you expected 0%; the same 5% raise feels bad if you expected 10%. Behavior depends on the reference point, not just on the absolute outcome.
4. The "framing effect" on losses. Doctors recommending surgery to a patient can frame the same outcome two ways: "this surgery has a 90% survival rate" or "this surgery has a 10% mortality rate." The numbers are mathematically identical. Patient choices are dramatically different. Loss-framing produces more refusals than gain-framing, even when the underlying probability is the same. This is loss aversion meeting the framing effect (which we'll see in §10.4).
Implications for markets
When we apply loss aversion to market analysis, several puzzles become explicable:
- Sticky wages: workers refuse nominal pay cuts because the loss is psychologically large, even though they accept below-inflation raises that amount to the same real cut. This is one reason wage flexibility is lower than the simple model suggests.
- Sticky housing prices: homeowners refuse to sell at a "loss" relative to what they paid. This is one reason housing markets adjust slowly to demand shocks (see Chapter 5's housing case study).
- Market herding: investors who experience losses become more conservative and pull out of risky markets, even when expected returns suggest they should stay in.
- Refusal to switch insurance: people stick with insurance plans they know even when better options exist, partly because switching feels like risking a loss.
Loss aversion is not a small effect. It is large, consistent, and shows up in markets at every scale. It is one of the most important corrections behavioral economics offers to the standard model.
10.3 Present bias and time inconsistency
The second major behavioral departure is present bias: humans systematically over-weight the present relative to the future, in a way the standard rational-choice model cannot accommodate.
The standard model assumes that people discount the future at a constant rate. If you would prefer $100 today over $110 next year, you should also prefer $100 in five years over $110 in six years (the same one-year delay, the same 10% premium for waiting). The standard "exponential discounting" model says these two choices should be made the same way.
In experiments, they aren't. Most people who prefer $100 today over $110 next year also prefer $110 in six years over $100 in five years. The present feels different from the future. When the choice is "now or later," the now feels especially valuable. When the choice is "later or even later," the difference is much smaller.
Behavioral economists call this hyperbolic discounting — the discount rate is not constant but steeply curved at the present. The implication is that people will agree to a future plan they will not actually carry out when the time comes. You decide on January 1 to start exercising tomorrow. Tomorrow comes, and you decide to start the day after tomorrow. And so on. This pattern, in which a plan your present self endorses is abandoned by the self who has to execute it, is time inconsistency: your future self has different preferences from your present self, and those preferences shift in predictable ways.
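The preference reversal can be reproduced with the quasi-hyperbolic ("beta-delta") model that behavioral economists often use as a tractable stand-in for hyperbolic discounting. This is a minimal sketch with illustrative parameters (β = 0.7 for present bias, δ = 0.95 per year); the parameter values are assumptions chosen for the example, not estimates from the text.

```python
# Exponential discounting: $x received t years from now is worth delta**t * x.
def exponential_pv(x, t, delta=0.95):
    return (delta ** t) * x

# Quasi-hyperbolic (beta-delta) discounting: every future period carries an
# extra one-time penalty beta, so "now" is special but "later vs. even later"
# is discounted at the ordinary exponential rate.
def quasi_hyperbolic_pv(x, t, beta=0.7, delta=0.95):
    return x if t == 0 else beta * (delta ** t) * x

# The exponential discounter is consistent: $110 a year later wins both times.
print(exponential_pv(100, 0) < exponential_pv(110, 1))              # True
print(exponential_pv(100, 5) < exponential_pv(110, 6))              # True

# The present-biased discounter reverses: $100 now beats $110 next year,
# but $110 at year 6 beats $100 at year 5 — the pattern the experiments find.
print(quasi_hyperbolic_pv(100, 0) > quasi_hyperbolic_pv(110, 1))    # True
print(quasi_hyperbolic_pv(100, 5) < quasi_hyperbolic_pv(110, 6))    # True
```

The design point is that β applies once, to the entire future: it drives a wedge between "now" and "later" without changing how any two future dates compare, which is exactly why the same person can be patient about distant trade-offs and impatient about immediate ones.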
Why does present bias matter?
1. Retirement saving. The single biggest empirical application is retirement saving. The standard model says people should save enough during their working years to fund a comfortable retirement. In reality, people consistently save less than they need to. They know they should save more. They intend to save more. They don't actually do it, because the "I'll start saving more next month" promise is constantly deferred.
The empirical consequences are large. In the U.S., the median household nearing retirement has saved far less than the standard model would predict. Many retirees are not adequately funded for the lives they will live in retirement. Present bias is not the only reason — incomes are also limited and unexpected expenses arise — but it is a major contributor.
2. Health behavior. Same logic. People know they should exercise, eat better, lose weight, quit smoking, get more sleep. They intend to. They don't. The future health benefits feel abstract; the present cost (effort, hunger, withdrawal) feels concrete. Hyperbolic discounting predicts exactly this pattern.
3. Procrastination. The classic example. Tasks that have a benefit far in the future (writing a paper, applying for a job, starting a project) are repeatedly deferred. Tasks that have a benefit right now (checking phone, watching TV) are not. Present bias makes procrastination look rational from the perspective of the present-self even though it looks irrational from the perspective of the future-self.
4. Putting off the reckoning. Sometimes people defer decisions they know they will regret delaying — a difficult conversation, a medical test, a financial reckoning. The reckoning will come; putting it off doesn't change the underlying problem. But the present-self gets temporary relief, even at the cost of future-self pain.
Implications for markets
Present bias means that markets selling future benefits work differently than markets selling immediate benefits. Insurance is hard to sell because the benefit (payout) is in the uncertain future and the cost (premium) is in the certain present. Exercise programs are sold to people who don't actually use them. Gym memberships generate revenue from members who don't show up. Retirement accounts get funded far more reliably when strong defaults make it easy to save (we'll see this in §10.5).
Markets that exploit present bias — payday loans, "buy now, pay later" services, sweepstakes, certain forms of gambling — can be enormously profitable. They sell people the immediate gratification of the present at the cost of the deferred pain of the future. The standard model says these transactions should be voluntary and welfare-improving (people are revealing their preferences). The behavioral model says they may be welfare-reducing because the present-self's preferences don't match the future-self's preferences, and the future-self bears the cost.
Whether to regulate such markets is a contested question. Some behavioral economists argue for protection — restricting access to high-cost short-term loans, mandating disclosure of long-term costs, etc. Others argue that the right response is to give people better tools (including information) but not to override their stated preferences. The debate is alive and ongoing.
10.4 Anchoring, status quo bias, and framing
Three more behavioral findings, briefly:
Anchoring
When people make estimates, they "anchor" on whatever number they see first, even when the number is irrelevant. In one famous experiment, Kahneman and Tversky had subjects spin a wheel that landed on either 10 or 65 (they thought the wheel was random; it was actually rigged). Then they asked: "What percentage of African countries are members of the United Nations?" Subjects who saw 10 estimated about 25%. Subjects who saw 65 estimated about 45%. The wheel had nothing to do with the question. The anchor influenced the answer anyway.
In markets, anchoring shows up everywhere. The "manufacturer's suggested retail price" anchor makes a 30%-off discount feel like a great deal. The "originally $100, now $40" anchor makes the $40 price feel like a steal — even if the item was always sold at $40 and the $100 anchor is fictional. Negotiations are anchored by the first offer. Real estate appraisals are anchored by the listing price. Salary negotiations are anchored by the first number mentioned.
Anchoring is especially powerful when people have no other information to work with. When you genuinely don't know how much something should cost, you grab the first available number. This is one reason markets for unfamiliar goods (used cars, custom services, art) tend to have wide price dispersions — without good information, everyone is anchoring on different numbers.
Status quo bias
Closely related to loss aversion: people prefer the current state to alternatives, even when the alternatives would be objectively better. The classic example is retirement plan enrollment. In companies where retirement contribution is automatic (employees are enrolled by default and have to opt out), participation is much higher than in companies where employees have to actively sign up. The same workers, the same plans, the same financial situation — but participation is dramatically different depending on whether the default is "in" or "out."
Why? Status quo bias. The default option feels like the natural choice. Changing requires effort, attention, and the risk of regret. The standard model says these factors should be small or zero. In practice, they're enough to swing participation by tens of percentage points.
Status quo bias matters for any policy choice involving defaults. Organ donation rates are much higher in countries where organ donation is the default (with the option to opt out) than in countries where it requires active registration. Insurance enrollment is higher when default-on than default-off. Same with charitable contributions, voting registration, retirement savings, healthcare plan choices, and dozens of other domains.
The lesson: defaults are not neutral. Choosing the default is choosing what most people will end up doing.
Framing effects
The same information can be presented in different ways, and the presentation affects choices in ways the standard model doesn't allow. We saw the surgery example earlier ("90% survive" vs. "10% die"). Here are some others:
- "9 out of 10 dentists recommend" vs. "10% of dentists do not recommend" — same fact, different reactions
- "Beef is 90% lean" vs. "Beef is 10% fat" — same product, different sales
- "Save $10" vs. "10% off" on a $100 purchase — same offer, different responses
- A retirement plan described as "guaranteeing your security" vs. "a savings plan with average returns of 7%" — same plan, different sign-up rates
Framing effects show that human beings aren't responding only to the information itself but to how the information is presented. This is the kind of finding that makes the standard model uncomfortable — and the kind of finding that marketers have known about for a century.
10.5 Nudges and choice architecture
Richard Thaler (Nobel 2017) and Cass Sunstein wrote a 2008 book called Nudge that introduced two concepts that have shaped policy thinking around the world.
Choice architecture: every choice is presented in a context, and the context influences the choice. A grocery store deciding whether to put fruits and vegetables at eye level or candy at eye level is making a choice about choice architecture. A retirement plan administrator deciding whether to set the default contribution rate at 3% or 6% is making a choice about choice architecture. A government deciding whether to make organ donation opt-in or opt-out is making a choice about choice architecture.
The point is that these choices are unavoidable. Someone has to decide where to put the broccoli. Someone has to decide what the default rate is. There is no "neutral" choice architecture. Once you accept this, the question becomes: how should the choice architect make the decision?
Nudge: a feature of choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing economic incentives. Auto-enrollment in retirement plans is a nudge. Putting healthier foods at eye level is a nudge. Setting up "save more tomorrow" plans (where employees commit to increasing their savings rate when they get raises in the future) is a nudge.
Nudges are designed to help people make better decisions for themselves by their own standards. They don't override choices; they make the easier choice the one that the person would, on reflection, want to make. This is different from regulation (which forbids some choices) and from taxation (which changes the economic cost of choices).
Why nudges have caught on
Nudges have become enormously influential in policy because they:
1. Respect freedom of choice (anyone can opt out)
2. Are usually inexpensive to implement (changing a default doesn't require new spending)
3. Have measurable effects (the impact on behavior can be tracked)
4. Are politically palatable (they don't require new taxes or regulations)
Many countries now have "behavioral insights teams" that apply nudge thinking to policy design. The U.K. set up a Behavioural Insights Team in 2010 (the "Nudge Unit"). The U.S. created a similar team during the Obama administration. The OECD has a behavioral insights program. Hundreds of policy decisions have been changed based on nudge research.
The ethical questions
Not everyone is enthusiastic about nudges. Critics raise several concerns:
1. Who decides what's "better"? A nudge that pushes people toward saving more for retirement assumes that retirement saving is good. What if some people genuinely prefer to spend now and accept lower retirement income? The nudge subtly substitutes the choice architect's values for the person's own.
2. Manipulation vs. persuasion. Some people argue that nudges are a form of manipulation — exploiting cognitive biases rather than presenting honest information and letting people decide. The line between "good nudge" and "manipulation" can be blurry.
3. Paternalism in disguise. Nudges are sometimes called "libertarian paternalism" — they preserve choice but push people in the direction the architect prefers. Critics argue this is paternalism with a friendly face.
4. Slippery slope. Once governments are in the business of designing choice architectures to nudge behavior, where does it stop? At what point does benevolent nudging become coercive social engineering?
These objections are real and serious. Most nudge advocates respond that the alternative — pretending choice architecture doesn't exist — is worse, because someone will design the architecture anyway, and the question is whether to do it thoughtfully or by default. But the ethical questions don't fully go away.
10.6 Using the behavioral lens
You now have a behavioral lens. The lens is not a replacement for the standard model — the standard supply-and-demand framework is still the right starting point for almost any market analysis. But the behavioral lens lets you ask sharper questions:
For any market prediction, ask:
1. Are buyers and sellers actually rational in this market, or are there behavioral departures?
2. What role does loss aversion play? Are people making decisions that the simple model wouldn't predict because they're avoiding losses?
3. Is present bias affecting choices? Are people making decisions that they would not endorse on reflection?
4. What is the choice architecture? What are the defaults? What are the anchors? How is information framed?
5. Would a nudge — a thoughtful change to the choice architecture — help people make better decisions?
In Part III (market failures), the behavioral lens will help you understand why some markets fail in ways the standard model can't fully explain. In Part IV (firm behavior), it will help you understand why firms make decisions that look irrational from the outside. In Part V–VII (macro), it will help you understand why monetary policy works partly through expectations (which are themselves partly behavioral) and why fiscal policy has effects the simple model doesn't predict. In Part VIII (contemporary topics), behavioral economics will be central to discussions of healthcare, retirement, technology, and the gig economy.
The lens is not optional. It is now part of how you think.
Key terms recap:
- bounded rationality — humans are rational but cognitively limited
- prospect theory — Kahneman-Tversky's alternative to expected utility theory
- loss aversion — losses hurt about twice as much as equivalent gains feel good
- reference point — the baseline against which gains and losses are measured
- endowment effect — owning something makes you value it more
- present bias — overweighting the present relative to the future
- hyperbolic discounting — declining discount rate over time
- time inconsistency — preferences that change as time passes
- anchoring — being influenced by an initial number, even if irrelevant
- status quo bias — preferring the current state of affairs
- framing effect — how information is presented affects choices
- nudge — a feature of choice architecture that alters behavior without forbidding options
- choice architecture — the design of how choices are presented
Themes touched: Behavioral (foundational — this is the chapter that establishes the lens), Markets power+imperfect, Disagreement (about how to weight behavioral vs. rational evidence), Tradeoffs, Affects daily life.