Now let p be the size of the insurance premium that would make the individual exactly indifferent between taking the fair bet h (which takes on both positive and negative values and, because the bet is fair, has E(h) = 0) and paying p with certainty to avoid the gamble:

E[U(W + h)] = U(W − p),   (7.25)

where W is the individual's current wealth. We now expand both sides of Equation 7.25 using Taylor's series.12 Because p is a fixed amount, a linear approximation to the right side of the equation will suffice:

U(W − p) = U(W) − pU′(W) + higher-order terms.   (7.26)

For the left side, we need a quadratic approximation to allow for the variability in the gamble, h:

E[U(W + h)] = E[U(W) + hU′(W) + (h²/2)U″(W) + higher-order terms]   (7.27)
            = U(W) + E(h)U′(W) + [E(h²)/2]U″(W) + higher-order terms.   (7.28)

If we recall that E(h) = 0, drop the higher-order terms, and use the constant k to represent E(h²)/2, we can equate Equations 7.26 and 7.28 as

U(W) − pU′(W) ≅ U(W) + kU″(W)   (7.29)

or

p ≅ −kU″(W)/U′(W) = kr(W).   (7.30)

That is, the amount that a risk-averse individual is willing to pay to avoid a fair bet is approximately proportional to Pratt's risk aversion measure.13 Because insurance premiums paid are observable in the real world, these are often used to estimate individuals' risk aversion coefficients or to compare such coefficients among groups of individuals. Therefore, it is possible to use market information to learn a bit about attitudes toward risky situations.

11J. W. Pratt, "Risk Aversion in the Small and in the Large," Econometrica (January/April 1964): 122–36.

12Taylor's series provides a way of approximating any differentiable function around some point. If f(x) has derivatives of all orders, it can be shown that f(x + h) = f(x) + hf′(x) + (h²/2)f″(x) + higher-order terms. The point-slope formula in algebra is a simple example of Taylor's series.

13In this case, the factor of proportionality is also proportional to the variance of h because Var(h) = E[h − E(h)]² = E(h²). For an illustration where this equation fits exactly, see Example 7.3.
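Equation 7.30 is easy to check numerically. The sketch below is an added illustration (not part of the text; the helper name is mine): it computes the exact premium solving E[U(W + h)] = U(W − p) for a 50–50 bet of ±$1,000 under logarithmic utility, for which r(W) = 1/W, and compares it with the approximation p ≅ kr(W).

```python
import math

def exact_premium(w, h):
    """Premium p solving E[U(W + h)] = U(W - p) for a 50-50 bet of +/- h
    under logarithmic utility U(W) = ln W."""
    expected_u = 0.5 * math.log(w + h) + 0.5 * math.log(w - h)
    return w - math.exp(expected_u)

w, h = 100_000, 1_000
k = 0.5 * h**2               # k = E(h^2)/2 for this bet
approximation = k * (1 / w)  # Equation 7.30 with r(W) = 1/W for ln utility

print(round(exact_premium(w, h), 4))  # 5.0001 (exact premium)
print(approximation)                  # 5.0 (Pratt approximation)
```

The two numbers agree to a fraction of a cent for a bet this small relative to wealth, which is exactly the sense in which Equation 7.30 is a local approximation.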
Risk aversion and wealth

An important question is whether risk aversion increases or decreases with wealth. Intuitively, one might think that the willingness to pay to avoid a given fair bet would decrease as wealth increases because decreasing marginal utility would make potential losses less serious for high-wealth individuals. This intuitive answer is not necessarily correct, however, because decreasing marginal utility also makes the gains from winning gambles less attractive. Thus, the net result is indeterminate; it all depends on the precise shape of the utility function. Indeed, if utility is quadratic in wealth,

U(W) = a + bW + cW²,   (7.31)

where b > 0 and c < 0, then Pratt's risk aversion measure is

r(W) = −U″(W)/U′(W) = −2c/(b + 2cW),   (7.32)

which, contrary to intuition, increases as wealth increases. On the other hand, if utility is logarithmic in wealth,

U(W) = ln W,   (7.33)

then we have

r(W) = −U″(W)/U′(W) = 1/W,   (7.34)

which does indeed decrease as wealth increases. The exponential utility function

U(W) = −e^(−AW) = −exp(−AW)   (7.35)

(where A is a positive constant) exhibits constant absolute risk aversion over all ranges of wealth because now

r(W) = −U″(W)/U′(W) = A²e^(−AW)/(Ae^(−AW)) = A.   (7.36)

This feature of the exponential utility function14 can be used to provide some numerical estimates of the willingness to pay to avoid gambles, as the next example shows.
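Before turning to the example, a quick added check (my own sketch, not from the text) confirms what constant absolute risk aversion implies: the premium paid to avoid a given absolute gamble is the same at every level of initial wealth.

```python
import math

A = 0.0001  # coefficient of absolute risk aversion

def premium_to_avoid(w0, h):
    """Exact premium p solving E[-exp(-A(W0 + h))] = -exp(-A(W0 - p))
    for a 50-50 bet of +/- h under exponential (CARA) utility."""
    expected_u = -0.5 * (math.exp(-A * (w0 + h)) + math.exp(-A * (w0 - h)))
    certainty_equiv = -math.log(-expected_u) / A
    return w0 - certainty_equiv

for w0 in (50_000, 100_000, 1_000_000):
    print(w0, round(premium_to_avoid(w0, 1_000), 2))  # ~49.92 at every wealth level
```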
EXAMPLE 7.3 Constant Risk Aversion

Suppose an individual whose initial wealth is W0 and whose utility function exhibits constant absolute risk aversion is facing a situation in which wealth is normally distributed with mean μ and standard deviation σ, so that the probability density function is f(W) = (1/(σ√(2π)))e^(−[(W−μ)/σ]²/2). This person's expected utility is

E[U(W)] = ∫_(−∞)^(∞) U(W)f(W) dW = (1/(σ√(2π))) ∫_(−∞)^(∞) −e^(−AW) e^(−[(W−μ)/σ]²/2) dW.   (7.39)

Perhaps surprisingly, this integration is not too difficult to accomplish, although it does take patience. Performing this integration and taking a variety of monotonic transformations of the resulting expression yields the final result that

E[U(W)] ≅ μ − (A/2)σ².   (7.40)

Hence expected utility is a linear function of the two parameters of the wealth probability density function, and the individual's risk aversion parameter (A) determines the size of the negative effect of variability on expected utility. For example, suppose a person has invested his or her funds so that wealth has an expected value of $100,000 but a standard deviation (σ) of $10,000. Therefore, with the Normal distribution, he or she might expect wealth to decrease below $83,500 about 5 percent of the time and increase above $116,500 a similar fraction of the time. With these parameters, if A = 0.0001 = 10⁻⁴, expected utility is given by

E[U(W)] = 100,000 − (10⁻⁴/2)(10,000)² = 95,000.

Hence this person receives the same utility from his or her risky wealth as would be obtained from a certain wealth of $95,000. A more risk-averse person might have A = 0.0003, and in this case the certainty equivalent of his or her wealth would be $85,000.

QUERY: Suppose this person had two ways to invest his or her wealth: Allocation 1, with μ₁ = 107,000 and σ₁ = 10,000; Allocation 2, with μ₂ = 102,000 and σ₂ = 2,000. How would this person's attitude toward risk affect his or her choice between these allocations?15

15This numerical example (roughly) approximates historical data on real returns of stocks and bonds, respectively, although the calculations are illustrative only.
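Certainty equivalents under Equation 7.40 take one line to compute. The added sketch below (my own; the allocation numbers are the ones given in the Query) evaluates the example's figures and both allocations at both risk-aversion parameters.

```python
def certainty_equivalent(mu, sigma, A):
    """Certainty equivalent of normally distributed wealth under CARA utility,
    using E[U(W)] ~ mu - (A/2)*sigma**2 (Equation 7.40)."""
    return mu - (A / 2) * sigma**2

print(certainty_equivalent(100_000, 10_000, 0.0001))  # 95000.0
print(certainty_equivalent(100_000, 10_000, 0.0003))  # 85000.0

# The Query's two allocations, evaluated at both risk-aversion parameters:
for mu, sigma in ((107_000, 10_000), (102_000, 2_000)):
    print(certainty_equivalent(mu, sigma, 0.0001),
          certainty_equivalent(mu, sigma, 0.0003))
```

Comparing the printed certainty equivalents row by row shows how the ranking of the two allocations depends on the size of A.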
Relative risk aversion

It seems unlikely that the willingness to pay to avoid a given gamble is independent of a person's wealth. A more appealing assumption may be that such willingness to pay is inversely proportional to wealth and that the expression

rr(W) = Wr(W) = −W·U″(W)/U′(W)   (7.41)

might be approximately constant. Following the terminology proposed by J. W. Pratt,16 the rr(W) function defined in Equation 7.41 is a measure of relative risk aversion. The power utility function

U(W, R) = W^R/R   if R < 1, R ≠ 0;
U(W, R) = ln W    if R = 0   (7.42)

exhibits diminishing absolute risk aversion,

r(W) = −U″(W)/U′(W) = −(R − 1)W^(R−2)/W^(R−1) = (1 − R)/W,   (7.43)

but constant relative risk aversion:17

rr(W) = Wr(W) = 1 − R.   (7.44)

Empirical evidence is generally consistent with values of R in the range of −3 to −1. Hence individuals seem to be somewhat more risk averse than is implied by the logarithmic utility function, although in many applications that function provides a reasonable approximation. It is useful to note that the constant relative risk aversion utility function in Equation 7.42 has the same form as the general CES utility function we first described in Chapter 3. This provides some geometric intuition about the nature of risk aversion that we will explore later in this chapter.

16Pratt, "Risk Aversion."
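The derivatives behind Equations 7.43 and 7.44 can be verified symbolically. The following added check uses sympy (an illustration I have included, not part of the text):

```python
import sympy as sp

W = sp.symbols("W", positive=True)
R = sp.symbols("R")

U = W**R / R   # power utility from Equation 7.42 (R < 1, R != 0)

r = sp.simplify(-sp.diff(U, W, 2) / sp.diff(U, W))  # absolute risk aversion
rr = sp.simplify(W * r)                             # relative risk aversion

print(r)   # (1 - R)/W, as in Equation 7.43
print(rr)  # 1 - R, as in Equation 7.44
```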
EXAMPLE 7.4 Constant Relative Risk Aversion

An individual whose behavior is characterized by a constant relative risk aversion utility function will be concerned about proportional gains or losses of wealth. Therefore, we can ask what fraction of initial wealth (f) such a person would be willing to give up to avoid a fair gamble of, say, 10 percent of initial wealth. First, we assume R = 0, so the logarithmic utility function is appropriate. Setting the utility of this individual's certain remaining wealth equal to the expected utility of the 10 percent gamble yields

ln[(1 − f)W0] = 0.5 ln(1.1W0) + 0.5 ln(0.9W0).   (7.45)

Because each term contains ln W0, initial wealth can be eliminated from this expression:

ln(1 − f) = 0.5[ln(1.1) + ln(0.9)] = ln(0.99)^0.5;

hence

1 − f = (0.99)^0.5 = 0.995   and   f = 0.005.   (7.46)

Thus, this person will sacrifice up to 0.5 percent of wealth to avoid the 10 percent gamble. A similar calculation can be used for the case R = −2 to yield

f = 0.015.   (7.47)

Hence this more risk-averse person would be willing to give up 1.5 percent of his or her initial wealth to avoid a 10 percent gamble.

QUERY: With the constant relative risk aversion function, how does this person's willingness to pay to avoid a given absolute gamble (say, of $1,000) depend on his or her initial wealth?

17Some authors write the utility function in Equation 7.42 as U(W) = W^(1−a)/(1 − a) and seek to measure a = 1 − R. In this case, a is the relative risk aversion measure. The constant relative risk aversion function is sometimes abbreviated as CRRA utility.
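The same certainty-equivalent condition can be solved for any R. The sketch below is an added illustration of the calculation in Equations 7.45–7.47 (the function name is mine):

```python
import math

def fraction_sacrificed(R, gamble=0.10):
    """Fraction f of wealth given up to avoid a fair proportional +/- bet,
    under CRRA utility U(W) = W**R / R (or ln W when R == 0)."""
    if R == 0:
        ce = math.exp(0.5 * math.log(1 + gamble) + 0.5 * math.log(1 - gamble))
    else:
        expected_u_scaled = 0.5 * (1 + gamble)**R + 0.5 * (1 - gamble)**R
        ce = expected_u_scaled**(1 / R)   # the 1/R factors cancel on both sides
    return 1 - ce

print(round(fraction_sacrificed(0), 4))   # 0.005  (Equation 7.46)
print(round(fraction_sacrificed(-2), 4))  # 0.0149 (Equation 7.47's 0.015)
```

Note that W0 never enters the function: because utility is CRRA, the fraction sacrificed is independent of initial wealth.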
Methods for Reducing Uncertainty and Risk

We have seen that risk-averse people will avoid gambles and other risky situations if possible. Often it is impossible to avoid risk entirely. Walking across the street involves some risk of harm. Burying one's wealth in the backyard is not a perfectly safe investment strategy because there is still some risk of theft (to say nothing of inflation). Our analysis thus far implies that people would be willing to pay something to at least reduce these risks if they cannot be avoided entirely. In the next four sections, we will study each of four different methods that individuals can take to mitigate the problem of risk and uncertainty: insurance, diversification, flexibility, and information.

Insurance

We have already discussed one such strategy: buying insurance. Risk-averse people would pay a premium to have the insurance company cover the risk of loss. Each year, people in the United States spend more than half a trillion dollars on insurance of all types. Most commonly, they buy coverage for their own life, for their home and cars, and for their health care costs. But insurance can be bought (perhaps at a high price) for practically any risk imaginable, ranging from earthquake insurance for a house along a fault line to special coverage for a surgeon against a hand injury. A risk-averse person would always want to buy fair insurance to cover any risk he or she faces. No insurance company could afford to stay in business if it offered fair insurance (in the sense that the premium exactly equals the expected payout for claims). Besides covering claims, insurance companies must also maintain records, collect premiums, investigate fraud, and perhaps return a profit to shareholders. Hence an insurance customer can always expect to pay more than an actuarially fair premium. If people are sufficiently risk averse, they will even buy unfair insurance, as shown in Example 7.2; the more risk averse they are, the higher the premium they would be willing to pay.

Several factors make insurance difficult or impossible to provide. Large-scale disasters such as hurricanes and wars may result in such large losses that the insurance company would go bankrupt before it could pay all the claims. Rare and unpredictable events (e.g., war, nuclear power plant accidents) offer no reliable track record for insurance companies to establish premiums. Two other reasons for the absence of insurance coverage relate to the informational disadvantage the company may have relative to the customer. In some cases, the individual may know more about the likelihood that he or she will suffer a loss than the insurance company. Only the "worst" customers (those who expect larger or more likely losses) may end up buying an insurance policy. This adverse selection problem may unravel the whole insurance market unless the company can find a way to control who buys (through some sort of screening or compulsion). Another problem is that having insurance may make customers less willing to take steps to avoid losses, for example, driving more recklessly with auto insurance or eating fatty foods and smoking with health insurance. This so-called moral hazard problem again may impair the insurance market unless the insurance company can find a way to cheaply monitor customer behavior. We will discuss the adverse selection and moral hazard problems in more detail in Chapter 18, along with ways insurance companies can combat them; besides the screening and monitoring strategies just mentioned, these include offering only partial insurance and requiring the payment of deductibles and copayments.
Diversification

A second way for risk-averse individuals to reduce risk is by diversifying. This is the economic principle behind the adage, "Don't put all your eggs in one basket." By suitably spreading risk around, it may be possible to reduce the variability of an outcome without lowering the expected payoff.

The most familiar setting in which diversification comes up is in investing. Investors are routinely advised to "diversify your portfolio." To understand the wisdom behind this advice, consider an example in which a person has wealth W to invest. This money can be invested in two independent risky assets, 1 and 2, which have equal expected values (the mean returns are μ₁ = μ₂) and equal variances (the variances are σ₁² = σ₂²). A person whose undiversified portfolio, UP, includes just one of the assets (putting all his or her "eggs" in that "basket") would earn an expected return of μ_UP = μ₁ = μ₂ and would face a variance of σ_UP² = σ₁² = σ₂².

Suppose instead the individual chooses a diversified portfolio, DP. Let α₁ be the fraction invested in the first asset and α₂ = 1 − α₁ the fraction invested in the second. We will see that the person can do better than the undiversified portfolio in the sense of getting a lower variance without changing the expected return. The expected return on the diversified portfolio does not depend on the allocation across assets and is the same as for either asset alone:

μ_DP = α₁μ₁ + (1 − α₁)μ₂ = μ₁ = μ₂.   (7.48)

To see this, refer back to the rules for computing expected values from Chapter 2. The variance will depend on the allocation between the two assets:
σ_DP² = α₁²σ₁² + (1 − α₁)²σ₂² = (1 − 2α₁ + 2α₁²)σ₁².   (7.49)

This calculation again can be understood by reviewing the section on variances in Chapter 2. There you will be able to review the two "facts" used in this calculation: First, the variance of a constant times a random variable is that constant squared times the variance of the random variable; second, the variance of a sum of independent random variables, because their covariance is 0, equals the sum of the variances.

Choosing α₁ to minimize Equation 7.49 yields α₁ = 1/2 and σ_DP² = σ₁²/2. Therefore, the optimal portfolio spreads wealth equally between the two assets, maintaining the same expected return as an undiversified portfolio but reducing variance by half. Diversification works here because the assets' returns are independent. When one return is low, there is a chance the other will be high, and vice versa. Thus, the extreme returns are balanced out at least some of the time, reducing the overall variance. Diversification will work in this way as long as there is not perfect correlation in the asset returns so that they are not effectively the same asset. The less correlated the assets are, the better diversification will work to reduce the variance of the overall portfolio.
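Equation 7.49 and the α₁ = 1/2 result can be confirmed directly. The sketch below is an added illustration (function names are mine): it grid-searches the allocation and then checks the halved variance by simulating independent normal returns.

```python
import random

sigma2 = 1.0  # common variance of the two independent assets

def portfolio_variance(a1, s2=sigma2):
    """Equation 7.49: variance of a portfolio with weight a1 on asset 1."""
    return (1 - 2 * a1 + 2 * a1**2) * s2

# Grid search over allocations confirms the minimum at a1 = 1/2.
best = min((portfolio_variance(a / 100), a / 100) for a in range(101))
print(best)  # (0.5, 0.5): half the single-asset variance, at a1 = 0.5

# Simulation check with independent standard normal returns.
random.seed(1)
draws = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
mix = [0.5 * x + 0.5 * y for x, y in draws]
var = sum(m * m for m in mix) / len(mix) - (sum(mix) / len(mix)) ** 2
print(round(var, 2))  # ~0.5, matching the analytic result
```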
The example, constructed to highlight the benefits of diversification as simply as possible, has the artificial element that asset returns were assumed to be equal. Diversification was a "free lunch" in that the variance of the portfolio could be reduced without reducing the expected return compared with an undiversified portfolio. If the expected return from one of the assets (say, asset 1) is higher than the other's, then diversification into the other asset would no longer be a "free lunch" but would result in a lower expected return. Still, the benefits from risk reduction can be great enough that a risk-averse investor would be willing to put some share of wealth into the asset with the lower expected return. A practical example of this idea is related to advice one would give to an employee of a firm with a stock purchase plan. Even if the plan allows employees to buy shares of the company's stock at a generous discount compared with the market, the employee may still be advised not to invest all savings in that stock because otherwise the employee's entire savings, to say nothing of his or her salary and perhaps even house value (to the extent house values depend on the strength of businesses in the local economy), is tied to the fortunes of a single company, generating a tremendous amount of risk. The Extensions provide a much more general analysis of the problem of choosing the optimal portfolio.

However, the principle of diversification applies to a much broader range of situations than financial markets. For example, students who are uncertain about where their interests lie or about what skills will be useful on the job market are well advised to register for a diverse set of classes rather than exclusively technical or artistic ones.

Flexibility

Diversification is a useful method to reduce risk for a person who can divide up a decision by allocating small amounts of a larger sum among a number of different choices. In some situations, a decision cannot be divided; it is all or nothing. For example, in shopping for a car, a consumer cannot combine the attributes that he or she likes from one model (say, fuel efficiency) with those of another (say, horsepower or power windows) by buying half of each; cars are sold as a unit. With all-or-nothing decisions, the decision-maker can obtain some of the benefit of diversification by making flexible decisions. Flexibility allows the person to adjust the initial decision, depending on how the future unfolds. The more uncertain the future, the more valuable this flexibility. Flexibility keeps the decision-maker from being tied to one course of action and instead provides a number of options. The decision-maker can choose the best option to suit later circumstances.

A good example of the value of flexibility comes from considering the fuels on which cars are designed to run. Until now, most cars were limited in how much biofuel (such as ethanol made from crops) could be combined with petroleum products (such as gasoline or diesel) in the fuel mix.
A purchaser of such a car would have difficulties if governments passed new regulations increasing the ratio of ethanol in car fuels or banning petroleum products entirely. New cars have been designed that can burn ethanol exclusively, but such cars are not useful if current conditions continue to prevail because most filling stations do not sell fuel with high concentrations of ethanol. A third type of car has internal components that can handle a variety of types of fuel, both petroleum-based and ethanol, and any proportions of the two. Such cars are expensive to build because of the specialized components involved, but a consumer might pay the additional expense anyway because the car would be useful whether or not biofuels become more important over the life of the car.18

Types of options

The ability of "flexible-fuel" cars to burn any mix of petroleum-based fuels and biofuels is valuable because it provides the owner with more options relative to a car that can run on only one type of fuel. Readers are probably familiar with the notion that options are valuable from another context where the term is frequently used—financial markets—where one hears about stock options and other forms of options contracts. There is a close connection between the option implicit in flexible-fuel cars and these option contracts that we will investigate in more detail. Before discussing the similarities between the options arising in different contexts, we introduce some terms to distinguish them.

18While the current generation of flexible-fuel cars involves state-of-the-art technology, the first such car, produced back in 1908, was Henry Ford's Model T, one of the top-selling cars of all time. The availability of cheap gasoline may have swung the market toward competitors' single-fuel cars, spelling the demise of the Model T. For more on the history of this model, see L. Brooke, Ford Model T: The Car That Put the World on Wheels (Minneapolis: Motorbooks, 2008).

Financial option contract. A financial option contract offers the right, but not the obligation, to buy or sell an asset (such as a share of stock) during some future period at a certain price.

Real option. A real option is an option arising in a setting outside of financial markets. The flexible-fuel car can be viewed as an ordinary car combined with an additional real option to burn biofuels if those become more important in the future.
Financial option contracts come in a variety of forms, some of which can be complex. There are also many different types of real options, and they arise in many different settings, sometimes making it difficult to determine exactly what sort of option is embedded in the situation. Still, all options share three fundamental attributes. First, they specify the underlying transaction, whether it is a stock to be traded or a car or fuel to be purchased. Second, they specify a period over which the option may be exercised. A stock option may specify a period of 1 year, for example. The option embedded in a flexible-fuel car preserves the owner's option during the operating life of the car. The longer the period over which the option extends, the more valuable it is because more uncertainty can be resolved during this period. Third, the option contract specifies a price. A stock option might sell for a price of $70. If this option is later traded on an exchange, its price might vary from moment to moment as the markets move. Real options do not tend to have explicit prices, but sometimes implicit prices can be calculated. For example, if a flexible-fuel car costs $5,000 more than an otherwise equivalent car that burns one type of fuel, then this $5,000 could be viewed as the option price.

Model of real options

Let x embody all the uncertainty in the economic environment. In the case of the flexible-fuel car, x might reflect the price of fossil fuels relative to biofuels or the stringency of government regulation of fossil fuels. In terms of the section on statistics in Chapter 2, x is a random variable (sometimes referred to as the "state of the world") that can take on possibly many different values. The individual has some number, i = 1, …, n, of choices currently available. Let Ai(x) be the payoffs provided by choice i, where the argument (x) allows each choice to provide a different pattern of returns depending on how the future turns out.

Figure 7.2a illustrates the case of two choices. The first choice provides a decreasing payoff as x increases, indicated by the downward slope of A1.
This might correspond to ownership of a car that runs only on fossil fuels; as biofuels become more important than fossil fuels, the value of a car burning only fossil fuels decreases. The second choice provides an increasing payoff, perhaps corresponding to ownership of a car that runs only on biofuels. Figure 7.2b translates the payoffs into (von Neumann–Morgenstern) utilities that the person obtains from the payoffs by graphing U(Ai) rather than Ai. The bend introduced in moving from payoffs to utilities reflects the diminishing marginal utility from higher payoffs for a risk-averse person.

If the person does not have the flexibility provided by a real option, he or she must make the choice before observing how the state x turns out. The individual should choose the single alternative that is best on average. His or her expected utility from this choice is

max{E[U(A1)], …, E[U(An)]}.   (7.50)

FIGURE 7.2 The Nature of a Real Option. Panel (a) shows the payoffs and panel (b) shows the utilities provided by two alternatives across states of the world (x). If the decision has to be made upfront, the individual chooses the single curve having the highest expected utility. If the real option to make either decision can be preserved until later, the individual can obtain the expected utility of the upper envelope of the curves, shown in bold. [Panel (a), "Payoffs from alternatives," plots A1 (downward sloping) and A2 (upward sloping) against the state x; panel (b), "Utilities from alternatives," plots U(A1) and U(A2), which cross at x′.]

Figure 7.2 does not provide enough information to judge which expected utility is higher because we do not know the likelihoods of the different x's, but if the x's are about equally likely, then it looks as though the individual would choose the second alternative, providing higher utility over a larger range. The individual's expected utility from this choice is E[U(A2)].

On the other hand, if the real option can be preserved to make a choice that responds to which state of the world x has occurred, the person will be better off. In the car application, the real option could correspond to buying a flexible-fuel car, which does not lock the buyer into one fuel but allows the choice of whatever fuel turns out to be most common or inexpensive over the life of the car.
In Figure 7.2, rather than choosing a single alternative, the person would choose the first option if x < x′ and the second option if x > x′. The utility provided by this strategy is given by the bold curve in Figure 7.2b, which is the "upper envelope" of the curves for the individual options. With a general number (n) of choices, expected utility from this upper envelope of individual options is

E{max[U(A1), …, U(An)]}.   (7.51)

The expected utility in Equation 7.51 is higher than in 7.50. This may not be obvious at first glance because it seems that simply swapping the order of the expectations and "max" operators should not make a difference. But indeed it does. Whereas Equation 7.50 is the expected utility associated with the best single utility curve, Equation 7.51 is the expected utility associated with the upper envelope of all the utility curves.19

19The result can be proved formally using Jensen's inequality, introduced in footnote 10. The footnote discusses the implications of Jensen's inequality for concave functions: E[f(x)] ≤ f[E(x)]. Jensen's inequality has the reverse implication for convex functions: E[f(x)] ≥ f[E(x)]. In other words, for convex functions, the result is greater if the expectations operator is applied outside of the function than if the order of the two is reversed. In the options context, the "max" operator has the properties of a convex function. This can be seen from Figure 7.2b, where taking the upper envelope "convexifies" the individual curves, turning them into more of a V-shape.
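The ordering of Equations 7.50 and 7.51 is easy to see by simulation. The added sketch below (mine, not from the text) uses the two payoff curves that anticipate Example 7.5, A1 = 1 − x and A2 = x with x uniform on [0, 1], and treats utility as equal to the payoff (risk neutrality) to keep the illustration minimal.

```python
import random

random.seed(42)
xs = [random.random() for _ in range(200_000)]  # states x, uniform on [0, 1]

u1 = [1 - x for x in xs]  # utility of alternative 1 in each state
u2 = [x for x in xs]      # utility of alternative 2 in each state

mean = lambda v: sum(v) / len(v)

upfront = max(mean(u1), mean(u2))                     # Equation 7.50: ~0.5
envelope = mean([max(a, b) for a, b in zip(u1, u2)])  # Equation 7.51: ~0.75

print(round(upfront, 3), round(envelope, 3))  # the envelope is never smaller
```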
FIGURE 7.3 More Options Cannot Make the Individual Decision-Maker Worse Off. The addition of a third alternative to the two drawn in Figure 7.2 is valuable in (a) because it shifts the upper envelope (shown in bold) of utilities up. The new alternative is worthless in (b) because it does not shift the upper envelope, but the individual is not worse off for having it. [Panel (a): additional valuable option; panel (b): additional worthless option. Both panels plot U(A1), U(A2), and U(A3) against the state x.]

More options are better (generally)

Adding more options can never harm an individual decision-maker (as long as he or she is not charged for them) because the extra options can always be ignored. This is the essence of options: They give the holder the right—but not the obligation—to choose them. Figure 7.3 illustrates this point, showing the effect of adding a third option to the two drawn in Figure 7.2. In the first panel, the person strictly benefits from the third option because there are some states of the world (the highest values of x in the figure) for which it is better than any other alternative, shifting the upper envelope of utilities (the bold curve) up. The third option is worthless in the second panel. Although the third option is not the worst option for many states of the world, it is never the best and so does not improve the upper envelope of utilities relative to Figure 7.2. Still, the addition of the third option is not harmful.

This insight may no longer hold in a strategic setting with multiple decision-makers. In a strategic setting, economic actors may benefit from having some of their options cut off. This may allow a player to commit to a narrower course of action that he or she would not have chosen otherwise, and this commitment may affect the actions of other parties, possibly to the benefit of the party making the commitment. A famous illustration of this point is provided in one of the earliest treatises on military strategy, by Sun Tzu, a Chinese general writing in 400 BC. It seems crazy for an army to destroy all means of retreat, burning bridges behind itself and sinking its own ships, among other measures. Yet this is what Sun Tzu advocated as a military tactic. If the second army observes that the first cannot retreat and will fight to the death, it may retreat itself before engaging the first. We will analyze such strategic issues more formally in the next chapter on game theory.

Computing option value

We can push the analysis further to derive a mathematical expression for the value of a real option. Let F be the fee that has to be paid for the ability to choose the best alternative after x has been realized instead of before. The individual would
be willing to pay the fee as long as

E{max[U(A1(x) − F), …, U(An(x) − F)]} ≥ max{E[U(A1(x))], …, E[U(An(x))]}.   (7.52)

The right side is the expected utility from making the choice beforehand, repeated from Equation 7.50. The left side allows for the choice to be made after x has occurred, a benefit, but subtracts the fee for the option from every payoff. The fee is naturally assumed to be paid up front, and thus reduces wealth by F whichever option is chosen later. The real option's value is the highest F for which Equation 7.52 is still satisfied, which of course is the F for which the condition holds with equality.

EXAMPLE 7.5 Value of a Flexible-Fuel Car

Let's work out the option value provided by a flexible-fuel car in a numerical example. Let A1(x) = 1 − x be the payoff from a fossil-fuel-only car and A2(x) = x be the payoff from a biofuel-only car. The state of the world, x, reflects the relative importance of biofuels compared with fossil fuels over the car's lifespan. Assume x is a random variable that is uniformly distributed between 0 and 1 (the simplest continuous random variable to work with here). The statistics section in Chapter 2 provides some detail on the uniform distribution, showing that the probability density function (PDF) is f(x) = 1 in the special case when the uniform random variable ranges between 0 and 1.

Risk neutrality. To make the calculations as easy as possible to start, suppose first that the car buyer is risk neutral, obtaining a utility level equal to the payoff level. Suppose the buyer is forced to choose a biofuel car. This provides an expected utility of

E[A2] = ∫₀¹ A2(x)f(x) dx = ∫₀¹ x dx = (x²/2)|₀¹ = 1/2,   (7.53)

where the integral simplifies because f(x) = 1. Similar calculations show that the expected utility from buying a fossil-fuel car is also 1/2.
Therefore, if only single-fuel cars are available, the person is indifferent between them, obtaining expected utility 1/2 from either.

Now suppose that a flexible-fuel car is available, which allows the buyer to obtain either A1(x) or A2(x), whichever turns out to be higher. The buyer's expected utility from this car is

E[max(A1, A2)] = ∫₀¹ max(1 − x, x)f(x) dx = ∫₀^(1/2) (1 − x) dx + ∫_(1/2)^1 x dx
               = 2∫_(1/2)^1 x dx = x²|_(1/2)^1 = 3/4.   (7.54)

The second line in Equation 7.54 follows from the fact that the two integrals in the preceding expression are symmetric. Because the buyer's utility exactly equals the payoffs, we can compute the option value of the flexible-fuel car directly by taking the difference between the expected payoffs in Equations 7.53 and 7.54, which equals 1/4. This is the maximum premium the person would pay for the flexible-fuel car over a single-fuel car. Scaling payoffs to more realistic levels by multiplying by, say, $10,000, the price premium (and the option value) of the flexible-fuel car would be $2,500. This calculation demonstrates the general insight that options are a way of dealing with uncertainty that have value even for risk-neutral individuals. The next part of the example investigates whether risk aversion makes options more or less valuable.

Risk aversion. Now suppose the buyer is risk averse, having von Neumann–Morgenstern utility function U(x) = √x. The buyer's expected utility from a biofuel car is

E[U(A2)] = ∫₀¹ √(A2(x)) f(x) dx = ∫₀¹ x^(1/2) dx = (2/3)x^(3/2)|₀¹ = 2/3,   (7.55)

which is the same as from a fossil-fuel car, as similar calculations show. Therefore, a single-fuel car of whatever type provides an expected utility of 2/3. The expected utility from a flexible-fuel car that costs F more than a single-fuel car is
E{max[U(A1(x) − F), U(A2(x) − F)]} = ∫₀¹ max(√(1 − x − F), √(x − F)) f(x) dx
    = 2∫_(1/2)^1 √(x − F) dx
    = 2∫_(1/2 − F)^(1 − F) √u du
    = (4/3)[(1 − F)^(3/2) − (1/2 − F)^(3/2)].   (7.56)

FIGURE 7.4 Graphical Method for Computing the Premium for a Flexible-Fuel Car. To find the maximum premium F that the risk-averse buyer would be willing to pay for the flexible-fuel car, we plot the expected utility from a single-fuel car from Equation 7.55 and from the flexible-fuel car from Equation 7.56 and see the value of F where the curves cross. [The figure graphs expected utility against F over the range 0 to 0.5; the flat "single fuel" line at 2/3 crosses the declining "flexible fuel" curve just below F = 0.3.]

The calculations involved in Equation 7.56 are somewhat involved and thus require some discussion.
The second line relies on the symmetry of the two integrals appearing there, which allows us to collapse them into two times the value of one of them; we chose the simpler of the two for these purposes. The third line uses the change of variables u = x − F to simplify the integral. (See Equation 2.135 in Chapter 2 for another example of the change-of-variables trick and further discussion.)

To find the maximum premium the buyer would pay for a flexible-fuel car, we can set Equations 7.55 and 7.56 equal and solve for F. Unfortunately the resulting equation is too complicated to be solved analytically. One simple approach is to graph the last line of Equation 7.56 for a range of values of F and eyeball where the graph hits the required value of 2/3 from Equation 7.55. This is done in Figure 7.4, where we see that this value of F is slightly less than 0.3 (0.294 to be more precise). Therefore, the risk-averse buyer is willing to pay a premium of up to 0.294 for the flexible-fuel car, which is also the option value of this type of car. Scaling up by $10,000 for more realistic monetary values, the price premium would be $2,940. This is $440 more than the risk-neutral buyer was willing to pay. Thus, the option value is greater in this case for the risk-averse buyer.

QUERY: Does risk aversion always increase option value? If so, explain why. If not, modify the example with different shapes to the payoff functions to provide an example where the risk-neutral buyer would pay more.
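The crossing point in Figure 7.4 can also be found by bisecting on the closed form in Equation 7.56. The sketch below is an added numerical check (mine, not from the text):

```python
def eu_flexible(F):
    """Last line of Equation 7.56: expected utility of a flexible-fuel car
    that costs F more than a single-fuel car."""
    return (4 / 3) * ((1 - F) ** 1.5 - (0.5 - F) ** 1.5)

# Bisect on eu_flexible(F) = 2/3, the single-fuel utility from Equation 7.55.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if eu_flexible(mid) > 2 / 3:  # buyer still better off: premium can rise
        lo = mid
    else:
        hi = mid
print(round(lo, 4))  # ~0.2935, the premium reported as 0.294 in the text
```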
Option value of delay

Society seems to frown on procrastinators. "Do not put off to tomorrow what you can do today" is a familiar maxim. Yet the existence of real options suggests a possible value in procrastination. There may be a value in delaying big decisions—such as the purchase of a car—that are not easily reversed later. Delaying these big decisions allows the decision-maker to preserve option value and gather more information about the future. To the outside observer, who may not understand all the uncertainties involved in the situation, it may appear that the decision-maker is too inert, failing to make what looks to be the right decision at the time. In fact, delaying may be exactly the right choice to make in the face of uncertainty.

Choosing one course of action rules out other courses later. Delay preserves options. If circumstances continue to be favorable or become even more so, the action can still be taken later. But if the future changes and the action is unsuitable, the decision-maker may have saved a lot of trouble by not making it. The value of delay can be seen by returning to the car application. Suppose for the sake of this example that only single-fuel cars (of either type, fossil fuel or biofuel) are available on the market; flexible-fuel cars have not yet been invented. Even if circumstances start to favor the biofuel car, with the number of filling stations appearing to tip toward offering biofuels, the buyer may want to hold off buying a car until he or she is more sure. This may be true even if the buyer is forgoing considerable consumer surplus from the use of a new car during the period of delay. The problem is that if biofuels do not end up taking over the market, the buyer may be left with a car that is hard to fuel up and hard to trade in for a car burning the other fuel type. The buyer would be willing to experience delay costs up to F to preserve flexibility.

The value of delay hinges on the irreversibility of the underlying decision. If in the car example the buyer could recover close to the purchase price by selling the car on the used-car market, there would be no reason to delay purchasing. But it is well known that the value of a new car decreases precipitously once it is driven off the car lot (we will discuss reasons for this, including the "lemons effect," in Chapter 18); therefore, it may not be so easy to reverse the purchase of a car.

Implications for cost–benefit analysis

To an outside observer, delay may seem like a symptom of irrationality or ignorance. Why is the decision-maker overlooking an opportunity to take a beneficial action? The chapter has now provided several reasons why a rational decision-maker might not want to pursue an action even though the expected benefits from the action outweigh the expected costs. First, a risk-averse individual might avoid a gamble even if it provided a positive expected monetary payoff (because of the decreasing marginal utility from money).
And option value provides a further reason for the action not to be undertaken: The decision-maker might be delaying until he or she has more certainty about the potential results of the decision.

Many of us have come across the cost–benefit rule, which says that an action should be taken if anticipated costs are less than benefits. This is generally a sensible rule, providing the correct course of action in simple settings without uncertainty. One must be more careful in applying the rule in settings involving uncertainty. The correct decision rule is more complicated because it should account for risk preferences (by converting payoffs into utilities) and for the option value of delay, if present. Failure to apply the simple cost–benefit rule in settings with uncertainty may indicate sophistication rather than irrationality.20

20Economists are puzzled by consumers' reluctance to install efficient appliances even though the savings on energy bills are likely to defray the appliances' purchase price before long. An explanation from behavioral economics is that consumers are too ignorant to perform the cost–benefit calculations or are too impatient to wait for the energy savings to accumulate. K. Hassett and G. Metcalf, in "Energy Conservation Investment: Do Consumers Discount the Future Correctly?" Energy Policy (June 1993): 710–16, suggest that consumer inertia may be rational delay in the face of fluctuating energy prices. See Problem 7.9 for a related numerical example.

Information

The fourth method of reducing the uncertainty involved in a situation is to acquire better information about the likely outcome that will arise. We have already considered a version of this in the previous section, where we considered the strategy of preserving options while delaying a decision until better information is received. Delay involved some costs, which can be thought of as a sort of "purchase price" for the information acquired. Here, we will be more direct in considering information as a good that can be purchased directly and analyze in greater detail why and how much individuals are willing to pay for it.

Information as a good

By now it should be clear to the reader that information is a valuable economic resource. We have seen an example already: A buyer can make a better decision about which type of car to buy if he or she has better information about the sort of fuels that will be readily available during the life of the car. But the examples do not end there. Shoppers who know where to buy high-quality goods cheaply can make their budgets stretch further than those who do not; doctors can provide better medical care if they are up to date on the latest scientific research.

The study of information economics has become one of the major areas in current research. Several challenges are involved. Unlike the consumer goods we have been studying thus far, information is difficult to quantify. Even if it could be quantified, information has some technical properties that make it an unusual sort of good. Most information is durable and retains value after it has been used.
Unlike a hot dog, which is consumed only once, knowledge of a special sale can be used not only by the person who discovers it but also by anyone else with whom the information is shared. The friends then may gain from this information even though they do not have to spend anything to obtain it. Indeed, in a special case of this situation, information has the characteristic of a pure public good (see Chapter 19). That is, the information is both nonrival, in that others may use it at zero cost, and nonexclusive, in that no individual can prevent others from using the information. The classic example of these properties is a new scientific discovery. When some prehistoric people invented the wheel, others could use it without detracting from the value of the discovery, and everyone who saw the wheel could copy it freely. Information is also difficult to sell because the act of describing the good that is being offered to a potential consumer gives it away to them.

These technical properties of information imply that market mechanisms may often operate imperfectly in allocating resources to information provision and acquisition. After all, why invest in the production of information when one can just acquire it from others at no cost? Therefore, standard models of supply and demand may be of relatively limited use in understanding such activities. At a minimum, models have to be developed that accurately reflect the properties being assumed about the informational environment. Throughout the latter portions of this book, we will describe some of the situations in which such models are called for. Here, however, we will pay relatively little attention to supply–demand equilibria and will instead focus on an example that illustrates the value of information in helping individuals make choices under uncertainty.

Quantifying the value of information
We already have all the tools needed to quantify the value of information from the section on option values. Suppose again that the individual is uncertain about what the state of the world (x) will be in the future. He or she needs to make one of n choices today (this allows us to put aside the option value of delay and other issues we have already studied). As before, Ai(x) represents the payoffs provided by choice i. Now reinterpret F as the fee charged to be told the exact value that x will take on in the future (perhaps this is the salary of the economist hired to make such forecasts). The same calculations from the option section can be used here to show that the maximum such F is again the value for which Equation 7.52 holds with equality. Just as this was the value of the real option in that section, here it is the value of information.

The value of information would be lower than this F if the forecast of future conditions were imperfect rather than perfect as assumed here. Other factors affecting an individual's value of information include the extent of uncertainty before acquiring the information, the number of options he or she can choose between, and his or her risk preferences. The more uncertainty resolved by the new information, the more valuable it is, of course. If the individual does not have much scope to respond to the information because of having only a limited range of choices to make, the information will not be valuable. The degree of risk aversion has ambiguous effects on the value of information (answering the Query in Example 7.5 will provide you with some idea why).

The State-Preference Approach to Choice Under Uncertainty

Although our analysis in this chapter has offered insights on a number of issues, it seems rather different from the approach we took in other chapters. The basic model of utility maximization subject to a budget constraint seems to have been lost. To make further progress in the study of behavior under uncertainty, we will develop some new techniques that will permit us to bring the discussion of such behavior back into the standard choice-theoretic framework.

States of the world and contingent commodities

We start by pushing a bit further on an idea already mentioned, thinking about an uncertain future in terms of states of the world. We cannot predict exactly what will happen, say, tomorrow, but we assume that it is possible to categorize all the possible things that might happen into a fixed number of well-defined states. For example, we might make the crude approximation of saying that the world will be in only one of two possible states tomorrow: It will be either "good times" or "bad times."
One could make a much finer gradation of states of the world (involving even millions of possible states), but most of the essentials of the theory can be developed using only two states.

A conceptual idea that can be developed concurrently with the notion of states of the world is that of contingent commodities. These are goods delivered only if a particular state of the world occurs. As an example, "$1 in good times" is a contingent commodity that promises the individual $1 in good times but nothing should tomorrow turn out to be bad times. It is even possible, by stretching one's intuitive ability somewhat, to conceive of being able to purchase this commodity: I might be able to buy from someone the promise of $1 if tomorrow turns out to be good times. Because tomorrow could be bad, this good will probably sell for less than $1. If someone were also willing to sell me the contingent commodity "$1 in bad times," then I could assure myself of having $1 tomorrow by buying the two contingent commodities "$1 in good times" and "$1 in bad times."

Utility analysis

Examining utility-maximizing choices among contingent commodities proceeds formally in much the same way we analyzed choices previously. The principal difference is that, after the fact, a person will have obtained only one contingent good (depending on whether it turns out to be good or bad times). Before the uncertainty is resolved, however, the individual has two contingent goods from which to choose and will probably buy some of each because he or she does not know which state will occur. We denote these two contingent goods by Wg (wealth in good times) and Wb (wealth in bad times). Assuming that utility is independent of which state occurs21 and that this individual believes that good times will occur with probability p, the expected utility associated with these two contingent goods is

V(Wg, Wb) = pU(Wg) + (1 − p)U(Wb).   (7.57)

This is the magnitude this individual seeks to maximize given his or her initial wealth, W.

Prices of contingent commodities
Assuming that this person can purchase $1 of wealth in good times for pg and $1 of wealth in bad times for pb, his or her budget constraint is then

W = pgWg + pbWb.   (7.58)

The price ratio pg/pb shows how this person can trade dollars of wealth in good times for dollars in bad times. If, for example, pg = 0.80 and pb = 0.20, the sacrifice of $1 of wealth in good times would permit this person to buy contingent claims yielding $4 of wealth should times turn out to be bad. Whether such a trade would improve utility will, of course, depend on the specifics of the situation. But looking at problems involving uncertainty as situations in which various contingent claims are traded is the key insight offered by the state-preference model.

21This assumption is untenable in circumstances where utility of wealth depends on the state of the world. For example, the utility provided by a given level of wealth may differ depending on whether an individual is "sick" or "healthy." We will not pursue such complications here, however. For most of our analysis, utility is assumed to be concave in wealth: U′(W) > 0, U″(W) < 0.

Fair markets for contingent goods

If markets for contingent wealth claims are well developed and there is general agreement about the likelihood of good times (p), then prices for these claims will be actuarially fair—that is, they will equal the underlying probabilities:

pg = p,   pb = 1 − p.   (7.59)

Hence the price ratio pg/pb will simply reflect the odds in favor of good times:

pg/pb = p/(1 − p).   (7.60)

In our previous example, if pg = 0.8 and pb = 0.2, then p/(1 − p) = 4. In this case the odds in favor of good times would be stated as "4 to 1." Fair markets for contingent claims (such as insurance markets) will also reflect these odds. An analogy is provided by the "odds" quoted in horse races. These odds are "fair" when they reflect the true probabilities that various horses will win.
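The implication of fair pricing can be derived mechanically. The added sketch below (mine, not from the text) sets up the Lagrangian for maximizing Equation 7.57, with the logarithmic utility later used in Example 7.6, subject to the budget constraint in Equation 7.58; fair prices put the chosen wealth levels on the certainty line developed in the next paragraphs.

```python
import sympy as sp

p, pg, pb, W = sp.symbols("p p_g p_b W", positive=True)
Wg, Wb, lam = sp.symbols("W_g W_b lam", positive=True)

V = p * sp.log(Wg) + (1 - p) * sp.log(Wb)  # expected utility, Equation 7.57
L = V + lam * (W - pg * Wg - pb * Wb)      # Lagrangian for budget 7.58

sol = sp.solve([sp.diff(L, Wg), sp.diff(L, Wb), sp.diff(L, lam)],
               [Wg, Wb, lam], dict=True)[0]
print(sol[Wg], sol[Wb])  # W*p/p_g and W*(1 - p)/p_b

# With actuarially fair prices (p_g = p, p_b = 1 - p), both collapse to W:
print(sol[Wg].subs(pg, p), sol[Wb].subs(pb, 1 - p))  # W W
```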
If contingent claims markets are fair, then a utility-maximizing individual will choose Wg = Wb—a point on the "certainty line" where wealth (W*) is independent of which state of the world occurs. At W* the slope of the indifference curve [p/(1 − p)] is precisely equal to the price ratio pg/pb.

If the market for contingent wealth claims were not fair, utility maximization might not occur on the certainty line. Suppose, for example, that p/(1 − p) = 4 but that pg/pb = 2 because ensuring wealth in bad times proves costly. In this case the budget constraint would resemble line I′ in Figure 7.5, and utility maximization would occur below the certainty line.23 In this case this individual would gamble a bit by opting for Wg > Wb because claims on Wb are relatively costly. Example 7.6 shows the usefulness of this approach in evaluating some of the alternatives that might be available.

23Because (as Equation 7.61 shows) the MRS on the certainty line is always p/(1 − p), tangencies with a flatter slope than this must occur below the line.

EXAMPLE 7.6 Insurance in the State-Preference Model

We can illustrate the state-preference approach by recasting the auto insurance illustration from Example 7.2 as a problem involving the two contingent commodities "wealth with no theft" (Wg) and "wealth with a theft" (Wb). If, as before, we assume logarithmic utility and that the probability of a theft (i.e., 1 − p) is 0.25, then

expected utility = 0.75U(Wg) + 0.25U(Wb) = 0.75 ln Wg + 0.25 ln Wb.   (7.63)

If the individual takes no action, then utility is determined by the initial wealth endowment, W*g = 100,000 and W*b = 80,000, so

expected utility = 0.75 ln 100,000 + 0.25 ln 80,000 = 11.45714.   (7.64)

To study trades away from these initial endowments, we write the budget constraint in terms of the prices of the contingent commodities, pg and pb:
pgW*g + pbW*b = pgWg + pbWb.   (7.65)

Assuming that these prices equal the probabilities of the two states (pg = 0.75, pb = 0.25), this constraint can be written

0.75(100,000) + 0.25(80,000) = 95,000 = 0.75Wg + 0.25Wb;   (7.66)

that is, the expected value of wealth is $95,000, and this person can allocate this amount between Wg and Wb. Now maximization of utility with respect to this budget constraint yields Wg = Wb = 95,000. Consequently, the individual will move to the certainty line and receive an expected utility of

expected utility = ln 95,000 = 11.46163,   (7.67)

a clear improvement over doing nothing. To obtain this improvement, this person must be able to transfer $5,000 of wealth in good times (no theft) into $15,000 of extra wealth in bad times (theft). A fair insurance contract would allow this because it would cost $5,000 but return $20,000 should a theft occur (but nothing should no theft occur). Notice here that the wealth changes promised by insurance—dWb/dWg = 15,000/(−5,000) = −3—exactly equal the negative of the odds ratio −p/(1 − p) = −0.75/0.25 = −3.

A policy with a deductible provision. A number of other insurance contracts might be utility improving in this situation, although not all of them would lead to choices that lie on the certainty line. For example, a policy that cost $5,200 and returned $20,000 in case of a theft would permit this person to reach the certainty line with Wg = Wb = 94,800 and

expected utility = ln 94,800 = 11.45953,   (7.68)

which also exceeds the utility obtainable from the initial endowment. A policy that costs $4,900 and requires the individual to incur the first $1,000 of a loss from theft would yield

Wg = 100,000 − 4,900 = 95,100,
Wb = 80,000 − 4,900 + 19,000 = 94,100;   (7.69)
then

expected utility = 0.75 ln 95,100 + 0.25 ln 94,100 = 11.46004.   (7.70)

Although this policy does not permit this person to reach the certainty line, it is utility improving. Insurance need not be complete to offer the promise of higher utility.

QUERY: What is the maximum amount an individual would be willing to pay for an insurance policy under which he or she had to absorb the first $1,000 of loss?
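Each expected utility in this example can be verified in a few lines, and the Query can be attacked numerically as well. The sketch below is an added check (mine, not from the text); the bisection at the end raises the premium on the deductible policy until it stops being utility improving.

```python
import math

def eu(wg, wb, p=0.75):
    """Expected utility from Equation 7.63: p*ln(Wg) + (1 - p)*ln(Wb)."""
    return p * math.log(wg) + (1 - p) * math.log(wb)

print(f"{eu(100_000, 80_000):.4f}")  # 11.4571  no insurance (7.64)
print(f"{eu(95_000, 95_000):.4f}")   # 11.4616  fair full insurance (7.67)
print(f"{eu(94_800, 94_800):.4f}")   # 11.4595  $5,200 full policy (7.68)
print(f"{eu(95_100, 94_100):.4f}")   # 11.4600  $4,900 deductible policy (7.70)

# Query: raise the premium x on the deductible policy (pays $19,000 on a theft)
# until expected utility falls back to the no-insurance level.
lo, hi = 0.0, 19_000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if eu(100_000 - mid, 80_000 - mid + 19_000) > eu(100_000, 80_000):
        lo = mid
    else:
        hi = mid
print(round(lo))  # ~5175, the most this person would pay under these assumptions
```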
Risk aversion and risk premiums

The state-preference model is also especially useful for analyzing the relationship between risk aversion and individuals' willingness to pay for risk. Consider two people, each of whom starts with a certain wealth, W*. Each person seeks to maximize an expected utility function of the form

V(Wg, Wb) = p·Wg^R/R + (1 − p)·Wb^R/R.   (7.71)

Here the utility function exhibits constant relative risk aversion (see Example 7.4). Notice also that the function closely resembles the CES utility function we examined in Chapter 3 and elsewhere. The parameter R determines both the degree of risk aversion and the degree of curvature of indifference curves implied by the function. A risk-averse individual will have a large negative value for R and have sharply curved indifference curves, such as U1 shown in Figure 7.6. A person with more tolerance for risk will have a higher value of R and flatter indifference curves (such as U2).24

Suppose now these individuals are faced with the prospect of losing h dollars of wealth in bad times. Such a risk would be acceptable to individual 2 if wealth in good times were to increase from W* to W2. For the risk-averse individual 1, however, wealth would have to increase to W1 to make the risk acceptable. Therefore, the difference between W1 and W2 indicates the effect of risk aversion on willingness to assume risk. Some of the problems in this chapter make use of this graphic device for showing the connection between preferences (as reflected by the utility function in Equation 7.71) and behavior in risky situations.

FIGURE 7.6 Risk Aversion and Risk Premiums. Indifference curve U1 represents the preferences of a risk-averse person, whereas the person with preferences represented by U2 is willing to assume more risk. When faced with the risk of losing h in bad times, person 2 will require compensation of W2 − W* in good times, whereas person 1 will require a larger amount given by W1 − W*. [The figure plots Wb against Wg, showing the certainty line, the wealth levels W* and W* − h, and the compensating points W1 and W2.]

24Tangency of U1 and U2 at W* is ensured because the MRS along the certainty line is given by p/(1 − p) regardless of the value of R.

Asymmetry of Information

One obvious implication of the study of information acquisition is that the level of information that an individual buys will depend on the per-unit price of information messages. Unlike the market price for most goods (which we usually assume to be the same for everyone), there are many reasons to believe that information costs may differ significantly among individuals. Some individuals may possess specific skills relevant to information acquisition (e.g., they may be trained mechanics), whereas others may not possess such skills. Some individuals may have other types of experience that yield valuable information, whereas others may lack that experience. For example, the seller of a product will usually know more about its limitations than will a buyer because the seller will know precisely how the good was made and where possible problems might arise. Similarly, large-scale repeat buyers of a good may have greater access to information about it than would first-time buyers. Finally, some individuals may have invested in some types of information services (e.g., by having a computer link to a brokerage firm or by subscribing to Consumer Reports) that make the marginal cost of obtaining additional information lower than for someone without such an investment.

All these factors suggest that the level of information will sometimes differ among the participants in market transactions. Of course, in many instances, information costs may be low and such differences may be minor. Most people can appraise the quality of fresh vegetables fairly well just by looking at them, for example. But when information costs are high and variable across individuals, we would expect them to find it advantageous to acquire different amounts of information. We will postpone a detailed study of such situations until Chapter 18.

SUMMARY

The goal of this chapter was to provide some basic material for the study of individual behavior in uncertain situations. The key concepts covered are listed as follows.
aversion (CRRA) function. Neither is completely satisfactory on theoretical grounds. • The most common way to model behavior under uncertainty is to assume that individuals seek to maximize the expected utility of their actions. • Individuals who exhibit a diminishing marginal utility of wealth are risk averse. That is, they generally refuse fair bets. • Risk-averse individuals will wish to insure themselves completely against uncertain events if insurance premiums are actuarially fair. They may be willing to pay more than actuarially fair premiums to avoid taking risks. • Two utility functions have been extensively used in the study of behavior under uncertainty: the constant abso- • Methods for reducing the risk involved in a situation include transferring risk to those who can bear it more effectively through insurance, spreading risk across several activities through diversification, preserving options for dealing with the various outcomes that arise, and acquiring information to determine which outcomes are more likely. • One of the most extensively studied issues in the economics of uncertainty is the ‘‘portfolio problem,’’ which asks how an investor will split his or her wealth among available assets. A simple version of the problem is used to illustrate the value of diversification in the text; the Extensions provide a detailed analysis. Chapter 7: Uncertainty 239 • Information is valuable because it permits individuals to make better decisions in uncertain situations. Information can be most valuable when individuals have some flexibility in their decision making. • The state-preference approach allows decision making under uncertainty to be approached in a familiar choice-theoretic framework. PROBLEMS 7.1 George is seen to place an even-money $100,000 bet on the Bulls to win the NBA Finals. If George has a logarithmic utilityof-wealth function and if his current wealth is $1,000,000, what must he believe is the minimum probability that the Bulls will win? 7.2 Show that if an individual’s utility-of-wealth function is convex then he or she will prefer fair gambles to income certainty and may even be willing to accept somewhat unfair gambles. Do you believe this sort of risk-taking behavior is common? What factors might tend to limit its occurrence? 7.3 An individual purchases a dozen eggs and must take them home. Although making trips home is costless, there is a 50 percent chance that all the eggs carried on any one trip will be broken during the trip. The individual considers
two strategies: (1) take all 12 eggs in one trip; or (2) take two trips with 6 eggs in each trip.
a. List the possible outcomes of each strategy and the probabilities of these outcomes. Show that, on average, 6 eggs will remain unbroken after the trip home under either strategy.
b. Develop a graph to show the utility obtainable under each strategy. Which strategy will be preferable?
c. Could utility be improved further by taking more than two trips? How would this possibility be affected if additional trips were costly?
7.4 Suppose there is a 50–50 chance that a risk-averse individual with a current wealth of $20,000 will contract a debilitating disease and suffer a loss of $10,000.
a. Calculate the cost of actuarially fair insurance in this situation and use a utility-of-wealth graph (such as shown in Figure 7.1) to show that the individual will prefer fair insurance against this loss to accepting the gamble uninsured.
b. Suppose two types of insurance policies were available: (1) a fair policy covering the complete loss; and (2) a fair policy covering only half of any loss incurred. Calculate the cost of the second type of policy and show that the individual will generally regard it as inferior to the first.
7.5 Ms. Fogg is planning an around-the-world trip on which she plans to spend $10,000. The utility from the trip is a function of how much she actually spends on it (Y), given by
U(Y) = ln Y.
a. If there is a 25 percent probability that Ms. Fogg will lose $1,000 of her cash on the trip, what is the trip's expected utility?
b. Suppose that Ms. Fogg can buy insurance against losing the $1,000 (say, by purchasing traveler's checks) at an "actuarially fair" premium of $250. Show that her expected utility is higher if she purchases this insurance than if she faces the chance of losing the $1,000 without insurance.
c. What is the maximum amount that Ms. Fogg would be willing to pay to insure her $1,000?
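For readers who want to check their arithmetic on Problem 7.5, the short Python computation below evaluates the expected utilities stated in parts (a) and (b) directly; it is only a numerical check, not a substitute for the reasoning the problem asks for.

from math import log

eu_uninsured = 0.75 * log(10_000) + 0.25 * log(9_000)   # part (a)
eu_insured = log(10_000 - 250)                          # part (b): certain spending of $9,750
print(f"expected utility without insurance: {eu_uninsured:.4f}")   # about 9.1840
print(f"utility with the fair policy:       {eu_insured:.4f}")     # about 9.1850, so insurance is preferred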
7.6 In deciding to park in an illegal place, any individual knows that the probability of getting a ticket is p and that the fine for receiving the ticket is f. Suppose that all individuals are risk averse (i.e., U″(W) < 0, where W is the individual's wealth). Will a proportional increase in the probability of being caught or a proportional increase in the fine be a more effective deterrent to illegal parking? Hint: Use the Taylor series approximation U(W − f) = U(W) − fU′(W) + (f²/2)U″(W).
7.7 A farmer believes there is a 50–50 chance that the next growing season will be abnormally rainy. His expected utility function has the form
expected utility = (1/2) ln YNR + (1/2) ln YR,
where YNR and YR represent the farmer's income in the states of "normal rain" and "rainy," respectively.
a. Suppose the farmer must choose between two crops that promise the following income prospects:

Crop      YNR        YR
Wheat     $28,000    $10,000
Corn      $19,000    $15,000

Which of the crops will he plant?
b. Suppose the farmer can plant half his field with each crop. Would he choose to do so? Explain your result.
c. What mix of wheat and corn would provide maximum expected utility to this farmer?
d. Would wheat crop insurance—which is available to farmers who grow only wheat and which costs $4,000 and pays off $8,000 in the event of a rainy growing season—cause this farmer to change what he plants?
7.8 In Equation 7.30 we showed that the amount an individual is willing to pay to avoid a fair gamble (h) is given by p = 0.5E(h²)r(W), where r(W) is the measure of absolute risk aversion at this person's initial level of wealth. In this problem we look at the size of this payment as a function of the size of the risk faced and this person's level of wealth.
a. Consider a fair gamble (v) of winning or losing $1. For this gamble, what is E(v²)?
b. Now consider varying the gamble in part (a) by multiplying each prize by a positive constant k. Let h = kv. What is the value of E(h²)?
c. Suppose this person has a […]
[…] reduces to the CRRA function given in Chapter 7 (see footnote …).
c. Use your result from part (a) to show that if γ → ∞, then r(W) is a constant for this function.
d. Let the constant found in part (c) be represented by A. Show that the implied form for the utility function in this case is the CARA function given in Equation 7.35.
e. Finally, show that a quadratic utility function can be generated from the HARA function simply by setting γ = −1.
f. Despite the seeming generality of the HARA function, it still exhibits several limitations for the study of behavior in uncertain situations. Describe some of these shortcomings.
7.11 Prospect theory
Two pioneers of the field of behavioral economics, Daniel Kahneman and Amos Tversky (winners of the Nobel Prize in economics in 2002), conducted an experiment in which they presented different groups of subjects with one of the following two scenarios:
• Scenario 1: In addition to $1,000 up front, the subject must choose between two gambles. Gamble A offers an even chance of winning $1,000 or nothing. Gamble B provides $500 with certainty.
• Scenario 2: In addition to $2,000 given up front, the subject must choose between two gambles. Gamble C offers an even chance of losing $1,000 or nothing. Gamble D results in the loss of $500 with certainty.
a. Suppose Standard Stan makes choices under uncertainty according to expected utility theory. If Stan is risk neutral, what choice would he make in each scenario?
b. What choice would Stan make if he is risk averse?
c. Kahneman and Tversky found 16 percent of subjects chose A in the first scenario and 68 percent chose C in the second scenario. Based on your preceding answers, explain why these findings are hard to reconcile with expected utility theory.
d. Kahneman and Tversky proposed an alternative to expected utility theory, called prospect theory, to explain the experimental results. The theory is that people's current income level functions as an "anchor point" for them. They are risk averse over gains beyond this point but sensitive to small losses below this point. This sensitivity to small losses is the opposite of risk aversion: A risk-averse person suffers disproportionately more from a large than a small loss
[…] much more than is consistent with the degree of risk aversion suggested by other data. See N. R. Kocherlakota, "The Equity Premium: It's Still a Puzzle," Journal of Economic Literature (March 1996): 42–71.
7.13 Graphing risky investments
Investment in risky assets can be examined in the state-preference framework by assuming that W* dollars invested in an asset with a certain return r will yield W*(1 + r) in both states of the world, whereas investment in a risky asset will yield W*(1 + rg) in good times and W*(1 + rb) in bad times (where rg > r > rb).
a. Graph the outcomes from the two investments.
b. Show how a "mixed portfolio" containing both risk-free and risky assets could be illustrated in your graph. How would you show the fraction of wealth invested in the risky asset?
c. Show how individuals' attitudes toward risk will determine the mix of risk-free and risky assets they will hold. In what case would a person hold no risky assets?
d. If an individual's utility takes the constant relative risk aversion form (Equation 7.42), explain why this person will not change the fraction of risky assets held as his or her wealth increases.25
25This problem is based on J. E. Stiglitz, "The Effects of Income, Wealth, and Capital Gains Taxation in Risk Taking," Quarterly Journal of Economics (May 1969): 263–83.
7.14 The portfolio problem with a Normally distributed risky asset
In Example 7.3 we showed that a person with a CARA utility function who faces a Normally distributed risk will have expected utility of the form E[U(W)] = μW − (A/2)σW², where μW is the expected value of wealth and σW² is its variance. Use this fact to solve for the optimal portfolio allocation for a person with a CARA utility function who must invest k of his or her wealth in a Normally distributed risky asset whose expected return is μr and variance in return is σr² (your answer should depend on A). Explain your results intuitively.
SUGGESTIONS FOR FURTHER READING
Arrow, K. J. "The Role of Securities in the Optimal Allocation of Risk Bearing." Review of Economic Studies 31 (1963): 91–96. Introduces the state-preference concept and interprets securities as claims on contingent commodities.
_____. "Uncertainty and the Welfare Economics of Medical Care." American Economic Review 53 (1963): 941–73. Excellent discussion of the welfare implications of insurance. Has a clear, concise, mathematical appendix. Should be read in conjunction with Pauly's article on moral hazard (see Chapter 18).
Bernoulli, D. "Exposition of a New Theory on the Measurement of Risk." Econometrica 22 (1954): 23–36. Reprint of the classic analysis of the St. Petersburg paradox.
Dixit, A. K., and R. S. Pindyck. Investment under Uncertainty. Princeton, NJ: Princeton University Press, 1994. Focuses mainly on the investment decision by firms but has good coverage of option concepts.
Friedman, M., and L. J. Savage. "The Utility Analysis of Choices Involving Risk." Journal of Political Economy 56 (1948): 279–304. Analyzes why individuals may both gamble and buy insurance. Very readable.
Gollier, Christian. The Economics of Risk and Time. Cambridge, MA: MIT Press, 2001. Contains a complete treatment of many of the issues discussed in this chapter. Especially good on the relationship between allocation under uncertainty and allocation over time.
Mas-Colell, Andreu, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory. New York: Oxford University Press, 1995, chap. 6. Provides a good summary of the foundations of expected utility theory. Also examines the "state independence" assumption in detail and shows that some notions of risk aversion carry over into cases of state dependence.
Pratt, J. W. "Risk Aversion in the Small and in the Large." Econometrica 32 (1964): 122–36. Theoretical development of risk-aversion measures. Fairly technical treatment but readable.
Rothschild, M., and J. E. Stiglitz. "Increasing Risk: 1. A Definition." Journal of Economic Theory 2 (1970): 225–43.
Develops an economic definition of what it means for one gamble to be "riskier" than another. A sequel article in the Journal of Economic Theory provides economic illustrations.
Silberberg, E., and W. Suen. The Structure of Economics: A Mathematical Analysis, 3rd ed. Boston: Irwin/McGraw-Hill, 2001. Chapter 13 provides a nice introduction to the relationship between statistical concepts and expected utility maximization. Also shows in detail the integration mentioned in Example 7.3.
EXTENSIONS: THE PORTFOLIO PROBLEM
One of the classic problems in the theory of behavior under uncertainty is the issue of how much of his or her wealth a risk-averse investor should invest in a risky asset. Intuitively, it seems that the fraction invested in risky assets should be smaller for more risk-averse investors, and one goal of our analysis in these Extensions will be to show that formally. We will then see how to generalize the model to consider portfolios with many such assets, finally working up to the Capital Asset Pricing model, a staple of financial economics courses.
E7.1 Basic model with one risky asset
To get started, assume that an investor has a certain amount of wealth, W0, to invest in one of two assets. The first asset yields a certain return of rf, whereas the second asset's return is a random variable, r. If we let the amount invested in the risky asset be denoted by k, then this person's wealth at the end of one period will be
W = (W0 − k)(1 + rf) + k(1 + r) = W0(1 + rf) + k(r − rf). (i)
Notice three things about this end-of-period wealth. First, W is a random variable because its value depends on r. Second, k can be either positive or negative here depending on whether this person buys the risky asset or sells it short. As we shall see, however, in the usual case E(r − rf) > 0, and this will imply k ≥ 0. Finally, notice also that Equation i allows for a solution in which k > W0. In this case, this investor would leverage his or her investment in the risky asset by borrowing at the risk-free rate rf.
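Equation i is easy to experiment with. The short Python sketch below uses made-up numbers (W0 = 100 and rf = 0.05 are assumptions, not values from the text) and evaluates end-of-period wealth for a short position, no position, a partial position, and a leveraged position in the risky asset.

W0, rf = 100.0, 0.05
for k in (-20.0, 0.0, 50.0, 150.0):          # k < 0 is a short sale; k > W0 is leveraged
    for r in (0.15, -0.05):                  # two possible realizations of the risky return r
        W = W0 * (1 + rf) + k * (r - rf)     # Equation i
        print(f"k = {k:6.1f}, r = {r:5.2f}: end-of-period wealth W = {W:7.2f}")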
If we let U(W) represent this investor's utility function, then the von Neumann–Morgenstern theorem states that he or she will choose k to maximize E[U(W)]. The first-order condition for such a maximum is
∂E[U(W)]/∂k = ∂E[U(W0(1 + rf) + k(r − rf))]/∂k = E[U′(W)·(r − rf)] = 0. (ii)
In calculating this first-order condition, we can differentiate through the expected value operator, E. See Chapter 2 for a discussion of differentiating integrals (of which an expected value operator is an example). Equation ii involves the expected value of the product of marginal utility and the term r − rf. Both of these terms are random. Whether r − rf is positive or negative will depend on how well the risky assets perform over the next period. But the return on this risky asset will also affect this investor's end-of-period wealth and thus will affect his or her marginal utility. If the investment does well, W will be large and marginal utility will be relatively low (because of diminishing marginal utility). If the investment does poorly, wealth will be relatively low and marginal utility will be relatively high. Hence in the expected value calculation in Equation ii, negative outcomes for r − rf will be weighted more heavily than positive outcomes to take the utility consequences of these outcomes into account. If the expected value in Equation ii were positive, a person could increase his or her expected utility by investing more in the risky asset. If the expected value were negative, he or she could increase expected utility by reducing the amount of the risky asset held. Only when the first-order condition holds will this person have an optimal portfolio.
Two other conclusions can be drawn from Equation ii. First, as long as E(r − rf) > 0, an investor will choose positive amounts of the risky asset. To see why, notice that meeting Equation ii will require that fairly large values of U′ be attached to situations where r − rf turns out to be negative. That can only happen if the investor owns positive amounts of the risky asset so that end-of-period wealth is low in such situations.
A second conclusion from Equation ii is that investors who are more risk averse will hold smaller amounts of the risky asset. Again, the reason relates to the shape of the U′ function. For risk-averse investors, marginal utility rises rapidly as wealth falls. Hence they need relatively little exposure to potential negative outcomes from holding the risky asset to satisfy Equation ii.
E7.2 CARA utility
To make further progress on the portfolio problem requires that we make some specific assumptions about the investor's utility function. Suppose it is given by the CARA form: U(W) = −exp(−AW). Then the marginal utility function is given by U′(W) = A exp(−AW); substituting for end-of-period wealth, we have
U′(W) = A exp[−A(W0(1 + rf) + k(r − rf))] = A exp[−AW0(1 + rf)]·exp[−Ak(r − rf)]. (iii)
That is, the marginal utility function can be separated into a random part and a nonrandom part (both initial wealth and the risk-free rate are nonrandom). Hence the optimality condition from Equation ii can be written as
E[U′(W)·(r − rf)] = A exp[−AW0(1 + rf)]·E[exp(−Ak(r − rf))·(r − rf)] = 0. (iv)
Now we can divide by the exponential function of initial wealth, leaving an optimality condition that involves only terms in k, A, and r − rf. Solving this condition for the optimal level of k can in general be difficult (but see Problem 7.14). Regardless of the specific solution, however, Equation iv shows that this optimal investment amount will be a constant regardless of the level of initial wealth. Hence the CARA function implies that the fraction of wealth that an investor holds in risky assets should decrease as wealth increases—a conclusion that seems precisely contrary to empirical data, which tend to show the fraction of wealth held in risky assets increasing with wealth.
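The wealth-independence result is easy to see numerically. In the Python sketch below, all parameters are illustrative assumptions: the risky asset pays one of two returns with equal probability (so E(r − rf) > 0), and a simple grid search maximizes expected CARA utility for several initial wealth levels. The optimal dollar amount k* comes out the same each time, so the fraction of wealth at risk falls as W0 grows.

from math import exp, log

A, rf = 1e-4, 0.05
p, r_good, r_bad = 0.5, 0.17, -0.03           # two-state risky return, E(r - rf) = 0.02 > 0

def expected_utility(k, W0):
    def u_of(r):
        return -exp(-A * (W0 * (1 + rf) + k * (r - rf)))   # CARA utility of Equation i wealth
    return p * u_of(r_good) + (1 - p) * u_of(r_bad)

for W0 in (50_000.0, 100_000.0, 500_000.0):
    k_star = max((10.0 * i for i in range(5001)),          # grid search over k in [0, 50,000]
                 key=lambda k: expected_utility(k, W0))
    print(f"W0 = {W0:9,.0f}: k* = {k_star:8,.0f}, fraction at risk = {k_star / W0:.3f}")

# In this two-state case Equation iv even has a closed form:
# k* = ln(p*u/((1 - p)*d)) / (A*(u + d)), with u = r_good - rf and d = rf - r_bad.
u, d = r_good - rf, rf - r_bad
print("closed-form k* =", round(log(p * u / ((1 - p) * d)) / (A * (u + d))))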
If we instead assumed utility took the CRRA rather than the CARA form, we could show (with some patience) that all individuals with the same risk tolerance will hold the same fraction of wealth in risky assets, regardless of their absolute levels of wealth. Although this conclusion is slightly more in accord with the facts than is the conclusion from the CARA function, it still falls short of explaining why the fraction of wealth held in risky assets tends to increase with wealth.
E7.3 Portfolios of many risky assets
Additional insight can be gained if the model is generalized to allow for many risky assets. Let the return on each of n risky assets be the random variable ri (i = 1,…, n). The expected values and variances of these assets' returns are denoted by E(ri) = μi and Var(ri) = σi², respectively. An investor who invests a portion of his or her wealth in a portfolio of these assets will obtain a random return (rp) given by
rp = α1r1 + α2r2 + ⋯ + αnrn, (v)
where αi (≥ 0) is the fraction of the risky portfolio held in asset i and where α1 + α2 + ⋯ + αn = 1. In this situation, the expected return on this portfolio will be
E(rp) = μp = α1μ1 + α2μ2 + ⋯ + αnμn. (vi)
If the returns of each asset are independent, then the variance of the portfolio's return will be
Var(rp) = σp² = α1²σ1² + α2²σ2² + ⋯ + αn²σn². (vii)
If the returns are not independent, Equation vii would have to be modified to take covariances among the returns into account. Using this general notation, we now proceed to look at some aspects of this portfolio allocation problem.
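Equations vi and vii already contain the diversification result used in the text. A quick Python check (the common mean and standard deviation here are illustrative assumptions) shows that with equal weights αi = 1/n across independent, identical assets, the portfolio keeps the same expected return while its standard deviation shrinks like 1/√n.

mu, sigma = 0.08, 0.20
for n in (1, 2, 5, 10, 50):
    alphas = [1.0 / n] * n
    mu_p = sum(a * mu for a in alphas)                  # Equation vi
    var_p = sum(a ** 2 * sigma ** 2 for a in alphas)    # Equation vii (independent returns)
    print(f"n = {n:2d}: mu_p = {mu_p:.3f}, sigma_p = {var_p ** 0.5:.4f}")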
E7.4 Optimal portfolios
With many risky assets, the optimal portfolio problem can be divided into two steps. The first step is to consider portfolios of just the risky assets. The second step is to add in the riskless one. To solve for the optimal portfolio of just the risky assets, one can proceed as in the text, where in the section on diversification we looked at the optimal investment weights across just two risky assets. Here, we will choose a general set of asset weightings (the αi) to minimize the variance (or standard deviation) of the portfolio for each potential expected return. The solution to this problem yields an "efficiency frontier" for risky asset portfolios such as that represented by the line EE in Figure E7.1. Portfolios that lie below this frontier are inferior to those on the frontier because they offer lower expected returns for any degree of risk. Portfolio returns above the frontier are unattainable. Sharpe (1970) discusses the mathematics associated with constructing the EE frontier.
Now add a risk-free asset with expected return μf and σf = 0, shown as point R in Figure E7.1. Optimal portfolios will now consist of mixtures of this asset with risky ones. All such portfolios will lie along the line RP in the figure, because this shows the maximum return attainable for each value of σ for various portfolio allocations. These allocations will contain only one specific set of risky assets: the set represented by point M. In equilibrium this will be the "market portfolio" consisting of all capital assets held in proportion to their market valuations. This market portfolio will provide an expected return of μM and a standard deviation of that return of σM. The equation for the line RP that represents any mixed portfolio is given by the linear equation
μP = μf + [(μM − μf)/σM]·σP. (viii)
This shows that the market line RP permits individual investors to "purchase" returns in excess of the risk-free return (μM − μf) by taking on proportionally more risk (σP/σM). For choices on RP to the left of the market point M, σP/σM < 1 and μf < μP < μM. High-risk points to the right of M—which can be obtained by borrowing to produce a leveraged portfolio—will have σP/σM > 1 and will promise an expected return in excess of what is provided by the market portfolio (μP > μM). Tobin (1958) was one of the first economists to recognize the role that risk-free assets play in identifying the market portfolio and in setting the terms on which investors can obtain returns above risk-free levels.
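The two-step procedure can be sketched in a few lines of Python. Assuming just two independent risky assets with made-up parameters, the script below finds the weighting that maximizes the slope (μp − μf)/σp; that mix is the tangency point M, and the maximized slope is the price of risk along RP.

mu1, sig1 = 0.10, 0.20      # illustrative risky asset 1
mu2, sig2 = 0.06, 0.10      # illustrative risky asset 2
mu_f = 0.03                 # illustrative risk-free return

def slope(a1):              # slope of a line from R through the risky mix (a1, 1 - a1)
    mu_p = a1 * mu1 + (1 - a1) * mu2
    sig_p = (a1 ** 2 * sig1 ** 2 + (1 - a1) ** 2 * sig2 ** 2) ** 0.5
    return (mu_p - mu_f) / sig_p

a_M = max((i / 1000.0 for i in range(1001)), key=slope)
mu_M = a_M * mu1 + (1 - a_M) * mu2
sig_M = (a_M ** 2 * sig1 ** 2 + (1 - a_M) ** 2 * sig2 ** 2) ** 0.5
print(f"market portfolio: alpha1 = {a_M:.3f}, mu_M = {mu_M:.4f}, sigma_M = {sig_M:.4f}")
print(f"price of risk (slope of RP): {slope(a_M):.4f}")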
E7.5 Individual choices
Figure E7.2 illustrates the portfolio choices of various investors facing the options offered by the line RP. This figure illustrates the type of portfolio choice model previously described in this chapter. Individuals with low tolerance for risk (I) will opt for portfolios that are heavily weighted toward the risk-free asset. Investors willing to assume a modest degree of risk (II) will opt for portfolios close to the market portfolio. High-risk investors (III) may opt for leveraged portfolios. Notice that all investors face the same "price" of risk (μM − μf), with their expected returns being determined by how much relative risk (σP/σM) they are willing to incur. Notice also that the risk associated with an investor's portfolio depends only on the fraction of the portfolio invested in the market portfolio (a) because σP² = a²σM² + (1 − a)²·0 = a²σM². Hence σP/σM = a, and so the investor's choice of portfolio is equivalent to his or her choice of risk.
FIGURE E7.1 Efficient Portfolios
The frontier EE represents optimal mixtures of risky assets that minimize the standard deviation of the portfolio, σP, for each expected return, μP. A risk-free asset with return μf offers investors the opportunity to hold mixed portfolios along RP that mix this risk-free asset with the market portfolio, M.
FIGURE E7.2 Investor Behavior and Risk Aversion
Given the market options RP, investors can choose how much risk they wish to assume. Very risk-averse investors (UI) will hold mainly risk-free assets, whereas risk takers (UIII) will opt for leveraged portfolios.
Mutual funds
The notion of portfolio efficiency has been widely applied to the study of mutual funds. In general, mutual funds are a good answer to small investors' diversification needs. Because such funds pool the funds of many individuals, they are able to achieve economies of scale in transactions and management costs. This permits fund owners to share in the fortunes of a much wider variety of equities than would be possible if each acted alone. But mutual fund managers have incentives of their own; therefore, the portfolios they hold may not always be perfect representations of the risk attitudes of their clients. For example, Scharfstein and Stein (1990) developed a model that shows why mutual fund managers have incentives to "follow the herd" in their investment picks. Other studies, such as the classic investigation by Jensen (1968), find that mutual fund managers are seldom able to attain extra returns large enough to offset the expenses they charge investors. In recent years this has led many mutual fund buyers to favor "index" funds that seek simply to duplicate the market average (as represented, say, by the Standard and Poor's 500 stock index). Such funds have low expenses and therefore permit investors to achieve diversification at minimal cost.
E7.6 Capital asset pricing model
Although the analysis of E7.5 shows how a portfolio that mixes a risk-free asset with the market portfolio will be priced, it does not describe the risk–return trade-off for a single asset. Because (assuming transactions are costless) an investor can always avoid risk unrelated to the overall market by choosing to diversify with a "market portfolio," such "unsystematic" risk will not warrant any excess return. An asset will, however, earn an excess return to the extent that it contributes to overall market risk. An asset that does not yield such extra returns would not be held in the market portfolio, so it would not be held at all. This is the fundamental insight of the capital asset pricing model (CAPM).
To examine these results formally, consider a portfolio that combines a small amount (a) of an asset with a random return of x with the market portfolio (which has a random return of M). The return on this portfolio (z) would be given by
z = ax + (1 − a)M. (ix)
The expected return is
μz = aμx + (1 − a)μM (x)
with variance
σz² = a²σx² + (1 − a)²σM² + 2a(1 − a)σx,M, (xi)
where σx,M is the covariance between the return on x and the return on the market. But our previous analysis shows
μz = μf + (μM − μf)·(σz/σM). (xii)
Setting Equation x equal to xii and differentiating with respect to a yields
∂μz/∂a = μx − μM = [(μM − μf)/σM]·(∂σz/∂a). (xiii)
By calculating ∂σz/∂a from Equation xi and taking the limit as a approaches zero, we get
μx − μM = [(μM − μf)/σM]·[(σx,M − σM²)/σM], (xiv)
or, rearranging terms,
μx = μf + (μM − μf)·(σx,M/σM²). (xv)
Again, risk has a reward of μM − μf, but now the quantity of risk is measured by σx,M/σM². This ratio of the covariance between the return x and the market to the variance of the market return is referred to as the beta coefficient for the asset. Estimated beta coefficients for financial assets are reported in many publications.
Studies of the CAPM
This version of the CAPM carries strong implications about the determinants of any asset's expected rate of return. Because of this simplicity, the model has been subject to a large number of empirical tests. In general these find that the model's measure of systemic risk (beta) is indeed correlated with expected returns, whereas simpler measures of risk (e.g., the standard deviation of past returns) are not. Perhaps the most influential early empirical test that reached such a conclusion was that of Fama and MacBeth (1973). But the CAPM itself explains only a small fraction of differences in the returns of various assets. And contrary to the CAPM, a number of authors have found that many other economic factors significantly affect expected returns. Indeed, a prominent challenge to the CAPM comes from one of its original founders—see Fama and French (1992).
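A beta coefficient of the kind Equation xv prices is straightforward to estimate from return data. The Python simulation below uses invented parameters (a "true" beta of 1.3, plus purely idiosyncratic noise) and recovers beta as the ratio of the covariance with the market to the market variance. The noise raises the asset's own variance but not its beta, echoing the point that unsystematic risk earns no premium.

import random

random.seed(1)
mu_f, true_beta = 0.03, 1.3
M = [random.gauss(0.08, 0.15) for _ in range(100_000)]                 # market returns
x = [mu_f + true_beta * (m - mu_f) + random.gauss(0.0, 0.10) for m in M]

def mean(v):
    return sum(v) / len(v)

m_bar, x_bar = mean(M), mean(x)
cov_xM = mean([(a - x_bar) * (b - m_bar) for a, b in zip(x, M)])
var_M = mean([(b - m_bar) ** 2 for b in M])
beta = cov_xM / var_M                                                  # sigma_x,M / sigma_M^2
print(f"estimated beta = {beta:.3f} (true value 1.3)")
print(f"expected return from Equation xv: {mu_f + (m_bar - mu_f) * beta:.4f}")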
References
Fama, E. F., and K. R. French. "The Cross Section of Expected Stock Returns." Journal of Finance 47 (1992): 427–66.
Fama, E. F., and J. MacBeth. "Risk Return and Equilibrium." Journal of Political Economy 8 (1973): 607–36.
Jensen, M. "The Performance of Mutual Funds in the Period 1945–1964." Journal of Finance (May 1968): 386–416.
Scharfstein, D. S., and J. Stein. "Herd Behavior and Investment." American Economic Review (June 1990): 465–89.
Sharpe, W. F. Portfolio Theory and Capital Markets. New York: McGraw-Hill, 1970.
Tobin, J. "Liquidity Preference as Behavior towards Risk." Review of Economic Studies (February 1958): 65–86.
CHAPTER EIGHT
Game Theory
This chapter provides an introduction to noncooperative game theory, a tool used to understand the strategic interactions among two or more agents. The range of applications of game theory has been growing constantly, including all areas of economics (from labor economics to macroeconomics) and other fields such as political science and biology. Game theory is particularly useful in understanding the interaction between firms in an oligopoly, so the concepts learned here will be used extensively in Chapter 15. We begin with the central concept of Nash equilibrium and study its application in simple games. We then go on to study refinements of Nash equilibrium that are used in games with more complicated timing and information structures.
Basic Concepts
Thus far in Part 3 of this text, we have studied individual decisions made in isolation. In this chapter we study decision making in a more complicated, strategic setting. In a strategic setting, a person may no longer have an obvious choice that is best for him or her. What is best for one decision-maker may depend on what the other is doing and vice versa. For example, consider the strategic interaction between drivers and the police. Whether drivers prefer to speed may depend on whether the police set up speed traps. Whether the police find speed traps valuable depends on how much drivers speed. This confusing circularity would seem to make it difficult to make much headway in analyzing strategic behavior. In fact, the tools of game theory will allow us to push the analysis nearly as far, for example, as our analysis of consumer utility maximization in Chapter 4.
There are two major tasks involved when using game theory to analyze an economic situation. The first is to distill the situation into a simple game. Because the analysis involved in strategic settings quickly grows more complicated than in simple decision problems, it is important to simplify the setting as much as possible by retaining only a few essential elements. There is a certain art to distilling games from situations that is hard to teach.
The examples in the text and problems in this chapter can serve as models that may help in approaching new situations. The second task is to "solve" the given game, which results in a prediction about what will happen. To solve a game, one takes an equilibrium concept (e.g., Nash equilibrium) and runs through the calculations required to apply it to the given game. Much of the chapter will be devoted to learning the most widely used equilibrium concepts and to practicing the calculations necessary to apply them to particular games.
A game is an abstract model of a strategic situation. Even the most basic games have three essential elements: players, strategies, and payoffs. In complicated settings, it is sometimes also necessary to specify additional elements such as the sequence of moves and the information that players have when they move (who knows what when) to describe the game fully.
Players
Each decision-maker in a game is called a player. These players may be individuals (as in poker games), firms (as in markets with few firms), or entire nations (as in military conflicts). A player is characterized as having the ability to choose from among a set of possible actions. Usually the number of players is fixed throughout the "play" of the game. Games are sometimes characterized by the number of players involved (two-player, three-player, or n-player games). As does much of the economic literature, this chapter often focuses on two-player games because this is the simplest strategic setting. We will label the players with numbers; thus, in a two-player game we will have players 1 and 2. In an n-player game we will have players 1, 2,…, n, with the generic player labeled i.
Strategies
Each course of action open to a player during the game is called a strategy. Depending on the game being examined, a strategy may be a simple action (drive over the speed limit or not) or a complex plan of action that may be contingent on earlier play in the game (say, speeding only if the driver has observed speed traps less than a quarter of the time in past drives). Many aspects of game theory can be illustrated in games in which players choose between just two possible actions. Let S1 denote the set of strategies open to player 1, S2 the set open to player 2, and more generally Si the set open to player i. Let s1 ∈ S1 be a particular strategy chosen by player 1 from the set of possibilities, s2 ∈ S2 the particular strategy chosen by player 2, and si ∈ Si for player i. A strategy profile will refer to a listing of particular strategies chosen by each of a group of players.
Payoffs
The final return to each player at the conclusion of a game is called a payoff. Payoffs are measured in levels of utility obtained by the players. For simplicity, monetary payoffs (say, profits for firms) are often used. More generally, payoffs can incorporate nonmonetary factors such as prestige, emotion, risk preferences, and so forth. In a two-player game, u1(s1, s2) denotes player 1's payoff given that he or she chooses s1 and the other player chooses s2, and similarly u2(s2, s1) denotes player 2's payoff.1 The fact that player 1's payoff may depend on player 2's strategy (and vice versa) is where the strategic interdependence shows up. In an n-player game, we can write the payoff of a generic player i as ui(si, s−i), which depends on player i's own strategy si and the profile s−i = (s1, …, si−1, si+1, …, sn) of the strategies of all players other than i.
1Technically, these are the von Neumann–Morgenstern utility functions from the previous chapter.
Prisoners' Dilemma
The Prisoners' Dilemma, introduced by A. W. Tucker in the 1940s, is one of the most famous games studied in game theory and will serve here as a nice example to illustrate all the notation just introduced. The title stems from the following situation. Two suspects are arrested for a crime. The district attorney has little evidence in the case and is eager to extract a confession. She separates the suspects and tells each: "If you fink on your companion but your companion doesn't fink on you, I can promise you a reduced (one-year) sentence, whereas your companion will get four years. If you both fink
on each other, you will each get a three-year sentence." Each suspect also knows that if neither of them finks then the lack of evidence will result in being tried for a lesser crime for which the punishment is a two-year sentence.
Boiled down to its essence, the Prisoners' Dilemma has two strategic players: the suspects, labeled 1 and 2. (There is also a district attorney, but because her actions have already been fully specified, there is no reason to complicate the game and include her in the specification.) Each player has two possible strategies open to him: fink or remain silent. Therefore, we write their strategy sets as S1 = S2 = {fink, silent}. To avoid negative numbers we will specify payoffs as the years of freedom over the next four years. For example, if suspect 1 finks and suspect 2 does not, suspect 1 will enjoy three years of freedom and suspect 2 none, that is, u1(fink, silent) = 3 and u2(silent, fink) = 0.
Normal form
The Prisoners' Dilemma (and games like it) can be summarized by the matrix shown in Figure 8.1, called the normal form of the game. Each of the four boxes represents a different combination of strategies and shows the players' payoffs for that combination. The usual convention is to have player 1's strategies in the row headings and player 2's in the column headings and to list the payoffs in order of player 1, then player 2 in each box.
Thinking strategically about the Prisoners' Dilemma
Although we have not discussed how to solve games yet, it is worth thinking about what we might predict will happen in the Prisoners' Dilemma. Studying Figure 8.1, on first thought one might predict that both will be silent. This gives the most total years of freedom for both (four) compared with any other outcome. Thinking a bit deeper, this may not be the best prediction in the game. Imagine ourselves in player 1's position for a moment. We do not know what player 2 will do yet because we have not solved out the game, so let's investigate each possibility. Suppose player 2 chose
to fink. By finking ourselves we would earn one year of freedom versus none if we remained silent, so finking is better for us. Suppose player 2 chose to remain silent. Finking is still better for us than remaining silent because we get three rather than two years of freedom. Regardless of what the other player does, finking is better for us than being silent because it results in an extra year of freedom. Because players are symmetric, the same reasoning holds if we imagine ourselves in player 2's position.
FIGURE 8.1 Normal Form for the Prisoners' Dilemma

                          Suspect 2
                          Fink              Silent
Suspect 1   Fink          u1 = 1, u2 = 1    u1 = 3, u2 = 0
            Silent        u1 = 0, u2 = 3    u1 = 2, u2 = 2

Therefore, the best prediction in the Prisoners' Dilemma is that both will fink. When we formally introduce the main solution concept—Nash equilibrium—we will indeed find that both finking is a Nash equilibrium. The prediction has a paradoxical property: By both finking, the suspects only enjoy one year of freedom, but if they were both silent they would both do better, enjoying two years of freedom. The paradox should not be taken to imply that players are stupid or that our prediction is wrong. Rather, it reveals a central insight from game theory that pitting players against each other in strategic situations sometimes leads to outcomes that are inefficient for the players.2 The suspects might try to avoid the extra prison time by coming to an agreement beforehand to remain silent, perhaps reinforced by threats to retaliate afterward if one or the other finks. Introducing agreements and threats leads to a game that differs from the basic Prisoners' Dilemma, a game that should be analyzed on its own terms using the tools we will develop shortly.
Solving the Prisoners' Dilemma was easy because there were only two players and two strategies and because the strategic calculations involved were fairly straightforward. It would be useful to have a systematic way of solving this as well as more complicated games. Nash equilibrium provides us with such a systematic solution.
Nash Equilibrium
In the economic theory of markets, the concept of equilibrium is developed to indicate a situation in
[…] Prize in economics.
A technicality embedded in the definition is that there may be a set of best responses rather than a unique one; that is why we used the set-inclusion notation si ∈ BRi(s−i). There may be a tie for the best response, in which case the set BRi(s−i) will contain more than one element. If there is not a tie, then there will be a single best response si and we can simply write si = BRi(s−i).
We can now define a Nash equilibrium in an n-player game as follows.
Nash equilibrium. A Nash equilibrium is a strategy profile (s1*, s2*, …, sn*) such that, for each player i = 1, 2, …, n, si* is a best response to the other players' equilibrium strategies s−i*. That is, si* ∈ BRi(s−i*).
These definitions involve a lot of notation. The notation is a bit simpler in a two-player game. In a two-player game, (s1*, s2*) is a Nash equilibrium if s1* and s2* are mutual best responses against each other:
u1(s1*, s2*) ≥ u1(s1, s2*) for all s1 ∈ S1 (8.2)
and
u2(s2*, s1*) ≥ u2(s2, s1*) for all s2 ∈ S2. (8.3)
A Nash equilibrium is stable in that, even if all players revealed their strategies to each other, no player would have an incentive to deviate from his or her equilibrium strategy and choose something else. Nonequilibrium strategies are not stable in this way. If an outcome is not a Nash equilibrium, then at least one player must benefit from deviating. Hyper-rational players could be expected to solve the inference problem and deduce that all would play a Nash equilibrium (especially if there is a unique Nash equilibrium). Even if players are not hyper-rational, over the long run we can expect their play to converge to a Nash equilibrium as they abandon strategies that are not mutual best
responses.
Besides this stability property, another reason Nash equilibrium is used so widely in economics is that it is guaranteed to exist for all games we will study (allowing for mixed strategies, to be defined below; Nash equilibria in pure strategies do not have to exist). The mathematics behind this existence result are discussed at length in the Extensions to this chapter.
Nash equilibrium has some drawbacks. There may be multiple Nash equilibria, making it hard to come up with a unique prediction. Also, the definition of Nash equilibrium leaves unclear how a player can choose a best-response strategy before knowing how rivals will play.
Nash equilibrium in the Prisoners' Dilemma
Let's apply the concepts of best response and Nash equilibrium to the example of the Prisoners' Dilemma. Our educated guess was that both players will end up finking. We will show that both finking is a Nash equilibrium of the game. To do this, we need to show that finking is a best response to the other player's finking. Refer to the payoff matrix in Figure 8.1. If player 2 finks, we are in the first column of the matrix. If player 1 also finks, his payoff is 1; if he is silent, his payoff is 0. Because he earns the most from finking given player 2 finks, finking is player 1's best response to player 2's finking. Because players are symmetric, the same logic implies that player 2's finking is a best response to player 1's finking. Therefore, both finking is indeed a Nash equilibrium.
We can show more: that both finking is the only Nash equilibrium. To do so, we need to rule out the other three outcomes. Consider the outcome in which player 1 finks and player 2 is silent, abbreviated (fink, silent), the upper right corner of the matrix. This is not a Nash equilibrium. Given that player 1 finks, as we have already said, player 2's best response is to fink, not to be silent.
Symmetrically, the outcome in which player 1 is silent and player 2 finks in the lower left corner of the matrix is not a Nash equilibrium. That leaves the outcome in which both are silent. Given that player 2 is silent, we focus our attention on the second column of the matrix: The two rows in that column show that player 1's payoff is 2 from being silent and 3 from finking. Therefore, silent is not a best response to silent; thus, both being silent cannot be a Nash equilibrium.
To rule out a Nash equilibrium, it is enough to find just one player who is not playing a best response and thus would want to deviate to some other strategy. Considering the outcome (fink, silent), although player 1 would not deviate from this outcome (he earns 3, which is the most possible), player 2 would prefer to deviate from silent to fink. Symmetrically, considering the outcome (silent, fink), although player 2 does not want to deviate, player 1 prefers to deviate from silent to fink, so this is not a Nash equilibrium. Considering the outcome (silent, silent), both players prefer to deviate to another strategy, more than enough to rule out this outcome as a Nash equilibrium.
Underlining best-response payoffs
A quick way to find the Nash equilibria of a game is to underline best-response payoffs in the matrix. The underlining procedure is demonstrated for the Prisoners' Dilemma in Figure 8.2. The first step is to underline the payoffs corresponding to player 1's best responses. Player 1's best response is to fink if player 2 finks, so we underline u1 = 1 in the upper left box, and to fink if player 2 is silent, so we underline u1 = 3 in the upper right box. Next, we move to underlining the payoffs corresponding to player 2's best responses. Player 2's best response is to fink if player 1 finks, so we underline u2 = 1 in the upper left box, and to fink if player 1 is silent, so we underline u2 = 3 in the lower left box. Now that the best-response payoffs have been underlined, we look for boxes in which every player's payoff is underlined. These boxes correspond to Nash equilibria. (There may be additional Nash equilibria involving mixed strategies, defined later in the chapter.) In Figure 8.2, only in the upper left box are both payoffs underlined, verifying that (fink, fink)—and none of the other outcomes—is a Nash equilibrium.
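The underlining procedure is mechanical enough to hand to a computer. The Python sketch below (the dictionary encoding of Figure 8.1 is our own, not the text's) marks each player's best-response payoffs and then intersects the two sets of marked cells.

payoffs = {  # (suspect 1's strategy, suspect 2's strategy) -> (u1, u2)
    ("fink", "fink"): (1, 1), ("fink", "silent"): (3, 0),
    ("silent", "fink"): (0, 3), ("silent", "silent"): (2, 2),
}
strategies = ["fink", "silent"]

underlined1 = set()          # cells where u1 is a best-response payoff for player 1
for s2 in strategies:
    top = max(payoffs[(s1, s2)][0] for s1 in strategies)
    underlined1 |= {(s1, s2) for s1 in strategies if payoffs[(s1, s2)][0] == top}

underlined2 = set()          # cells where u2 is a best-response payoff for player 2
for s1 in strategies:
    top = max(payoffs[(s1, s2)][1] for s2 in strategies)
    underlined2 |= {(s1, s2) for s2 in strategies if payoffs[(s1, s2)][1] == top}

print("pure-strategy Nash equilibria:", sorted(underlined1 & underlined2))
# prints [('fink', 'fink')]

Because ties simply produce more marked cells, the same check handles games with many tied best responses, such as Example 8.1 below, without modification.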
FIGURE 8.2 Underlining Procedure in the Prisoners' Dilemma

                          Suspect 2
                          Fink              Silent
Suspect 1   Fink          u1 = 1, u2 = 1    u1 = 3, u2 = 0
            Silent        u1 = 0, u2 = 3    u1 = 2, u2 = 2

(Underlined in the printed figure: u1 = 1 and u2 = 1 in the upper left box, u1 = 3 in the upper right box, and u2 = 3 in the lower left box.)
Dominant strategies
(Fink, fink) is a Nash equilibrium in the Prisoners' Dilemma because finking is a best response to the other player's finking. We can say more: Finking is the best response to all the other player's strategies, fink and silent. (This can be seen, among other ways, from the underlining procedure shown in Figure 8.2: All player 1's payoffs are underlined in the row in which he plays fink, and all player 2's payoffs are underlined in the column in which he plays fink.) A strategy that is a best response to any strategy the other players might choose is called a dominant strategy. Players do not always have dominant strategies, but when they do there is strong reason to believe they will play that way. Complicated strategic considerations do not matter when a player has a dominant strategy because what is best for that player is independent of what others are doing.
Dominant strategy. A dominant strategy is a strategy si* for player i that is a best response to all strategy profiles of other players. That is, si* ∈ BRi(s−i) for all s−i.
Note the difference between a Nash equilibrium strategy and a dominant strategy. A strategy that is part of a Nash equilibrium need only be a best response to one strategy profile of other players
—namely, their equilibrium strategies. A dominant strategy must be a best response not just to the Nash equilibrium strategies of other players but to all the strategies of those players. If all players in a game have a dominant strategy, then we say the game has a dominant strategy equilibrium. As well as being the Nash equilibrium of the Prisoners' Dilemma, (fink, fink) is a dominant strategy equilibrium. It is generally true for all games that a dominant strategy equilibrium, if it exists, is also a Nash equilibrium and is the unique such equilibrium.
Battle of the Sexes
The famous Battle of the Sexes game is another example that illustrates the concepts of best response and Nash equilibrium. The story goes that a wife (player 1) and husband (player 2) would like to meet each other for an evening out. They can go either to the ballet or to a boxing match. Both prefer to spend time together than apart. Conditional on being together, the wife prefers to go to the ballet and the husband to the boxing match. The normal form of the game is presented in Figure 8.3. For brevity we dispense with the u1 and u2 labels on the payoffs and simply re-emphasize the convention that the first payoff is player 1's and the second is player 2's.
FIGURE 8.3 Normal Form for the Battle of the Sexes

                             Player 2 (Husband)
                             Ballet    Boxing
Player 1 (Wife)   Ballet     2, 1      0, 0
                  Boxing     0, 0      1, 2

We will examine the four boxes in Figure 8.3 and determine which are Nash equilibria and which are not. Start with the outcome in which both players choose ballet, written (ballet, ballet), the upper left corner of the payoff matrix. Given that the husband plays ballet, the wife's best response is to play ballet (this gives her her highest payoff in the matrix of 2). Using notation, ballet = BR1(ballet). [We do not need the fancy set-inclusion symbol as in "ballet ∈ BR1(ballet)" because the wife has only one best response to the husband's choosing ballet.] Given that the wife plays ballet, the husband's best response is to play ballet. If he deviated to boxing, then he would earn 0 rather than 1 because they would end up not coordinating.
Using notation, ballet = BR2(ballet). Thus, (ballet, ballet) is indeed a Nash equilibrium. Symmetrically, (boxing, boxing) is a Nash equilibrium.
Consider the outcome (ballet, boxing) in the upper right corner of the matrix. Given the husband chooses boxing, the wife earns 0 from choosing ballet but 1 from choosing boxing; therefore, ballet is not a best response for the wife to the husband's choosing boxing. In notation, ballet ∉ BR1(boxing). Hence (ballet, boxing) cannot be a Nash equilibrium. [The husband's strategy of boxing is not a best response to the wife's playing ballet either; thus, both players would prefer to deviate from (ballet, boxing), although we only need to find one player who would want to deviate to rule out an outcome as a Nash equilibrium.] Symmetrically, (boxing, ballet) is not a Nash equilibrium either.
The Battle of the Sexes is an example of a game with more than one Nash equilibrium (in fact, it has three—a third in mixed strategies, as we will see). It is hard to say which of the two we have found thus far is more plausible because they are symmetric. Therefore, it is difficult to make a firm prediction in this game. The Battle of the Sexes is also an example of a game with no dominant strategies. A player prefers to play ballet if the other plays ballet and boxing if the other plays boxing.
Figure 8.4 applies the underlining procedure, used to find Nash equilibria quickly, to the Battle of the Sexes. The procedure verifies that the two outcomes in which the players succeed in coordinating are Nash equilibria and the two outcomes in which they do not coordinate are not. Examples 8.1 and 8.2 provide additional practice in finding Nash equilibria in more complicated settings (a game that has many ties for best responses in Example 8.1 and a game that has three strategies for each player in Example 8.2).
FIGURE 8.4 Underlining Procedure in the Battle of the Sexes

                             Player 2 (Husband)
                             Ballet    Boxing
Player 1 (Wife)   Ballet     2, 1      0, 0
                  Boxing     0, 0      1, 2

(Underlined in the printed figure: both payoffs in the (ballet, ballet) box and both payoffs in the (boxing, boxing) box.)
EXAMPLE 8.1 The Prisoners' Dilemma Redux
In this variation on the Prisoners' Dilemma, a suspect is convicted and receives a sentence of four years if he is finked on and goes free if not. The district attorney does not reward finking. Figure 8.5 presents the normal form for the game before and after applying the procedure for underlining best responses. Payoffs are again restated in terms of years of freedom.
FIGURE 8.5 The Prisoners' Dilemma Redux
(a) Normal form

                          Suspect 2
                          Fink        Silent
Suspect 1   Fink          0, 0        1, 0
            Silent        0, 1        1, 1

(b) Underlining procedure: every payoff in the matrix is underlined.

Ties for best responses are rife. For example, given player 2 finks, player 1's payoff is 0 whether he finks or is silent. Thus, there is a tie for player 1's best response to player 2's finking. This is an example of the set of best responses containing more than one element: BR1(fink) = {fink, silent}.
The underlining procedure shows that there is a Nash equilibrium in each of the four boxes. Given that suspects receive no personal reward or penalty for finking, they are both indifferent between finking and being silent; thus, any outcome can be a Nash equilibrium.
QUERY: Does any player have a dominant strategy?
EXAMPLE 8.2 Rock, Paper, Scissors
Rock, Paper, Scissors is a children's game in which the two players simultaneously display one of three hand symbols. Figure 8.6 presents the normal form. The zero payoffs along the diagonal show that if players adopt the same strategy then no payments are made. In other cases, the payoffs indicate a $1 payment from loser to winner under the usual hierarchy (rock breaks scissors, scissors cut paper, paper covers rock). As anyone who has played this game knows, and as the underlining procedure reveals, none of the nine boxes represents a Nash equilibrium. Any strategy pair is unstable because it offers at least one of the players an incentive to deviate. For example, (scissors, scissors) provides an incentive for either player 1 or 2 to choose rock; (paper, rock) provides an incentive for player 2 to choose
scissors.
FIGURE 8.6 Rock, Paper, Scissors
(a) Normal form

                       Player 2
                       Rock      Paper     Scissors
Player 1   Rock        0, 0      −1, 1     1, −1
           Paper       1, −1     0, 0      −1, 1
           Scissors    −1, 1     1, −1     0, 0

(b) Underlining procedure: in each column, player 1's payoff of 1 is underlined; in each row, player 2's payoff of 1 is underlined. No box has both payoffs underlined.

The game does have a Nash equilibrium—not any of the nine boxes in the figure but in mixed strategies, defined in the next section.
QUERY: Does any player have a dominant strategy? Why is (paper, scissors) not a Nash equilibrium?
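The same best-response check used for the Prisoners' Dilemma confirms the claim. In the Python sketch below (the encoding of the payoffs is ours), a cell is a pure-strategy Nash equilibrium only if each strategy is a best response to the other, and no cell qualifies.

strategies = ["rock", "paper", "scissors"]
beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def u(a, b):                       # player's payoff from playing a against b
    return 1 if (a, b) in beats else (-1 if (b, a) in beats else 0)

nash = [(s1, s2) for s1 in strategies for s2 in strategies
        if u(s1, s2) >= max(u(t, s2) for t in strategies)      # s1 is a best response to s2
        and u(s2, s1) >= max(u(t, s1) for t in strategies)]    # s2 is a best response to s1
print("pure-strategy Nash equilibria:", nash)                  # prints []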
then they might be inclined to study only those and not the others; therefore, the professor must choose the topics at random to get the students to study everything. Random strategies are also familiar in sports (the same soccer player sometimes shoots to the right of the net and sometimes to the left on penalty kicks) and in card games (the poker player sometimes folds and sometimes bluffs with a similarly poor hand at different times).4 rm i,..., rm r1, where rm i Formal definitions To be more formal, suppose that player i has a set of M possible actions Ai ¼ i,..., aM i,..., am a1, where the subscript refers to the player and the superscript to the i g f different choices. A mixed strategy is a probability distribution over the M actions, i,..., rM is a number between 0 and 1 that indicates the si ¼ ð i Þ probability of player i playing action am i. The probabilities in si must sum to unity: rM r1 In the Battle of the Sexes, for example, both players have the same two actions of ballet and boxing, so we can write A1 ¼ {ballet, boxing}. We can write a mixed strategy as a pair of probabilities (s, 1 s), where s is the probability that the player chooses ballet. The probabilities must sum to unity, and so, with two actions, once the probability of one action is specified, the probability of the other is determined. Mixed strategy (1/3, 2/3) means that the player plays ballet with probability 1/3 and boxing with probability 2/3; (1/2, 1/2) means that the player is equally likely to play ballet or boxing; (1, 0) means that the player chooses ballet with certainty; and (0, 1) means that the player chooses boxing with certainty. A2 ¼ i ¼! 1. In our terminology, a mixed strategy is a general category that includes the special case of a pure strategy. A pure strategy is the special case in which only one action is played with positive probability. Mixed strategies that involve two or more actions being played with positive probability are called strictly mixed strategies. Returning to the examples from the previous paragraph of mixed strategies in the Battle of the Sexes, all four strategies (1/3, 2/3), (1/2, 1/
With this notation for actions and mixed strategies behind us, we do not need new definitions for best response, Nash equilibrium, and dominant strategy. The definitions introduced when $s_i$ was taken to be a pure strategy also apply to the case in which $s_i$ is taken to be a mixed strategy. The only change is that the payoff function $u_i(s_i, s_{-i})$, rather than being a certain payoff, must be reinterpreted as the expected value of a random payoff, with probabilities given by the strategies $s_i$ and $s_{-i}$. Example 8.3 provides some practice in computing expected payoffs in the Battle of the Sexes.

4A third reason is that mixed strategies can be ''purified'' by specifying a more complicated game in which one or the other action is better for the player for privately known reasons and where that action is played with certainty. For example, a history professor might decide to ask an exam question about World War I because, unbeknownst to the students, she recently read an interesting journal article about it. See John Harsanyi, ''Games with Randomly Disturbed Payoffs: A New Rationale for Mixed-Strategy Equilibrium Points,'' International Journal of Game Theory 2 (1973): 1–23. Harsanyi was a co-winner (along with Nash) of the 1994 Nobel Prize in economics.

EXAMPLE 8.3  Expected Payoffs in the Battle of the Sexes

Let's compute players' expected payoffs if the wife chooses the mixed strategy (1/9, 8/9) and the husband (4/5, 1/5) in the Battle of the Sexes. The wife's expected payoff is

$U_1\left(\left(\tfrac{1}{9}, \tfrac{8}{9}\right), \left(\tfrac{4}{5}, \tfrac{1}{5}\right)\right) = \left(\tfrac{1}{9}\right)\left(\tfrac{4}{5}\right)U_1(\text{ballet, ballet}) + \left(\tfrac{1}{9}\right)\left(\tfrac{1}{5}\right)U_1(\text{ballet, boxing}) + \left(\tfrac{8}{9}\right)\left(\tfrac{4}{5}\right)U_1(\text{boxing, ballet}) + \left(\tfrac{8}{9}\right)\left(\tfrac{1}{5}\right)U_1(\text{boxing, boxing}) = \left(\tfrac{1}{9}\right)\left(\tfrac{4}{5}\right)(2) + \left(\tfrac{8}{9}\right)\left(\tfrac{1}{5}\right)(1) = \tfrac{16}{45}. \quad (8.4)$

To understand Equation 8.4, it is helpful to review the concept of expected value from Chapter 2. The expected value of a random variable equals the sum over all outcomes of the probability of the outcome multiplied by the value of the random variable in that outcome.
In the Battle of the Sexes, there are four outcomes, corresponding to the four boxes in Figure 8.3. Because players randomize independently, the probability of reaching a particular box equals the product of the probabilities that each player plays the strategy leading to that box. Thus, for example, the probability of (boxing, ballet)—that is, the wife plays boxing and the husband plays ballet—equals (8/9)(4/5). The probabilities of the four outcomes are multiplied by the value of the relevant random variable (in this case, player 1's payoff) in each outcome.

Next we compute the wife's expected payoff if she plays the pure strategy of going to ballet [the same as the mixed strategy (1, 0)] and the husband continues to play the mixed strategy (4/5, 1/5). Now there are only two relevant outcomes, given by the two boxes in the row in which the wife plays ballet. The probabilities of the two outcomes are given by the probabilities in the husband's mixed strategy. Therefore,

$U_1\left(\text{ballet}, \left(\tfrac{4}{5}, \tfrac{1}{5}\right)\right) = \tfrac{4}{5}U_1(\text{ballet, ballet}) + \tfrac{1}{5}U_1(\text{ballet, boxing}) = \tfrac{4}{5}(2) + \tfrac{1}{5}(0) = \tfrac{8}{5}. \quad (8.5)$

Finally, we will compute the general expression for the wife's expected payoff when she plays mixed strategy $(w, 1-w)$ and the husband plays $(h, 1-h)$: If the wife plays ballet with probability $w$ and the husband with probability $h$, then

$U_1((w, 1-w), (h, 1-h)) = (w)(h)U_1(\text{ballet, ballet}) + (w)(1-h)U_1(\text{ballet, boxing}) + (1-w)(h)U_1(\text{boxing, ballet}) + (1-w)(1-h)U_1(\text{boxing, boxing}) = (w)(h)(2) + (1-w)(1-h)(1) = 1 - h - w + 3hw. \quad (8.6)$

QUERY: What is the husband's expected payoff in each case? Show that his expected payoff is $2 - 2h - 2w + 3hw$ in the general case. Given the husband plays the mixed strategy (4/5, 1/5), what strategy provides the wife with the highest payoff?
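The expected-payoff arithmetic in Example 8.3 is mechanical enough to automate. The following minimal Python sketch (our own illustration, not from the text; the payoff dictionary and function names are ours) computes a player's expected payoff for any pair of mixed strategies in a 2 × 2 game:

    # Battle of the Sexes payoffs: first entry of each key is the wife's action,
    # second is the husband's. Each cell holds (wife's payoff, husband's payoff).
    PAYOFFS = {
        ("ballet", "ballet"): (2, 1),
        ("ballet", "boxing"): (0, 0),
        ("boxing", "ballet"): (0, 0),
        ("boxing", "boxing"): (1, 2),
    }
    ACTIONS = ["ballet", "boxing"]

    def expected_payoff(s1, s2, player):
        """Expected payoff to `player` (0 = wife, 1 = husband) when the wife
        mixes with probabilities s1 over ACTIONS and the husband with s2."""
        total = 0.0
        for i, a1 in enumerate(ACTIONS):
            for j, a2 in enumerate(ACTIONS):
                # Independent randomization: outcome probability is the product.
                total += s1[i] * s2[j] * PAYOFFS[(a1, a2)][player]
        return total

    # Reproduces Equation 8.4: wife plays (1/9, 8/9), husband plays (4/5, 1/5).
    print(expected_payoff([1/9, 8/9], [4/5, 1/5], player=0))  # 16/45 ~= 0.3556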
Computing mixed-strategy equilibria

Computing Nash equilibria of a game when strictly mixed strategies are involved is a bit more complicated than when only pure strategies are involved. Before wading in, we can save a lot of work by asking whether the game even has a Nash equilibrium in strictly mixed strategies. If it does not, having found all the pure-strategy Nash equilibria, then one has finished analyzing the game. The key to guessing whether a game has a Nash equilibrium in strictly mixed strategies is the surprising result that almost all games have an odd number of Nash equilibria.5

Let's apply this insight to some of the examples considered thus far. We found an odd number (one) of pure-strategy Nash equilibria in the Prisoners' Dilemma, suggesting we need not look further for one in strictly mixed strategies. In the Battle of the Sexes, we found an even number (two) of pure-strategy Nash equilibria, suggesting the existence of a third one in strictly mixed strategies. Example 8.2—Rock, Paper, Scissors—has no pure-strategy Nash equilibria. To arrive at an odd number of Nash equilibria, we would expect to find one Nash equilibrium in strictly mixed strategies.
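The first step—finding all the pure-strategy Nash equilibria—amounts to the underlining procedure and is easy to mechanize. Here is a small Python sketch (our own illustration, not from the text) that checks every cell of a two-player payoff matrix for mutual best responses; applied to Rock, Paper, Scissors it confirms there are none, while the Battle of the Sexes yields an even number (two):

    def pure_nash_equilibria(payoffs1, payoffs2):
        """Return all (row, col) cells in which each payoff is a best response.
        payoffs1[r][c] is the row player's payoff; payoffs2[r][c] the column player's."""
        n_rows, n_cols = len(payoffs1), len(payoffs1[0])
        equilibria = []
        for r in range(n_rows):
            for c in range(n_cols):
                row_best = payoffs1[r][c] >= max(payoffs1[i][c] for i in range(n_rows))
                col_best = payoffs2[r][c] >= max(payoffs2[r][j] for j in range(n_cols))
                if row_best and col_best:
                    equilibria.append((r, c))
        return equilibria

    # Rock, Paper, Scissors (Figure 8.6): actions ordered rock, paper, scissors.
    u1 = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    u2 = [[-x for x in row] for row in u1]  # zero-sum: player 2's payoffs are the negatives
    print(pure_nash_equilibria(u1, u2))  # [] -- no pure-strategy Nash equilibrium

    # Battle of the Sexes: two pure-strategy equilibria, (ballet, ballet) and (boxing, boxing).
    b1 = [[2, 0], [0, 1]]
    b2 = [[1, 0], [0, 2]]
    print(pure_nash_equilibria(b1, b2))  # [(0, 0), (1, 1)]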
EXAMPLE 8.4  Mixed-Strategy Nash Equilibrium in the Battle of the Sexes

A general mixed strategy for the wife in the Battle of the Sexes is $(w, 1-w)$ and for the husband is $(h, 1-h)$, where $w$ and $h$ are the probabilities of playing ballet for the wife and husband, respectively. We will compute values of $w$ and $h$ that make up Nash equilibria. Both players have a continuum of possible strategies between 0 and 1. Therefore, we cannot write these strategies in the rows and columns of a matrix and underline best-response payoffs to find the Nash equilibria. Instead, we will use graphical methods to solve for the Nash equilibria.

Given players' general mixed strategies, we saw in Example 8.3 that the wife's expected payoff is

$U_1((w, 1-w), (h, 1-h)) = 1 - h - w + 3hw. \quad (8.7)$

As Equation 8.7 shows, the wife's best response depends on $h$. If $h < 1/3$, she wants to set $w$ as low as possible: $w = 0$. If $h > 1/3$, her best response is to set $w$ as high as possible: $w = 1$. When $h = 1/3$, her expected payoff equals 2/3 regardless of what $w$ she chooses; in this case there is a tie for the best response, including any $w$ from 0 to 1.

In Example 8.3, we stated that the husband's expected payoff is

$U_2((h, 1-h), (w, 1-w)) = 2 - 2h - 2w + 3hw. \quad (8.8)$

When $w < 2/3$, his expected payoff is maximized by $h = 0$; when $w > 2/3$, his expected payoff is maximized by $h = 1$; and when $w = 2/3$, he is indifferent among all values of $h$, obtaining an expected payoff of 2/3 regardless.

The best responses are graphed in Figure 8.7. The Nash equilibria are given by the intersection points between the best responses. At these intersection points, both players are best responding to each other, which is what is required for the outcome to be a Nash equilibrium. There are three Nash equilibria. The points $E_1$ and $E_2$ are the pure-strategy Nash equilibria we found before, with $E_1$ corresponding to the pure-strategy Nash equilibrium in which both play boxing and $E_2$ to that in which both play ballet. Point $E_3$ is the strictly mixed-strategy Nash equilibrium, which can be spelled out as ''the wife plays ballet with probability 2/3 and boxing with probability 1/3 and the husband plays ballet with probability 1/3 and boxing with probability 2/3.'' More succinctly, having defined $w$ and $h$, we may write the equilibrium as ''$w^* = 2/3$ and $h^* = 1/3$.''

5John Harsanyi, ''Oddness of the Number of Equilibrium Points: A New Proof,'' International Journal of Game Theory 2 (1973): 235–50. Games in which there are ties between payoffs may have an even or infinite number of Nash equilibria. Example 8.1, the Prisoners' Dilemma Redux, has several payoff ties. The game has four pure-strategy Nash equilibria and an infinite number of different mixed-strategy equilibria.
FIGURE 8.7  Nash Equilibria in Mixed Strategies in the Battle of the Sexes

Ballet is chosen by the wife with probability $w$ (horizontal axis) and by the husband with probability $h$ (vertical axis). Players' best responses, $BR_1$ (wife) and $BR_2$ (husband), are graphed on the same set of axes. The three intersection points $E_1$, $E_2$, and $E_3$ are Nash equilibria. The Nash equilibrium in strictly mixed strategies, $E_3$, is $w^* = 2/3$ and $h^* = 1/3$.

QUERY: What is a player's expected payoff in the Nash equilibrium in strictly mixed strategies? How does this payoff compare with those in the pure-strategy Nash equilibria? What arguments might be offered that one or another of the three Nash equilibria might be the best prediction in this game?

Example 8.4 runs through the lengthy calculations involved in finding all the Nash equilibria of the Battle of the Sexes, those in pure strategies and those in strictly mixed strategies. A shortcut to finding the Nash equilibrium in strictly mixed strategies is based on the insight that a player will be willing to randomize between two actions in equilibrium only if he or she gets the same expected payoff from playing either action or, in other words, is indifferent between the two actions in equilibrium. Otherwise, one of the two actions would provide a higher expected payoff, and the player would prefer to play that action with certainty.
Suppose the husband is playing mixed strategy $(h, 1-h)$, that is, playing ballet with probability $h$ and boxing with probability $1-h$. The wife's expected payoff from playing ballet is

$U_1(\text{ballet}, (h, 1-h)) = (h)(2) + (1-h)(0) = 2h. \quad (8.9)$

Her expected payoff from playing boxing is

$U_1(\text{boxing}, (h, 1-h)) = (h)(0) + (1-h)(1) = 1 - h. \quad (8.10)$

For the wife to be indifferent between ballet and boxing in equilibrium, Equations 8.9 and 8.10 must be equal: $2h = 1 - h$, implying $h^* = 1/3$. Similar calculations based on the husband's indifference between playing ballet and boxing in equilibrium show that the wife's probability of playing ballet in the strictly mixed-strategy Nash equilibrium is $w^* = 2/3$. (Work through these calculations as an exercise.)

Notice that the wife's indifference condition does not ''pin down'' her own equilibrium mixed strategy: given that she is indifferent between the two actions in equilibrium, her overall expected payoff is the same no matter what probability distribution she plays over the two actions. Rather, the wife's indifference condition pins down the other player's—the husband's—mixed strategy. There is a unique probability distribution he can use to play ballet and boxing that makes her indifferent between the two actions and thus makes her willing to randomize. Given any probability of his playing ballet and boxing other than (1/3, 2/3), it would not be a stable outcome for her to randomize.

Thus, two principles should be kept in mind when seeking Nash equilibria in strictly mixed strategies. One is that a player randomizes over only those actions among which he or she is indifferent, given other players' equilibrium mixed strategies. The second is that one player's indifference condition pins down the other player's mixed strategy.
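Because each indifference condition is linear in the other player's mixing probability, the shortcut reduces to solving two small linear equations. Here is a Python sketch of that calculation for a general 2 × 2 game (our own illustration; the function name and formulas are ours, derived from the indifference conditions):

    from fractions import Fraction

    def strictly_mixed_equilibrium(u1, u2):
        """For a 2x2 game, return (w, h): the probabilities players 1 and 2 put on
        their first actions so that each player is indifferent between actions.
        u1[r][c] and u2[r][c] are the payoffs to players 1 and 2."""
        # Player 1 indifferent between rows given h:
        #   h*u1[0][0] + (1-h)*u1[0][1] = h*u1[1][0] + (1-h)*u1[1][1]
        h = Fraction(u1[1][1] - u1[0][1],
                     u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])
        # Player 2 indifferent between columns given w:
        #   w*u2[0][0] + (1-w)*u2[1][0] = w*u2[0][1] + (1-w)*u2[1][1]
        w = Fraction(u2[1][1] - u2[1][0],
                     u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])
        return w, h

    # Battle of the Sexes: rows/columns ordered (ballet, boxing).
    u1 = [[2, 0], [0, 1]]
    u2 = [[1, 0], [0, 2]]
    print(strictly_mixed_equilibrium(u1, u2))  # (Fraction(2, 3), Fraction(1, 3))

Note the division of labor in the code mirrors the second principle in the text: player 1's payoffs determine $h$, the other player's mixing probability, and vice versa.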
Existence of Equilibrium

One of the reasons Nash equilibrium is so widely used is that a Nash equilibrium is guaranteed to exist in a wide class of games. This is not true for some other equilibrium concepts. Consider the dominant strategy equilibrium concept. The Prisoners' Dilemma has a dominant strategy equilibrium (both suspects fink), but most games do not. Indeed, there are many games—including, for example, the Battle of the Sexes—in which no player has a dominant strategy, let alone all the players. In such games, we cannot make predictions using dominant strategy equilibrium, but we can using Nash equilibrium.

The Extensions section at the end of this chapter will provide the technical details behind John Nash's proof of the existence of his equilibrium in all finite games (games with a finite number of players and a finite number of actions). The existence theorem does not guarantee the existence of a pure-strategy Nash equilibrium. We already saw an example: Rock, Paper, Scissors in Example 8.2. However, if a finite game does not have a pure-strategy Nash equilibrium, the theorem guarantees that it will have a mixed-strategy Nash equilibrium. The proof of Nash's theorem is similar to the proof in Chapter 13 of the existence of prices leading to a general competitive equilibrium. The Extensions section includes an existence theorem for games with a continuum of actions, as studied in the next section.

Continuum of Actions

Most of the insight from economic situations can be gained by distilling the situation down to a few or even two actions, as with all the games studied thus far. Other times, additional insight can be gained by allowing a continuum of actions. To be clear, we have already encountered a continuum of strategies—in our discussion of mixed strategies—but there the probability distributions were over a finite number of actions. In this section we focus on a continuum of actions themselves.

Some settings are more realistically modeled via a continuous range of actions. In Chapter 15, for example, we will study competition between strategic firms. In one model (Bertrand), firms set prices; in another (Cournot), firms set quantities. It is natural to allow firms to choose any non-negative price or quantity rather than artificially restricting them to just two prices (say, $2 or $5) or two quantities (say, 100 or 1,000 units). Continuous actions have several other advantages. The familiar methods from calculus can often be used to solve for Nash equilibria. It is also possible to analyze how the equilibrium actions vary with changes in underlying parameters. With the Cournot model, for example, we might want to know how equilibrium quantities change with a small increase in a firm's marginal costs or a demand parameter.
Tragedy of the Commons

Example 8.5 illustrates how to solve for the Nash equilibrium when the game (in this case, the Tragedy of the Commons) involves a continuum of actions. The first step is to write down the payoff for each player as a function of all players' actions. The next step is to compute the first-order condition associated with each player's payoff maximum. This will give an equation that can be rearranged into the best response of each player as a function of all other players' actions. There will be one equation for each player. With $n$ players, the system of $n$ equations for the $n$ unknown equilibrium actions can be solved simultaneously by either algebraic or graphical methods.

EXAMPLE 8.5  Tragedy of the Commons

The term Tragedy of the Commons has come to signify environmental problems of overuse that arise when scarce resources are treated as common property.6 A game-theoretic illustration of this issue can be developed by assuming that two herders decide how many sheep to graze on the village commons. The problem is that the commons is small and can rapidly succumb to overgrazing.

6This term was popularized by G. Hardin, ''The Tragedy of the Commons,'' Science 162 (1968): 1243–48.

To add some mathematical structure to the problem, let $q_i$ be the number of sheep that herder $i = 1, 2$ grazes on the commons, and suppose that the per-sheep value of grazing on the commons (in terms of wool and sheep-milk cheese) is

$v(q_1, q_2) = 120 - (q_1 + q_2). \quad (8.11)$

This function implies that the value of grazing a given number of sheep is lower the more sheep are around competing for grass. We cannot use a matrix to represent the normal form of this game of continuous actions. Instead, the normal form is simply a listing of the herders' payoff functions

$u_1(q_1, q_2) = q_1 v(q_1, q_2) = q_1(120 - q_1 - q_2), \quad (8.12)$
$u_2(q_1, q_2) = q_2 v(q_1, q_2) = q_2(120 - q_1 - q_2). \quad (8.13)$

To find the Nash equilibrium, we solve herder 1's value-maximization problem:

$\max_{q_1} \{q_1(120 - q_1 - q_2)\}.$
The first-order condition for a maximum is

$120 - 2q_1 - q_2 = 0 \quad (8.14)$

or, rearranging,

$q_1 = 60 - \frac{q_2}{2} = BR_1(q_2). \quad (8.15)$

Similar steps show that herder 2's best response is

$q_2 = 60 - \frac{q_1}{2} = BR_2(q_1). \quad (8.16)$

The Nash equilibrium is given by the pair $(q_1^*, q_2^*)$ that satisfies Equations 8.15 and 8.16 simultaneously. Taking an algebraic approach to the simultaneous solution, Equation 8.16 can be substituted into Equation 8.15, which yields

$q_1 = 60 - \frac{1}{2}\left(60 - \frac{q_1}{2}\right); \quad (8.17)$

on rearranging, this implies $q_1^* = 40$. Substituting $q_1^* = 40$ into Equation 8.16 implies $q_2^* = 40$ as well. Thus, each herder will graze 40 sheep on the commons. Each earns a payoff of 1,600, as can be seen by substituting $q_1^* = q_2^* = 40$ into the payoff function in Equation 8.13.
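The same answer can be obtained numerically by treating the two linear best responses (Equations 8.15 and 8.16) as a system and solving it. A brief Python sketch of this (our own illustration, using NumPy):

    import numpy as np

    # Best responses rearranged as a linear system:
    #   q1 + (1/2) q2 = 60   (Equation 8.15)
    #   (1/2) q1 + q2 = 60   (Equation 8.16)
    A = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
    b = np.array([60.0, 60.0])
    q1, q2 = np.linalg.solve(A, b)
    print(q1, q2)  # 40.0 40.0

    # Equilibrium payoff for each herder (Equations 8.12 and 8.13):
    print(q1 * (120 - q1 - q2))  # 1600.0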
Equations 8.15 and 8.16 can also be solved simultaneously using graphical methods. Figure 8.8 plots the two best responses on a graph with player 1's action on the horizontal axis and player 2's on the vertical axis. These best responses are simply lines and thus are easy to graph in this example. (To be consistent with the axis labels, the inverse of Equation 8.15 is actually what is graphed.) The two best responses intersect at the Nash equilibrium $E_1$.

FIGURE 8.8  Best-Response Diagram for the Tragedy of the Commons

The intersection, $E_1$, between the two herders' best responses is the Nash equilibrium. An increase in the per-sheep value of grazing in the Tragedy of the Commons shifts out herder 1's best response, resulting in a Nash equilibrium $E_2$ in which herder 1 grazes more sheep (and herder 2, fewer sheep) than in the original Nash equilibrium.

The graphical method is useful for showing how the Nash equilibrium shifts with changes in the parameters of the problem. Suppose the per-sheep value of grazing increases for the first herder while the second's remains as in Equation 8.11, perhaps because the first herder starts raising merino sheep with more valuable wool. This change would shift the best response out for herder 1 while leaving herder 2's the same. The new intersection point ($E_2$ in Figure 8.8), which is the new Nash equilibrium, involves more sheep for herder 1 and fewer for herder 2.

The Nash equilibrium is not the best use of the commons. In the original problem, both herders' per-sheep value of grazing is given by Equation 8.11. If both grazed only 30 sheep, then each would earn a payoff of 1,800, as can be seen by substituting $q_1 = q_2 = 30$ into Equation 8.13. Indeed, the ''joint payoff maximization'' problem

$\max_{q_1, q_2} \{(q_1 + q_2)v(q_1, q_2)\} = \max_{q_1, q_2} \{(q_1 + q_2)(120 - q_1 - q_2)\} \quad (8.18)$

is solved by $q_1 = q_2 = 30$ or, more generally, by any $q_1$ and $q_2$ that sum to 60.

QUERY: How would the Nash equilibrium shift if both herders' benefits increased by the same amount? What about a decrease in (only) herder 2's benefit from grazing?

As Example 8.5 shows, graphical methods are particularly convenient for quickly determining how the equilibrium shifts with changes in the underlying parameters. The example shifted the benefit of grazing to one of the herders. This exercise nicely illustrates the nature of strategic interaction. Herder 2's payoff function has not changed (only herder 1's has), yet his equilibrium action changes. The second herder observes the first's higher benefit, anticipates that the first will increase the number of sheep he grazes, and reduces his own grazing in response.
The Tragedy of the Commons shares with the Prisoners' Dilemma the feature that the Nash equilibrium is less efficient for all players than some other outcome. In the Prisoners' Dilemma, both fink in equilibrium when it would be more efficient for both to be silent. In the Tragedy of the Commons, the herders graze more sheep in equilibrium than is efficient. This insight may explain why ocean fishing grounds and other common resources can end up being overused even to the point of exhaustion if their use is left unregulated. More detail on such problems—involving what we will call negative externalities—is provided in Chapter 19.

Sequential Games

In some games, the order of moves matters. For example, in a bicycle race with a staggered start, it may help to go last and thus know the time to beat. On the other hand, competition to establish a new high-definition video format may be won by the first firm to market its technology, thereby capturing an installed base of consumers. Sequential games differ from the simultaneous games we have considered thus far in that a player who moves later in the game can observe how others have played up to that moment. The player can use this information to form more sophisticated strategies than simply choosing an action; the player's strategy can be a contingent plan with the action played depending on what the other players have done.

To illustrate the new concepts raised by sequential games—and, in particular, to make a stark contrast between sequential and simultaneous games—we take a simultaneous game we have discussed already, the Battle of the Sexes, and turn it into a sequential game.

Sequential Battle of the Sexes

Consider the Battle of the Sexes game analyzed previously with all the same actions and payoffs, but now change the timing of moves. Rather than the wife and husband making a simultaneous choice, the wife moves first, choosing ballet or boxing; the husband observes this choice (say, the wife calls him from her chosen location), and then the husband makes his choice. The wife's possible strategies have not changed: She can choose the simple actions ballet or boxing (or perhaps a mixed strategy involving both actions, although this will not be a relevant consideration in the sequential game).
The husband's set of possible strategies has expanded. For each of the wife's two actions, he can choose one of two actions; therefore, he has four possible strategies, which are listed in Table 8.1.

TABLE 8.1  HUSBAND'S CONTINGENT STRATEGIES

Contingent Strategy        Written in Conditional Format
Always go to the ballet    (ballet | ballet, ballet | boxing)
Follow his wife            (ballet | ballet, boxing | boxing)
Do the opposite            (boxing | ballet, ballet | boxing)
Always go to boxing        (boxing | ballet, boxing | boxing)

The vertical bar in the husband's strategies means ''conditional on''; thus, for example, ''boxing | ballet'' should be read as ''the husband chooses boxing conditional on the wife's choosing ballet.'' Given that the husband has four pure strategies rather than just two, the normal form (given in Figure 8.9) must now be expanded to eight boxes. Roughly speaking, the normal form is twice as complicated as that for the simultaneous version of the game in Figure 8.2. This motivates a new way to represent games, called the extensive form, which is especially convenient for sequential games.

Extensive form

The extensive form of a game shows the order of moves as branches of a tree rather than collapsing everything down into a matrix. The extensive form for the sequential Battle of the Sexes is shown in Figure 8.10a. The action proceeds from left to right. Each node (shown as a dot on the tree) represents a decision point for the player indicated there. The first move belongs to the wife. After any action she might take, the husband gets to move. Payoffs are listed at the end of the tree in the same order (player 1's, player 2's) as in the normal form.

Contrast Figure 8.10a with Figure 8.10b, which shows the extensive form for the simultaneous version of the game. It is hard to harmonize an extensive form, in which moves happen in progression, with a simultaneous game, in which everything happens at the same time. The trick is to pick one of the two players to occupy the role of the second mover but then highlight that he or she is not really the second mover by connecting his or her decision nodes in the same information set, the dotted oval around the nodes.
The dotted oval in Figure 8.10b indicates that the husband does not know his wife's move when he chooses his action. It does not matter which player is picked for first and second mover in a simultaneous game; we picked the husband in the figure to make the extensive form in Figure 8.10b look as much like Figure 8.10a as possible. The similarity between the two extensive forms illustrates the point that the extensive form does not grow in complexity for sequential games the way the normal form does. We will next draw on both normal and extensive forms in our analysis of the sequential Battle of the Sexes.

FIGURE 8.9  Normal Form for the Sequential Battle of the Sexes

                         Husband
          (ballet | ballet,  (ballet | ballet,  (boxing | ballet,  (boxing | ballet,
           ballet | boxing)   boxing | boxing)   ballet | boxing)   boxing | boxing)
Wife  Ballet     2, 1             2, 1              0, 0               0, 0
      Boxing     0, 0             1, 2              0, 0               1, 2

FIGURE 8.10  Extensive Form for the Battle of the Sexes

In the sequential version (a), the husband moves second, after observing his wife's move. In the simultaneous version (b), he does not know her choice when he moves, so his decision nodes must be connected in one information set. In both versions, the payoffs at the ends of the branches are (2, 1) after (ballet, ballet), (0, 0) after (ballet, boxing) or (boxing, ballet), and (1, 2) after (boxing, boxing).

Nash equilibria

To solve for the Nash equilibria, return to the normal form in Figure 8.9. Applying the method of underlining best-response payoffs—being careful to underline both payoffs in cases of ties for the best response—reveals three pure-strategy Nash equilibria:

1. wife plays ballet, husband plays (ballet | ballet, ballet | boxing);
2. wife plays ballet, husband plays (ballet | ballet, boxing | boxing);
3. wife plays boxing, husband plays (boxing | ballet, boxing | boxing).
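These three equilibria can be verified mechanically by running the same mutual-best-response check as before over the 2 × 4 normal form in Figure 8.9. A short Python sketch (our own illustration):

    # Rows: wife's actions; columns: husband's four contingent strategies,
    # ordered as in Figure 8.9. Cells hold (wife's payoff, husband's payoff).
    WIFE = [[2, 2, 0, 0],
            [0, 1, 0, 1]]
    HUSBAND = [[1, 1, 0, 0],
               [0, 2, 0, 2]]
    ROWS = ["ballet", "boxing"]
    COLS = ["(ballet|ballet, ballet|boxing)", "(ballet|ballet, boxing|boxing)",
            "(boxing|ballet, ballet|boxing)", "(boxing|ballet, boxing|boxing)"]

    for r in range(2):
        for c in range(4):
            row_best = WIFE[r][c] >= max(WIFE[i][c] for i in range(2))
            col_best = HUSBAND[r][c] >= max(HUSBAND[r][j] for j in range(4))
            if row_best and col_best:
                print("wife plays", ROWS[r], "- husband plays", COLS[c])
    # Prints exactly the three equilibria enumerated above.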
As with the simultaneous version of the Battle of the Sexes, here again we have multiple equilibria. Yet now game theory offers a good way to select among the equilibria. Consider the third Nash equilibrium. The husband's strategy (boxing | ballet, boxing | boxing) involves the implicit threat that he will choose boxing even if his wife chooses ballet. This threat is sufficient to deter her from choosing ballet. Given that she chooses boxing in equilibrium, his strategy earns him 2, which is the best he can do in any outcome. Thus, the outcome is a Nash equilibrium. But the husband's threat is not credible—that is, it is an empty threat. If the wife really were to choose ballet first, then he would give up a payoff of 1 by choosing boxing rather than ballet. It is clear why he would want to threaten to choose boxing, but it is not clear that such a threat should be believed. Similarly, the husband's strategy (ballet | ballet, ballet | boxing) in the first Nash equilibrium also involves an empty threat: that he will choose ballet if his wife chooses boxing. (This is an odd threat to make because he does not gain from making it, but it is an empty threat nonetheless.)

Another way to understand empty versus credible threats is by using the concept of the equilibrium path, the connected path through the extensive form implied by equilibrium strategies. In Figure 8.11, which reproduces the extensive form of the sequential Battle of the Sexes from Figure 8.10, a dashed line is used to identify the equilibrium path for the third of the listed Nash equilibria. The third outcome is a Nash equilibrium because the strategies are rational along the equilibrium path. However, following the wife's choosing ballet—an event that is off the equilibrium path—the husband's strategy is irrational. The concept of subgame-perfect equilibrium in the next section will rule out irrational play both on and off the equilibrium path.

Subgame-perfect equilibrium

Game theory offers a formal way of selecting the reasonable Nash equilibria in sequential games using the concept of subgame-perfect equilibrium. Subgame-perfect equilibrium is a refinement that rules out empty threats by requiring strategies to be rational even for contingencies that do not arise in equilibrium. Before defining subgame-perfect equilibrium formally, we need a few preliminary definitions.
A subgame is a part of the extensive form beginning with a decision node and including everything that branches out to the right of it. A proper subgame is a subgame that starts at a decision node not connected to another in an information set. Conceptually, this means that the player who moves first in a proper subgame knows the actions played by others that have led up to that point. It is easier to see what a proper subgame is than to define it in words. Figure 8.12 shows the extensive forms from the simultaneous and sequential versions of the Battle of the Sexes with boxes drawn around the proper subgames in each. The sequential version (a) has three proper subgames: the game itself and two lower subgames starting with decision nodes where the husband gets to move. The simultaneous version (b) has only one decision node—the topmost node—not connected to another in an information set. Hence this version has only one subgame: the whole game itself.

FIGURE 8.11  Equilibrium Path

In the third of the Nash equilibria listed for the sequential Battle of the Sexes, the wife plays boxing and the husband plays (boxing | ballet, boxing | boxing), tracing out the branches indicated with thick lines (both solid and dashed). The dashed line is the equilibrium path; the rest of the tree is referred to as being ''off the equilibrium path.''

Subgame-perfect equilibrium. A subgame-perfect equilibrium is a strategy profile $(s_1^*, s_2^*, \ldots, s_n^*)$ that is a Nash equilibrium on every proper subgame.

A subgame-perfect equilibrium is always a Nash equilibrium. This is true because the whole game is a proper subgame of itself; thus, a subgame-perfect equilibrium must be a Nash equilibrium for the whole game. In the simultaneous version of the Battle of the Sexes, there is nothing more to say because there are no subgames other than the whole game itself. In the sequential version, subgame-perfect equilibrium has more bite. Strategies must not only form a Nash equilibrium on the whole game itself; they must also form Nash equilibria on the two proper subgames starting with the decision points at which the husband moves.

FIGURE 8.12  Proper Subgames in the Battle of the Sexes

The sequential version in (a) has three proper subgames, labeled A, B, and C. The simultaneous version in (b) has only one proper subgame: the whole game itself, labeled D.
These subgames are simple decision problems, so it is easy to compute the corresponding Nash equilibria. For subgame B, beginning with the husband's decision node following his wife's choosing ballet, he has a simple decision between ballet (which earns him a payoff of 1) and boxing (which earns him a payoff of 0). The Nash equilibrium in this simple decision subgame is for the husband to choose ballet. For the other subgame, C, he has a simple decision between ballet, which earns him 0, and boxing, which earns him 2. The Nash equilibrium in this simple decision subgame is for him to choose boxing. Therefore, the husband has only one strategy that can be part of a subgame-perfect equilibrium: (ballet | ballet, boxing | boxing). Any other strategy has him playing something that is not a Nash equilibrium for some proper subgame. Returning to the three enumerated Nash equilibria, only the second is subgame-perfect; the first and the third are not. For example, the third equilibrium, in which the husband always goes to boxing, is ruled out as a subgame-perfect equilibrium because the husband's strategy component (boxing | ballet) is not a Nash equilibrium in proper subgame B. Thus, subgame-perfect equilibrium rules out the empty threat (of always going to boxing) that we were uncomfortable with earlier.

More generally, subgame-perfect equilibrium rules out any sort of empty threat in a sequential game. In effect, Nash equilibrium requires behavior to be rational only on the equilibrium path. Players can choose potentially irrational actions on other parts of the extensive form. In particular, one player can threaten actions that would damage both players in order to scare the other from choosing certain actions. Subgame-perfect equilibrium requires rational behavior both on and off the equilibrium path. Threats to play irrationally—that is, threats to choose something other than one's best response—are ruled out as being empty.
Backward induction

Our approach to solving for the equilibrium in the sequential Battle of the Sexes was to find all the Nash equilibria using the normal form and then to seek among those for the subgame-perfect equilibrium. A shortcut for finding the subgame-perfect equilibrium directly is to use backward induction, the process of solving for equilibrium by working backward from the end of the game to the beginning. Backward induction works as follows. Identify all the subgames at the bottom of the extensive form. Find the Nash equilibria on these subgames. Replace the (potentially complicated) subgames with the actions and payoffs resulting from Nash equilibrium play on these subgames. Then move up to the next level of subgames and repeat the procedure.

Figure 8.13 illustrates the use of backward induction in the sequential Battle of the Sexes. First, we compute the Nash equilibria of the bottom-most subgames at the husband's decision nodes. In the subgame following his wife's choosing ballet, he would choose ballet, giving payoffs 2 for her and 1 for him. In the subgame following his wife's choosing boxing, he would choose boxing, giving payoffs 1 for her and 2 for him. Next, substitute the husband's equilibrium strategies for the subgames themselves. The resulting game is a simple decision problem for the wife (drawn in the lower panel of the figure): a choice between ballet, which would give her a payoff of 2, and boxing, which would give her a payoff of 1. The Nash equilibrium of this game is for her to choose the action with the higher payoff, ballet. In sum, backward induction allows us to jump straight to the subgame-perfect equilibrium in which the wife chooses ballet and the husband chooses (ballet | ballet, boxing | boxing), bypassing the other Nash equilibria.

FIGURE 8.13  Applying Backward Induction

The last subgames (where player 2 moves) are replaced by the Nash equilibria on these subgames: if the wife plays ballet, player 2 plays ballet | ballet, giving payoffs (2, 1); if she plays boxing, player 2 plays boxing | boxing, giving payoffs (1, 2). The simple game that results can be solved for player 1's equilibrium action.

Backward induction is particularly useful in games that feature many rounds of sequential play. As rounds are added, it quickly becomes too hard to solve for all the Nash equilibria and then to sort through which are subgame-perfect. With backward induction, an additional round is simply accommodated by adding another iteration of the procedure.
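Backward induction is also the natural way to solve game trees by computer. The recursive Python sketch below (our own illustration; the tree encoding is an assumption, not the book's notation) solves the sequential Battle of the Sexes:

    # A node is either a terminal payoff tuple (u_wife, u_husband) or a
    # (player, {action: subtree}) pair; player 0 is the wife, player 1 the husband.
    TREE = (0, {
        "ballet": (1, {"ballet": (2, 1), "boxing": (0, 0)}),
        "boxing": (1, {"ballet": (0, 0), "boxing": (1, 2)}),
    })

    def backward_induction(node, history=()):
        """Return (payoffs, plan), where plan maps each decision point
        (player, actions so far) to the action chosen there."""
        if not isinstance(node[1], dict):
            return node, {}  # terminal node: payoffs, no further choices
        player, branches = node
        best_action, best_payoffs, plan = None, None, {}
        for action, subtree in branches.items():
            payoffs, subplan = backward_induction(subtree, history + (action,))
            plan.update(subplan)
            # Keep the action that maximizes the moving player's own payoff.
            if best_payoffs is None or payoffs[player] > best_payoffs[player]:
                best_action, best_payoffs = action, payoffs
        plan[(player, history)] = best_action
        return best_payoffs, plan

    print(backward_induction(TREE))
    # Payoffs (2, 1): the wife chooses ballet, and the husband's plan is
    # ballet after ballet and boxing after boxing -- the subgame-perfect equilibrium.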
Repeated Games

In the games examined thus far, each player makes one choice and the game ends. In many real-world settings, players play the same game over and over again. For example, the players in the Prisoners' Dilemma may anticipate committing future crimes and thus playing future Prisoners' Dilemmas together. Gasoline stations located across the street from each other, when they set their prices each morning, effectively play a new pricing game every day. The simple constituent game (e.g., the Prisoners' Dilemma or the gasoline-pricing game) that is played repeatedly is called the stage game. As we saw with the Prisoners' Dilemma, the equilibrium in one play of the stage game may be worse for all players than some other, more cooperative, outcome. Repeated play of the stage game opens up the possibility of cooperation in equilibrium. Players can adopt trigger strategies, whereby they continue to cooperate as long as all have cooperated up to that point but revert to playing the Nash equilibrium if anyone deviates from cooperation. We will investigate the conditions under which trigger strategies work to increase players' payoffs. As is standard in game theory, we will focus on subgame-perfect equilibria of the repeated games.

Finitely repeated games

For many stage games, repeating them a known, finite number of times does not increase the possibility for cooperation. To see this point concretely, suppose the Prisoners' Dilemma were played repeatedly for $T$ periods. Use backward induction to solve for the subgame-perfect equilibrium. The lowest subgame is the Prisoners' Dilemma stage game played in period $T$. Regardless of what happened before, the Nash equilibrium on this subgame is for both to fink. Folding the game back to period $T-1$, trigger strategies that condition period $T$ play on what happens in period $T-1$ are ruled out.
Although a player might like to promise to play cooperatively in period $T$ and thus reward the other for playing cooperatively in period $T-1$, we have just seen that nothing that happens in period $T-1$ affects what happens subsequently because players both fink in period $T$ regardless. It is as though period $T-1$ were the last, and the Nash equilibrium of this subgame is again for both to fink. Working backward in this way, we see that players will fink each period; that is, players will simply repeat the Nash equilibrium of the stage game $T$ times.

Reinhard Selten, winner of the Nobel Prize in economics for his contributions to game theory, showed that this logic is general: For any stage game with a unique Nash equilibrium, the unique subgame-perfect equilibrium of the finitely repeated game involves playing the Nash equilibrium every period.7 If the stage game has multiple Nash equilibria, it may be possible to achieve some cooperation in a finitely repeated game. Players can use trigger strategies, sustaining cooperation in early periods on an outcome that is not an equilibrium of the stage game, by threatening to play in later periods the Nash equilibrium that yields a worse outcome for the player who deviates from cooperation.8 Rather than delving into the details of finitely repeated games, we will instead turn to infinitely repeated games, which greatly expand the possibility of cooperation.

Infinitely repeated games

With finitely repeated games, cooperation is possible only if the stage game has multiple equilibria. If, like the Prisoners' Dilemma, the stage game has only one Nash equilibrium, then Selten's result tells us that the finitely repeated game has only one subgame-perfect equilibrium: repeating the stage-game Nash equilibrium each period. Backward induction starting from the last period $T$ unravels any other outcomes. With infinitely repeated games, however, there is no definite ending period $T$ from which to start backward induction. Outcomes involving cooperation do not necessarily end up unraveling. Under some conditions the opposite may be the case, with essentially anything being possible in equilibrium of the infinitely repeated game. This result is sometimes called the folk theorem because it was part of the ''folk wisdom'' of game theory before anyone bothered to prove it formally.

7R. Selten, ''A Simple Model of Imperfect Competition, Where 4 Are Few and 6 Are Many,'' International Journal of Game Theory 2 (1973): 141–201.
8J. P. Benoit and V. Krishna, ''Finitely Repeated Games,'' Econometrica 53 (1985): 890–940.
One difficulty with infinitely repeated games involves adding up payoffs across periods. An infinite stream of low payoffs sums to infinity, just as an infinite stream of high payoffs does. How can the two streams be ranked? We will circumvent this problem with the aid of discounting. Let $\delta$ be the discount factor (discussed in the Chapter 17 Appendix) measuring how much a payoff unit is worth if received one period in the future rather than today. In Chapter 17 we show that $\delta$ is inversely related to the interest rate.9 If the interest rate is high, then a person would much rather receive payment today than next period because investing today's payment would provide a return of principal plus a large interest payment next period.

Besides the interest rate, $\delta$ can also incorporate uncertainty about whether the game continues in future periods. The higher the probability that the game ends after the current period, the lower the expected return from stage games that might not actually be played. Factoring in a probability that the repeated game ends after each period makes the setting of an infinitely repeated game more believable. The crucial issue with an infinitely repeated game is not that it goes on forever but that its end is indeterminate. Interpreted in this way, there is a sense in which infinitely repeated games are more realistic than finitely repeated games with large $T$. Suppose we expect two neighboring gasoline stations to play a pricing game each day until electric cars

9Beware of the subtle difference between the formulas for the present value of an annuity stream used here and in the Chapter 17 Appendix. There the payments came at the end of the period rather than at the beginning as assumed here. So here the present value of a $1 payment per period from now on is $\$1 + \$1 \cdot \delta + \$1 \cdot \delta^2 + \$1 \cdot \delta^3 + \cdots = \frac{\$1}{1 - \delta}$.
which involves the harshest possible punishment for deviation, is called the grim strategy. Less harsh punishments include the so-called tit-for-tat strategy, which involves only one round of punishment for cheating. Because the grim strategy involves the harshest punishment possible, it elicits cooperation for the largest range of cases (the lowest value of $\delta$) of any strategy. Harsh punishments work well because, if players succeed in cooperating, they never experience the losses from the punishment in equilibrium.10

The discount factor $\delta$ is crucial in determining whether trigger strategies can sustain cooperation in the Prisoners' Dilemma or, indeed, in any stage game. As $\delta$ approaches 1, grim-strategy punishments become infinitely harsh because they involve an unending stream of undiscounted losses. Infinite punishments can be used to sustain a wide range of possible outcomes. This is the logic behind the folk theorem for infinitely repeated games. Take any stage-game payoff for a player between the Nash equilibrium one and the highest one that appears anywhere in the payoff matrix. Let $V$ be the present discounted value of the infinite stream of this payoff. The folk theorem says that the player can earn $V$ in some subgame-perfect equilibrium for $\delta$ close enough to 1.11
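As a concrete check of this logic, one can compute the critical discount factor above which grim-trigger cooperation is subgame-perfect. The Python sketch below is our own illustration with assumed stage-game numbers (cooperation pays 2 per period, a one-shot deviation pays 3, and the stage-game Nash payoff is 1); it is not tied to the chapter's specific examples:

    def cooperation_sustainable(coop, deviate, nash, delta):
        """Grim trigger: cooperate until anyone deviates, then play the stage-game
        Nash forever. Cooperation is an equilibrium if the discounted value of
        cooperating beats a one-period deviation gain followed by Nash forever."""
        value_cooperate = coop / (1 - delta)                  # coop, coop, coop, ...
        value_deviate = deviate + delta * nash / (1 - delta)  # deviate once, then Nash
        return value_cooperate >= value_deviate

    # Assumed payoffs: cooperate = 2, one-shot deviation = 3, stage Nash = 1.
    # Indifference: 2/(1-d) = 3 + d*1/(1-d), which solves to d = 1/2.
    for delta in (0.4, 0.5, 0.6):
        print(delta, cooperation_sustainable(2, 3, 1, delta))
    # 0.4 False, 0.5 True, 0.6 True -- cooperation is sustainable once delta >= 1/2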
Incomplete Information

In the games studied thus far, players knew everything there was to know about the setup of the game, including each other's strategy sets and payoffs. Matters become more complicated, and potentially more interesting, if some players have information about the game that others do not. Poker would be different if all hands were played face up. The fun of playing poker comes from knowing what is in your hand but not others'. Incomplete information arises in many other real-world contexts besides parlor games. A sports team may try to hide the injury of a star player from future opponents to prevent them from exploiting this weakness. Firms' production technologies may be trade secrets, and thus firms may not know whether they face efficient or weak competitors. This section (and the next two) will introduce the tools needed to analyze games of incomplete information. The analysis integrates the material on game theory developed thus far in this chapter with the material on uncertainty and information from the previous chapter.

Games of incomplete information can quickly become complicated. Players who lack full information about the game will try to use what they do know to make inferences about what they do not. The inference process can be involved. In poker, for example, knowing what is in your hand can tell you something about what is in others'. A player who holds two aces knows that others are less likely to hold aces because two of the four aces are not available. Information on others' hands can also come from the size of their bets or from their facial expressions (of course, a big bet may be a bluff and a facial expression may be faked). Probability theory provides a formula, called Bayes' rule, for making inferences about hidden information. We will encounter Bayes' rule in a later section. The relevance of Bayes' rule in games of incomplete information has led them to be called Bayesian games.

To limit the complexity of the analysis, we will focus on the simplest possible setting throughout: two-player games in which one of the players (player 1) has private information and the other (player 2) does not. The analysis of games of incomplete information is divided into two sections. The next section begins with the simple case in which the players move simultaneously. The subsequent section then analyzes games in which the informed player 1 moves first. Such games, called signaling games, are more complicated than simultaneous games because player 1's action may signal something about his or her private information to the uninformed player 2. We will introduce Bayes' rule at that point to help analyze player 2's inference about player 1's hidden information based on observations of player 1's action.

10Nobel Prize–winning economist Gary Becker introduced a related point, the maximal punishment principle for crime. The principle says that even minor crimes should receive draconian punishments, which can deter crime with minimal expenditure on policing. The punishments are costless to society because no crimes are committed in equilibrium, so punishments never have to be carried out. See G. Becker, ''Crime and Punishment: An Economic Approach,'' Journal of Political Economy 76 (1968): 169–217. Less harsh punishments may be suitable in settings involving uncertainty. For example, citizens may not be certain about the penal code; police may not be certain they have arrested the guilty party.

11A more powerful version of the folk theorem was proved by D. Fudenberg and E. Maskin, ''The Folk Theorem in Repeated Games with Discounting or with Incomplete Information,'' Econometrica 54 (1986): 533–56. Payoffs below even the Nash equilibrium ones can be generated by some subgame-perfect equilibrium—payoffs all the way down to players' minmax level (the lowest level a player can be reduced to by all other players working against him or her).
games in which the informed player 1 moves first. Such games, called signaling games, are more complicated than simultaneous games because player 1’s action may signal something about his or her private information to the uninformed player 2. We will introduce Bayes’ rule at that point to help analyze player 2’s inference about player 1’s hidden information based on observations of player 1’s action. Simultaneous Bayesian Games In this section we study a two-player, simultaneous-move game in which player 1 has private information but player 2 does not. (We will use ‘‘he’’ for player 1 and ‘‘she’’ for player 2 to facilitate the exposition.) We begin by studying how to model private information. Player types and beliefs John Harsanyi, who received the Nobel Prize in economics for his work on games with incomplete information, provided a simple way to model private information by introducing player characteristics or types.12 Player 1 can be one of a number of possible such types, denoted t. Player 1 knows his own type. Player 2 is uncertain about t and must decide on her strategy based on beliefs about t. Formally, the game begins at an initial node, called a chance node, at which a particular value tk is randomly drawn for player 1’s type t from a set of possible types T {t1, …, tk, …, tK}. Let Pr(tk) be the probability of drawing the particular type tk. Player 1 sees which type is drawn. Player 2 does not see the draw and only knows the probabilities, using them to form her beliefs about player 1’s type. Thus, the probability that player 2 places on player 1’s being of type tk is Pr(tk). ¼ Because player 1 observes his type t before moving, his strategy can be conditioned on t. Conditioning on this information may be a big benefit to a player. In poker, for example, the stronger a player’s hand, the more likely the player is to win the pot and the more aggressively the player may want to bid. Let s1(t) be player 1’s strategy contingent on his type. Because player 2 does not observe t, her strategy is simply the unconditional one, s2. As with games of complete information, players’ payoffs depend on strategies. In Bayesian games
Note that $t$ appears in two places in player 2's payoff function. Player 1's type may have a direct effect on player 2's payoffs. Player 1's type also has an indirect effect through its effect on player 1's strategy $s_1(t)$, which in turn affects player 2's payoffs. Because player 2's payoffs depend on $t$ in these two ways, her beliefs about $t$ will be crucial in the calculation of her optimal strategy.

Figure 8.14 provides a simple example of a simultaneous Bayesian game. Each player chooses one of two actions. All payoffs are known except for player 1's payoff when 1 chooses U and 2 chooses L. Player 1's payoff in outcome (U, L) is identified as his type, $t$. There are two possible values for player 1's type, $t = 6$ and $t = 0$, each occurring with equal probability. Player 1 knows his type before moving. Player 2's beliefs are that each type has probability 1/2. The extensive form is drawn in Figure 8.15.

FIGURE 8.14  Simple Game of Incomplete Information

$t = 6$ with probability 1/2 and $t = 0$ with probability 1/2.

                 Player 2
                L        R
Player 1  U    t, 2     0, 0
          D    2, 0     2, 4

FIGURE 8.15  Extensive Form for Simple Game of Incomplete Information

This figure translates Figure 8.14 into an extensive-form game. The initial chance node, indicated by an open circle, selects $t = 6$ or $t = 0$, each with probability 1/2. Player 2's decision nodes are in the same information set because she does not observe player 1's type or action before moving.

Bayesian–Nash equilibrium

Extending Nash equilibrium to Bayesian games requires two small matters of interpretation. First, recall that player 1 may play a different action for each of his types. Equilibrium requires that player 1's strategy be a best response for each and every one of his types. Second, recall that player 2 is uncertain about player 1's type. Equilibrium requires that player 2's strategy maximize an expected payoff, where the expectation is taken with respect to her beliefs about player 1's type.
We encountered expected payoffs in our discussion of mixed strategies. The calculations involved in computing the best response to the pure strategies of different types of rivals in a game of incomplete information are similar to the calculations involved in computing the best response to a rival's mixed strategy in a game of complete information. Interpreted in this way, Nash equilibrium in the setting of a Bayesian game is called Bayesian–Nash equilibrium. Next we provide a formal definition of the concept for reference. Given that the notation is fairly dense, it may be easier to first skip to Examples 8.6 and 8.7, which provide a blueprint for how to solve for equilibria in Bayesian games you might come across.

Bayesian–Nash equilibrium. In a two-player, simultaneous-move game in which player 1 has private information, a Bayesian–Nash equilibrium is a strategy profile $(s_1^*(t), s_2^*)$ such that $s_1^*(t)$ is a best response to $s_2^*$ for each type $t \in T$ of player 1,

$U_1(s_1^*(t), s_2^*, t) \geq U_1(s_1', s_2^*, t) \quad \text{for all } s_1' \in S_1, \quad (8.22)$

and such that $s_2^*$ is a best response to $s_1^*(t)$ given player 2's beliefs $\Pr(t_k)$ about player 1's types:

$\sum_{t_k \in T} \Pr(t_k) U_2(s_2^*, s_1^*(t_k), t_k) \geq \sum_{t_k \in T} \Pr(t_k) U_2(s_2', s_1^*(t_k), t_k) \quad \text{for all } s_2' \in S_2. \quad (8.23)$
Because the difference between Nash equilibrium and Bayesian–Nash equilibrium is only a matter of interpretation, all our previous results for Nash equilibrium (including the existence proof) apply to Bayesian–Nash equilibrium as well.

EXAMPLE 8.6  Bayesian–Nash Equilibrium of Game in Figure 8.15

To solve for the Bayesian–Nash equilibrium of the game in Figure 8.15, first solve for the informed player's (player 1's) best responses for each of his types. If player 1 is of type $t = 0$, then he would choose D rather than U because he earns 0 by playing U and 2 by playing D regardless of what player 2 does. If player 1 is of type $t = 6$, then his best response is U to player 2's playing L and D to her playing R. This leaves only two possible candidates for an equilibrium in pure strategies:

1 plays (U | $t = 6$, D | $t = 0$) and 2 plays L;
1 plays (D | $t = 6$, D | $t = 0$) and 2 plays R.

The first candidate cannot be an equilibrium because, given that player 1 plays (U | $t = 6$, D | $t = 0$), player 2 earns an expected payoff of 1 from playing L. Player 2 would gain by deviating to R, earning an expected payoff of 2.

The second candidate is a Bayesian–Nash equilibrium. Given that player 2 plays R, player 1's best response is to play D, providing a payoff of 2 rather than 0 regardless of his type. Given that both types of player 1 play D, player 2's best response is to play R, providing a payoff of 4 rather than 0.

QUERY: If the probability that player 1 is of type $t = 6$ is high enough, can the first candidate be a Bayesian–Nash equilibrium? If so, compute the threshold probability.
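The candidate-checking logic of Example 8.6 can be automated by enumerating player 1's type-contingent strategies and comparing player 2's expected payoffs against her beliefs. A small Python sketch (our own illustration) that also speaks to the threshold question in the QUERY:

    from itertools import product

    # Payoffs from Figure 8.14; player 1's payoff in (U, L) is his type t.
    def u1(a1, a2, t):
        return {("U", "L"): t, ("U", "R"): 0, ("D", "L"): 2, ("D", "R"): 2}[(a1, a2)]

    def u2(a1, a2):
        return {("U", "L"): 2, ("U", "R"): 0, ("D", "L"): 0, ("D", "R"): 4}[(a1, a2)]

    def bayes_nash(prob_high, types=(6, 0)):
        equilibria = []
        for s1 in product("UD", repeat=2):  # s1 assigns an action to each type
            for s2 in "LR":                 # s2 is unconditional
                # Each type of player 1 must best respond to s2 ...
                ok1 = all(u1(s1[i], s2, t) >= u1(a, s2, t)
                          for i, t in enumerate(types) for a in "UD")
                # ... and s2 must maximize player 2's expected payoff given beliefs.
                probs = (prob_high, 1 - prob_high)
                ev = lambda a2: sum(p * u2(a1, a2) for p, a1 in zip(probs, s1))
                ok2 = ev(s2) >= max(ev("L"), ev("R"))
                if ok1 and ok2:
                    equilibria.append((s1, s2))
        return equilibria

    print(bayes_nash(1/2))  # [(('D', 'D'), 'R')] -- the equilibrium in Example 8.6
    print(bayes_nash(0.9))  # (('U', 'D'), 'L') now also appears: L needs Pr(t=6) >= 2/3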
EXAMPLE 8.7  Tragedy of the Commons as a Bayesian Game

For an example of a Bayesian game with continuous actions, consider the Tragedy of the Commons in Example 8.5, but now suppose that herder 1 has private information regarding his value of grazing per sheep:

$v_1(q_1, q_2, t) = t - (q_1 + q_2), \quad (8.24)$

where herder 1's type is $t = 130$ (the ''high'' type) with probability 2/3 and $t = 100$ (the ''low'' type) with probability 1/3. Herder 2's value remains the same as in Equation 8.11.

To solve for the Bayesian–Nash equilibrium, we first solve for the informed player's (herder 1's) best responses for each of his types. For any type $t$ and rival's strategy $q_2$, herder 1's value-maximization problem is

$\max_{q_1} \{q_1 v_1(q_1, q_2, t)\} = \max_{q_1} \{q_1(t - q_1 - q_2)\}. \quad (8.25)$

The first-order condition for a maximum is

$t - 2q_1 - q_2 = 0. \quad (8.26)$

Rearranging and then substituting the values $t = 130$ and $t = 100$, we obtain

$q_{1H} = 65 - \frac{q_2}{2} \quad \text{and} \quad q_{1L} = 50 - \frac{q_2}{2}, \quad (8.27)$

where $q_{1H}$ is the quantity for the ''high'' type of herder 1 (i.e., the $t = 130$ type) and $q_{1L}$ for the ''low'' type (the $t = 100$ type).

Next we solve for herder 2's best response. Herder 2's expected payoff is

$\frac{2}{3}[q_2(120 - q_{1H} - q_2)] + \frac{1}{3}[q_2(120 - q_{1L} - q_2)] = q_2(120 - \bar{q}_1 - q_2), \quad (8.28)$

where

$\bar{q}_1 = \frac{2}{3}q_{1H} + \frac{1}{3}q_{1L}. \quad (8.29)$

Rearranging the first-order condition from the maximization of Equation 8.28 with respect to $q_2$ gives

$q_2 = 60 - \frac{\bar{q}_1}{2}. \quad (8.30)$
Substituting for $q_{1H}$ and $q_{1L}$ from Equation 8.27 into Equation 8.29 and then substituting the resulting expression for $\bar{q}_1$ into Equation 8.30 yields

$q_2 = 30 + \frac{q_2}{4}, \quad (8.31)$

implying that $q_2^* = 40$. Substituting $q_2^* = 40$ back into Equation 8.27 implies $q_{1H}^* = 45$ and $q_{1L}^* = 30$.

Figure 8.16 depicts the Bayesian–Nash equilibrium graphically. Herder 2 imagines playing against an average type of herder 1, whose average best response is given by the thick dashed line. The intersection of this best response and herder 2's at point B determines herder 2's equilibrium quantity, $q_2^* = 40$. The best response of the low (resp. high) type of herder 1 to $q_2^* = 40$ is given by point A (resp. point C). For comparison, the full-information Nash equilibria are drawn when herder 1 is known to be the low type (point A′) or the high type (point C′).

FIGURE 8.16  Equilibrium of the Bayesian Tragedy of the Commons

Best responses for herder 2 and both types of herder 1 are drawn as thick solid lines; the expected best response as perceived by herder 2 is drawn as the thick dashed line. The Bayesian–Nash equilibrium of the incomplete-information game is given by points A and C; Nash equilibria of the corresponding full-information games are given by points A′ and C′.

QUERY: Suppose herder 1 is the high type. How does the number of sheep each herder grazes change as the game moves from incomplete to full information (moving from point C′ to C)? What if herder 1 is the low type? Which type prefers full information and thus would like to signal its type? Which type prefers incomplete information and thus would like to hide its type? We will study the possibility that player 1 can signal his type in the next section.
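The three best responses (Equations 8.27 and 8.30) form a linear system in $q_{1H}$, $q_{1L}$, and $q_2$, so the equilibrium can also be checked numerically. A short Python sketch of this (our own illustration, using NumPy):

    import numpy as np

    # Unknowns ordered (q1H, q1L, q2). Best responses rearranged as equalities:
    #   q1H + (1/2) q2 = 65                (high type, Equation 8.27)
    #   q1L + (1/2) q2 = 50                (low type, Equation 8.27)
    #   (1/3) q1H + (1/6) q1L + q2 = 60    (herder 2, Equations 8.29 and 8.30)
    A = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.5],
                  [1/3, 1/6, 1.0]])
    b = np.array([65.0, 50.0, 60.0])
    q1H, q1L, q2 = np.linalg.solve(A, b)
    print(q1H, q1L, q2)  # 45.0 30.0 40.0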
Signaling Games

In this section we move from simultaneous-move games of private information to sequential games in which the informed player, player 1, takes an action that is observable to player 2 before player 2 moves. Player 1's action provides information, a signal, that player 2 can use to update her beliefs about player 1's type, perhaps altering the way player 2 would play in the absence of such information. In poker, for example, player 2 may take a big raise by player 1 as a signal that he has a good hand, perhaps leading player 2 to fold. A firm considering whether to enter a market may take the incumbent firm's low price as a signal that the incumbent is a low-cost producer and thus a tough competitor, perhaps keeping the entrant out of the market. A prestigious college degree may signal that a job applicant is highly skilled.

The analysis of signaling games is more complicated than that of simultaneous games because we need to model how player 2 processes the information in player 1's signal and then updates her beliefs about player 1's type. To fix ideas, we will focus on a concrete application: a version of Michael Spence's model of job-market signaling, for which he won the Nobel Prize in economics.13

13 M. Spence, "Job-Market Signaling," Quarterly Journal of Economics 87 (1973): 355–74.

Job-market signaling

Player 1 is a worker who can be one of two types, high-skilled (t = H) or low-skilled (t = L). Player 2 is a firm that considers hiring the applicant. A low-skilled worker is completely unproductive and generates no revenue for the firm; a high-skilled worker generates revenue π. If the applicant is hired, the firm must pay the worker w (think of this wage as being fixed by government regulation). Assume π > w > 0. Therefore, the firm wishes to hire the applicant if and only if he or she is high-skilled. But the firm cannot observe the applicant's skill; it can observe only the applicant's prior education. Let cH be the high type's cost of obtaining an education and cL the low type's cost. Assume cH < cL, implying that education requires less effort for the high-skilled applicant than the low-skilled one. We make the extreme assumption that education does not increase the worker's productivity directly. The applicant may still decide to obtain an education because of its value as
a signal of ability to future employers. Figure 8.17 shows the extensive form. Player 1 observes his or her type at the start; player 2 observes only player 1's action (education signal) before moving. Let Pr(H) and Pr(L) be player 2's beliefs, before observing player 1's education signal, that player 1 is high- or low-skilled. These are called player 2's prior beliefs. Observing player 1's action will lead player 2 to revise his or her beliefs to form what are called posterior beliefs. For example, the probability that the worker is high-skilled is, conditional on the worker's having obtained an education, Pr(H|E), and, conditional on no education, Pr(H|NE).

FIGURE 8.17 Job-Market Signaling. Player 1 (worker) observes his or her own type. Then player 1 chooses to become educated (E) or not (NE). After observing player 1's action, player 2 (firm) decides to make him or her a job offer (J) or not (NJ). The nodes in player 2's information sets are labeled n1, …, n4 for reference. Terminal payoffs (player 1's, player 2's) are: at n1 (type H after E), J gives (w − cH, π − w) and NJ gives (−cH, 0); at n2 (type L after E), J gives (w − cL, −w) and NJ gives (−cL, 0); at n3 (type H after NE), J gives (w, π − w) and NJ gives (0, 0); at n4 (type L after NE), J gives (w, −w) and NJ gives (0, 0).

Player 2's posterior beliefs are used to compute his or her best response to player 1's education decision. Suppose player 2 sees player 1 choose E. Then player 2's expected payoff from playing J is

Pr(H|E)(π − w) + Pr(L|E)(−w) = Pr(H|E)π − w,  (8.32)

where the right side of this equation follows from the fact that, because L and H are the only types, Pr(L|E) = 1 − Pr(H|E). Player 2's payoff from playing NJ is 0. To determine the best response to E, player 2 compares the expected payoff in Equation 8.32 to 0. Player 2's best response is J if and only if Pr(H|E) ≥ w/π.
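The cutoff rule in Equation 8.32 can be written as a one-line decision function. The sketch below is a hypothetical illustration (the numerical values are invented for the example):

# Firm's best response to an educated applicant, from Equation 8.32.
def firm_best_response(post_H, pi, w):
    expected_J = post_H * (pi - w) + (1 - post_H) * (-w)   # = post_H*pi - w
    return "J" if expected_J >= 0 else "NJ"                # NJ pays 0

print(firm_best_response(post_H=0.5, pi=100, w=40))        # "J", since 0.5 >= 40/100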
The question remains of how to compute posterior beliefs such as Pr(H|E). Rational players use a statistical formula, called Bayes' rule, to revise their prior beliefs to form posterior beliefs based on the observation of a signal.

Bayes' rule

Bayes' rule gives the following formula for computing player 2's posterior belief Pr(H|E):14

Pr(H|E) = Pr(E|H)Pr(H) / [Pr(E|H)Pr(H) + Pr(E|L)Pr(L)].  (8.33)

Similarly, Pr(H|NE) is given by

Pr(H|NE) = Pr(NE|H)Pr(H) / [Pr(NE|H)Pr(H) + Pr(NE|L)Pr(L)].  (8.34)

Two sorts of probabilities appear on the right side of Equations 8.33 and 8.34:

• the prior beliefs Pr(H) and Pr(L);
• the conditional probabilities Pr(E|H), Pr(NE|L), and so forth.

The prior beliefs are given in the specification of the game by the probabilities of the different branches from the initial chance node. The conditional probabilities Pr(E|H), Pr(NE|L), and so forth are given by player 1's equilibrium strategy. For example, Pr(E|H) is the probability that player 1 plays E if he or she is of type H; Pr(NE|L) is the probability that player 1 plays NE if he or she is of type L; and so forth. As the schematic diagram in Figure 8.18 summarizes, Bayes' rule can be thought of as a "black box" that takes prior beliefs and strategies as inputs and gives as outputs the beliefs we must know to solve for an equilibrium of the game: player 2's posterior beliefs.

14 Equation 8.33 can be derived from the definition of conditional probability in footnote 25 of Chapter 2. (Equation 8.34 can be derived similarly.) By definition, Pr(H|E) = Pr(H and E)/Pr(E). Reversing the order of the two events in the conditional probability yields
Pr(E|H) = Pr(H and E)/Pr(H) or, after rearranging, Pr(H and E) = Pr(E|H)Pr(H). Substituting the preceding equation into the first displayed equation of this footnote gives the numerator of Equation 8.33. The denominator follows because the events of player 1's being of type H or L are mutually exclusive and jointly exhaustive, so Pr(E) = Pr(E and H) + Pr(E and L) = Pr(E|H)Pr(H) + Pr(E|L)Pr(L).

FIGURE 8.18 Bayes' Rule as a Black Box. Bayes' rule is a formula for computing player 2's posterior beliefs from other pieces of information in the game. Inputs: player 2's prior beliefs and player 1's strategy. Output: player 2's posterior beliefs.

When player 1 plays a pure strategy, Bayes' rule often gives a simple result. Suppose, for example, that Pr(E|H) = 1 and Pr(E|L) = 0 or, in other words, that player 1 obtains an education if and only if he or she is high-skilled. Then Equation 8.33 implies

Pr(H|E) = (1 · Pr(H)) / [1 · Pr(H) + 0 · Pr(L)] = 1.  (8.35)

That is, player 2 believes that player 1 must be high-skilled if it sees player 1 choose E. On the other hand, suppose that Pr(E|H) = Pr(E|L) = 1, that is, suppose player 1 obtains an education regardless of his or her type. Then Equation 8.33 implies

Pr(H|E) = (1 · Pr(H)) / [1 · Pr(H) + 1 · Pr(L)] = Pr(H),  (8.36)

because Pr(H) + Pr(L) = 1. That is, seeing player 1 play E provides no information about player 1's type, so player 2's posterior belief is the same as his or her prior one. More generally, if player 1 plays the mixed strategy Pr(E|H) = p and Pr(E|L) = q, then Bayes' rule implies that

Pr(H|E) = p Pr(H) / [p Pr(H) + q Pr(L)].  (8.37)
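Figure 8.18's black box is straightforward to implement. The following sketch (illustrative code, not part of the original text) applies Equation 8.33 and reproduces the special cases in Equations 8.35 through 8.37:

# Posterior belief Pr(H|E) via Bayes' rule, Equation 8.33 (illustrative).
def posterior_H_given_E(prior_H, pr_E_given_H, pr_E_given_L):
    num = pr_E_given_H * prior_H
    den = num + pr_E_given_L * (1 - prior_H)   # Pr(L) = 1 - Pr(H)
    if den == 0:
        return None   # E never played in equilibrium: Bayes' rule is silent
    return num / den

print(posterior_H_given_E(0.5, 1, 0))      # 1.0        (Equation 8.35)
print(posterior_H_given_E(0.5, 1, 1))      # 0.5 = Pr(H) (Equation 8.36)
print(posterior_H_given_E(0.5, 0.8, 0.4))  # 2/3        (Equation 8.37, p = 0.8, q = 0.4)

The None branch anticipates the "where possible" qualifier discussed below: when no type plays E, the denominator of Equation 8.33 is zero and the posterior is undefined.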
Perfect Bayesian equilibrium

With games of complete information, we moved from Nash equilibrium to the refinement of subgame-perfect equilibrium to rule out noncredible threats in sequential games. For the same reason, with games of incomplete information we move from Bayesian–Nash equilibrium to the refinement of perfect Bayesian equilibrium.

Perfect Bayesian equilibrium. A perfect Bayesian equilibrium consists of a strategy profile and a set of beliefs such that

• at each information set, the strategy of the player moving there maximizes his or her expected payoff, where the expectation is taken with respect to his or her beliefs; and
• at each information set, where possible, the beliefs of the player moving there are formed using Bayes' rule (based on prior beliefs and other players' strategies).

The requirement that players play rationally at each information set is similar to the requirement from subgame-perfect equilibrium that play on every subgame form a Nash equilibrium. The requirement that players use Bayes' rule to update beliefs ensures that players incorporate the information from observing others' play in a rational way.

The remaining wrinkle in the definition of perfect Bayesian equilibrium is that Bayes' rule need only be used "where possible." Bayes' rule is useless following a completely unexpected event (in the context of a signaling model, an action that is not played in equilibrium by any type of player 1). For example, if neither the H nor the L type chooses E in the job-market signaling game, then the denominator of Equation 8.33 equals zero and the fraction is undefined. If Bayes' rule gives an undefined answer, then perfect Bayesian equilibrium puts no restrictions on player 2's posterior beliefs, and thus we can assume any beliefs we like.

As we saw with games of complete information, signaling games may have multiple equilibria. The freedom to specify any beliefs when
Bayes' rule gives an undefined answer may support additional perfect Bayesian equilibria. A systematic analysis of multiple equilibria starts by dividing the equilibria into three classes: separating, pooling, and hybrid. Then we look for perfect Bayesian equilibria within each class.

In a separating equilibrium, each type of player 1 chooses a different action. Therefore, player 2 learns player 1's type with certainty after observing player 1's action, and the posterior beliefs that come from Bayes' rule are all zeros and ones. In a pooling equilibrium, different types of player 1 choose the same action, so observing player 1's action provides player 2 with no information about player 1's type. Pooling equilibria arise when one of player 1's types chooses an action that would otherwise be suboptimal in order to hide his or her private information. In a hybrid equilibrium, one type of player 1 plays a strictly mixed strategy; it is called a hybrid equilibrium because the mixed strategy sometimes results in the types being separated and sometimes pooled. Player 2 learns a little about player 1's type (Bayes' rule refines player 2's beliefs a bit) but does not learn player 1's type with certainty. Player 2 may respond to the uncertainty by playing a mixed strategy itself. The next three examples solve for the three different classes of equilibrium in the job-market signaling game.

EXAMPLE 8.8 Separating Equilibrium in the Job-Market Signaling Game

A good guess for a separating equilibrium is that the high-skilled worker signals his or her type by getting an education and the low-skilled worker does not. Given these strategies, player 2's beliefs must be Pr(H|E) = Pr(L|NE) = 1 and Pr(L|E) = Pr(H|NE) = 0 according to Bayes' rule. Conditional on these beliefs, if player 2 observes that player 1 obtains an education, then player 2 knows it must be at node n1 rather than n2 in Figure 8.17. Its best response is to offer a job (J), given the payoff of π − w > 0. If player 2 observes that player 1 does not obtain an education, then player 2 knows it must be at node n4 rather than n3, and its best response is not to offer a job (NJ) because 0 > −w.
The last step is to go back and check that player 1 would not want to deviate from the separating strategy (E|H, NE|L) given that player 2 plays (J|E, NJ|NE). Type H of player 1 earns w − cH by obtaining an education in equilibrium. If type H deviates and does not obtain an education, then he or she earns 0 because player 2 believes that player 1 is type L and does not offer a job. For type H not to prefer to deviate, it must be that w − cH > 0. Next, turn to type L of player 1. Type L earns 0 by not obtaining an education in equilibrium. If type L deviates and obtains an education, then he or she earns w − cL because player 2 believes that player 1 is type H and offers a job. For type L not to prefer to deviate, we must have w − cL < 0. Putting these conditions together, there is a separating equilibrium, in which the worker obtains an education if and only if he or she is high-skilled and in which the firm offers a job only to applicants with an education, if and only if cH < w < cL.

Another possible separating equilibrium is for player 1 to obtain an education if and only if he or she is low-skilled. This is a bizarre outcome, because we expect education to be a signal of high rather than low skill, and fortunately we can rule it out as a perfect Bayesian equilibrium. Player 2's best response would be to offer a job if and only if player 1 did not obtain an education. Type L would earn −cL from playing E and w from playing NE, so it would deviate to NE.

QUERY: Why does the worker sometimes obtain an education even though it does not raise his or her skill level? Would the separating equilibrium exist if a low-skilled worker could obtain an education more easily than a high-skilled one?
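The incentive checks in Example 8.8 reduce to two inequalities, so they are simple to mechanize. The sketch below is illustrative, with hypothetical values of w, cH, and cL:

# Check the separating equilibrium of Example 8.8 (illustrative sketch).
def separating_equilibrium_exists(w, c_high, c_low):
    high_no_deviation = (w - c_high) > 0   # type H prefers E over NE
    low_no_deviation = (w - c_low) < 0     # type L prefers NE over E
    return high_no_deviation and low_no_deviation   # together: c_high < w < c_low

print(separating_equilibrium_exists(w=10, c_high=4, c_low=15))  # True
print(separating_equilibrium_exists(w=10, c_high=4, c_low=8))   # False: type L would imitate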
EXAMPLE 8.9 Pooling Equilibria in the Job-Market Signaling Game

Let's investigate a possible pooling equilibrium in which both types of player 1 choose E. For player 1 not to deviate from choosing E, player 2's strategy must be to offer a job if and only if the worker is educated, that is, (J|E, NJ|NE). If player 2 does not offer jobs to educated workers, then player 1 might as well save the cost of obtaining an education and choose NE. If player 2 offers jobs to uneducated workers, then player 1 will again choose NE because he or she saves the cost of obtaining an education and still earns the wage from the job offer.

Next, we investigate when (J|E, NJ|NE) is a best response for player 2. Player 2's posterior beliefs after seeing E are the same as his or her prior beliefs in this pooling equilibrium. Player 2's expected payoff from choosing J is

Pr(H|E)(π − w) + Pr(L|E)(−w) = Pr(H)(π − w) + Pr(L)(−w) = Pr(H)π − w.  (8.38)

For J to be a best response to E, Equation 8.38 must exceed player 2's zero payoff from choosing NJ, which on rearranging implies that Pr(H) ≥ w/π.

Player 2's posterior beliefs at nodes n3 and n4 are not pinned down by Bayes' rule because NE is never played in equilibrium, and so seeing player 1 play NE is a completely unexpected event. Perfect Bayesian equilibrium allows us to specify any probability distribution we like for the posterior beliefs Pr(H|NE) at node n3 and Pr(L|NE) at node n4. Player 2's payoff from choosing NJ is 0. For NJ to be a best response to NE, 0 must exceed player 2's expected payoff from playing J:

0 > Pr(H|NE)(π − w) + Pr(L|NE)(−w) = Pr(H|NE)π − w,  (8.39)

where the right side follows because Pr(H|NE) + Pr(L|NE) = 1. Rearranging yields Pr(H|NE) ≤ w/π.

In sum, for there to be a pooling equilibrium in which both types of player 1 obtain an education, we need Pr(H|NE) ≤ w/π ≤ Pr(H). The firm has to be optimistic about the proportion of skilled workers in the population, so Pr(H) must be sufficiently high, and pessimistic about the skill level of uneducated workers, so Pr(H|NE) must be sufficiently low.
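As a numerical companion to Example 8.9, the sketch below checks the two belief conditions Pr(H|NE) ≤ w/π ≤ Pr(H); the parameter values are hypothetical:

# Conditions supporting the pooling equilibrium of Example 8.9 (illustrative).
def pooling_equilibrium_supported(prior_H, post_H_after_NE, pi, w):
    cutoff = w / pi
    return post_H_after_NE <= cutoff <= prior_H

# Optimistic prior, pessimistic off-path belief: pooling is supported.
print(pooling_equilibrium_supported(prior_H=0.6, post_H_after_NE=0.0, pi=100, w=40))  # True
# Prior below w/pi: the firm would not hire educated workers, so pooling fails.
print(pooling_equilibrium_supported(prior_H=0.3, post_H_after_NE=0.0, pi=100, w=40))  # False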
EXAMPLE 8.10 Hybrid Equilibria in the Job-Market Signaling Game

Consider a hybrid equilibrium in which type H of player 1 obtains an education for certain, type L obtains an education with probability e, and player 2 offers a job to an educated applicant with probability j but never offers a job to an uneducated one. By Bayes' rule (Equation 8.37 with p = 1 and q = e),

Pr(H|E) = Pr(H) / (Pr(H) + e[1 − Pr(H)])  (8.40)

and Pr(H|NE) = 0.

For type L of player 1 to be willing to play a strictly mixed strategy, he or she must get the same expected payoff from playing E (which equals jw − cL, given player 2's mixed strategy) as from playing NE (which equals 0, given that player 2 does not offer a job to uneducated applicants). Hence jw − cL = 0 or, solving for j, j* = cL/w.

Player 2 will play a strictly mixed strategy (conditional on observing E) only if he or she gets the same expected payoff from playing J, which equals

Pr(H|E)(π − w) + Pr(L|E)(−w) = Pr(H|E)π − w,  (8.41)

as from playing NJ, which equals 0. Setting Equation 8.41 equal to 0, substituting for Pr(H|E) from Equation 8.40, and then solving for e gives

e* = (π − w)Pr(H) / (w[1 − Pr(H)]).  (8.42)

QUERY: To complete our analysis, we must check that type H of player 1 does not prefer to deviate from E in this equilibrium. Can you show this? How does the probability of type L trying to "pool" with the high type by obtaining an education vary with player 2's prior belief that player 1 is the high type?
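The mixing probabilities in Example 8.10 can be computed and verified directly. The following sketch is an illustration with invented parameter values (which must satisfy Pr(H) ≤ w/π so that e* is a valid probability):

# Hybrid equilibrium of Example 8.10 (illustrative sketch).
def hybrid_equilibrium(prior_H, pi, w, c_low):
    j_star = c_low / w                                  # makes type L indifferent
    e_star = (pi - w) * prior_H / (w * (1 - prior_H))   # makes the firm indifferent (8.42)
    return j_star, e_star

prior_H, pi, w, c_low = 0.25, 100, 40, 20
j_star, e_star = hybrid_equilibrium(prior_H, pi, w, c_low)

# Verify both indifference conditions.
post_H = prior_H / (prior_H + e_star * (1 - prior_H))   # Equation 8.40
print(j_star * w - c_low)   # 0: type L indifferent between E and NE
print(post_H * pi - w)      # 0: firm indifferent between J and NJ (Equation 8.41)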
Experimental Games

Experimental economics is a recent branch of research that explores how well economic theory matches the behavior of experimental subjects in laboratory settings. The methods are similar to those used in experimental psychology (experiments are often conducted on campus using undergraduates as subjects), although experiments in economics tend to involve incentives in the form of explicit monetary payments to subjects. The importance of experimental economics was highlighted in 2002, when Vernon Smith received the Nobel Prize in economics for his pioneering work in the field. An important area in this field is the use of experimental methods to test game theory.

Experiments with the Prisoners' Dilemma

There have been hundreds of tests of whether players fink in the Prisoners' Dilemma, as predicted by Nash equilibrium, or whether they play the cooperative outcome of Silent. In one experiment, subjects played the game 20 times, with each player being matched with a different, anonymous opponent to avoid repeated-game effects. Play converged to the Nash equilibrium as subjects gained experience with the game. Players played the cooperative action 43 percent of the time in the first five rounds, falling to only 20 percent of the time in the last five rounds.15 As is typical with experiments, subjects' behavior tended to be noisy. Although 80 percent of the decisions were consistent with Nash equilibrium play by the end of the experiment, 20 percent of them still were anomalous. Even when experimental play is roughly consistent with the predictions of theory, it is rarely entirely consistent.

Experiments with the Ultimatum Game

Experimental economics has also tested whether subgame-perfect equilibrium is a good predictor of behavior in sequential games. In one widely studied sequential game, the Ultimatum Game, the experimenter provides a pot of money to two players. The first mover (the Proposer) proposes a split of this pot to the second mover. The second mover (the Responder) then decides whether to accept the offer, in which case players are given the amounts of money indicated, or reject the offer, in which case both players get nothing. In the subgame-perfect equilibrium, the Proposer offers a minimal share of the pot, and this is accepted by the Responder. One can see this by applying backward induction: The Responder should accept any positive division, no matter how small; knowing this, the Proposer should offer the Responder only a minimal share.
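The backward-induction argument can be made concrete with a small computation over a discretized pot; the code below is a hypothetical illustration, not part of the original experiments or text:

# Subgame-perfect equilibrium of a discretized Ultimatum Game (illustrative).
POT = 100   # pot size in dollars
STEP = 1    # smallest unit the Proposer can offer

def responder_accepts(offer):
    return offer > 0   # any positive amount beats the 0 earned from rejecting

# Backward induction: among offers the Responder accepts, the Proposer
# picks the one maximizing his own share, POT - offer.
best_offer = max(
    (offer for offer in range(0, POT + 1, STEP) if responder_accepts(offer)),
    key=lambda offer: POT - offer,
)
print(best_offer)   # 1: the minimal positive share, as backward induction predicts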
In experiments, the division tends to be much more even than in the subgame-perfect equilibrium.16 The most common offer is a 50–50 split. Responders tend to reject offers giving them less than 30 percent of the pot. This result is observed even when the pot is as high as $100, so that rejecting a 30 percent offer means turning down $30. Some economists have suggested that the money players receive may not be a true measure of their payoffs: players may care about other factors, such as fairness, and thus obtain a benefit from a more equal division of the pot. Even if a Proposer does not care directly about fairness, the fear that the Responder may care about fairness, and thus might reject an uneven offer out of spite, may lead the Proposer to propose an even split. The departure of experimental behavior from the predictions of game theory was too systematic in the Ultimatum Game to be attributed to noisy play, leading some game theorists to rethink the theory and add an explicit consideration for fairness.17

Experiments with the Dictator Game

To test whether players care directly about fairness or act out of fear of the other player's spite, researchers experimented with a related game, the Dictator Game. In the Dictator Game, the Proposer chooses a split of the pot, and this split is implemented without input from the Responder. Proposers tend to offer a less even split than in the Ultimatum Game but still offer the Responder some of the pot, suggesting that Proposers have some residual concern for fairness. The details of the experimental design are crucial, however, as one ingenious experiment showed.18 The experiment was designed so that the experimenter would never learn which Proposers had made which offers. With this element of anonymity, Proposers almost never gave an equal split to Responders and indeed took the whole pot for themselves two thirds of the time. Proposers seem to care more about appearing fair to the experimenter than truly being fair.

15 R. Cooper, D. V. DeJong, R. Forsythe, and T. W. Ross, "Cooperation without Reputation: Experimental Evidence from Prisoner's Dilemma Games," Games and Economic Behavior (February 1996): 187–218.
16 For a review of Ultimatum Game experiments and a textbook treatment of experimental economics more generally, see D. D. Davis and C. A. Holt, Experimental Economics (Princeton, NJ: Princeton University Press, 1993).
17 See, for example, E. Fehr and K. M. Schmidt, "A Theory of Fairness, Competition, and Cooperation," Quarterly Journal of Economics (August 1999): 817–68.
18 E. Hoffman, K. McCabe, K. Shachat, and V. Smith, "Preferences, Property Rights, and Anonymity in Bargaining Games," Games and Economic Behavior (November 1994): 346–80.
Evolutionary Games and Learning

The frontier of game-theory research concerns whether and how players come to play a Nash equilibrium. Hyper-rational players may deduce each other's strategies and instantly settle on the Nash equilibrium. But how can they instantly coordinate on a single outcome when there are multiple Nash equilibria? And what outcome would real-world players, for whom hyper-rational deductions may be too complex, settle on?

Game theorists have tried to model the dynamic process by which an equilibrium emerges over the long run from the play of a large population of agents who meet others at random and play a pairwise game. They analyze whether play converges to Nash equilibrium or some other outcome, which Nash equilibrium (if any) is converged to if there are multiple equilibria, and how long such convergence takes. Two models, which make differing assumptions about the level of players' rationality, have been most widely studied: an evolutionary model and a learning model.

In the evolutionary model, players do not make rational decisions; instead, they play the way they are genetically programmed. The more successful a player's strategy in the population, the more fit the player is, the more likely the player is to survive to pass his or her genes on to future generations, and thus the more likely the strategy is to spread in the population. Evolutionary models were initially developed by John Maynard Smith and other biologists to explain the evolution of such animal behavior as how hard a lion fights to win a mate or an ant fights to defend its colony. Although it may be more of a stretch to apply evolutionary models to humans, they provide a convenient way of analyzing population dynamics and may have some direct bearing on how social conventions are passed down, perhaps through culture.

In a learning model, players are again matched at random with others from a large population. Players use their experiences of payoffs from past play to learn how others are playing and how they themselves can best respond. Players usually are assumed to have a degree of rationality in that they can choose a static best response given their beliefs, may do some experimenting, and will update their beliefs according to some reasonable rule. Players are not fully rational in that they do not distort their strategies to affect others' learning and thus future play. Game theorists have investigated whether more or less sophisticated learning strategies converge more or less quickly to a Nash equilibrium. Current research seeks to integrate theory with experimental study, trying to identify the specific algorithms that real-world subjects use when they learn to play games.
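The idea that more successful strategies spread has a standard formalization, replicator dynamics, which offers one way to make the evolutionary story above concrete. The sketch below is illustrative; the Hawk–Dove payoffs (contest value 2, fight cost 4) are a stock example from evolutionary biology and are not taken from this chapter:

# Replicator dynamics for a symmetric two-strategy game (illustrative sketch).
# Hawk-Dove with value V = 2 and fight cost C = 4; the evolutionarily stable
# population share of Hawks is V/C = 0.5.
payoff = {("H", "H"): -1.0, ("H", "D"): 2.0,
          ("D", "H"): 0.0,  ("D", "D"): 1.0}

x = 0.9    # initial share of Hawks in the population
dt = 0.1   # step size
for _ in range(2000):
    f_hawk = x * payoff[("H", "H")] + (1 - x) * payoff[("H", "D")]
    f_dove = x * payoff[("D", "H")] + (1 - x) * payoff[("D", "D")]
    f_avg = x * f_hawk + (1 - x) * f_dove
    x += dt * x * (f_hawk - f_avg)   # above-average strategies spread

print(round(x, 3))   # approaches 0.5, the mixed-strategy Nash equilibrium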
SUMMARY

This chapter provided a structured way to think about strategic situations. We focused on the most important solution concept used in game theory, Nash equilibrium. We then progressed to several more refined solution concepts that are in standard use in game theory in more complicated settings (with sequential moves and incomplete information). Some of the principal results are as follows.

• All games have the same basic components: players, strategies, payoffs, and an information structure.
• Games can be written down in normal form (providing a payoff matrix or payoff functions) or extensive form (providing a game tree).
• Strategies can be simple actions, more complicated plans contingent on others' actions, or even probability distributions over simple actions (mixed strategies).
• A Nash equilibrium is a set of strategies, one for each player, that are mutual best responses. In other words, a player's strategy in a Nash equilibrium is optimal given that all others play their equilibrium strategies.
• A Nash equilibrium always exists in finite games (in mixed if not pure strategies).
• Subgame-perfect equilibrium is a refinement of Nash equilibrium that helps to rule out equilibria in sequential games involving noncredible threats.
• Repeating a stage game a large number of times introduces the possibility of using punishment strategies to attain higher payoffs than if the stage game is played once. If players are sufficiently patient in an infinitely repeated game, then a folk theorem holds, implying that essentially any payoffs are possible in the repeated game.
• In games of private information, one player knows more about his or her "type" than another. Players maximize their expected payoffs given knowledge of their own type and beliefs about the others'.
• In a perfect Bayesian equilibrium of a signaling game, the second mover uses Bayes' rule to update his or her beliefs about the first mover's type after observing the first mover's action.
• The frontier of game-theory research combines theory with experiments to determine whether players who may not be hyper-rational come to play a Nash equilibrium, which particular equilibrium is played (if there are more than one), and what path leads to the equilibrium.

PROBLEMS

8.1
Consider the following game:

                        Player 2
                   D         E         F
             A    7, 6      5, 8      0, 0
Player 1     B    5, 8      7, 6      1, 1
             C    0, 0      1, 1      4, 4

a. Find the pure-strategy Nash equilibria (if any).
b. Find the mixed-strategy Nash equilibrium in which each player randomizes over just the first two actions.
c. Compute players' expected payoffs in the equilibria found in parts (a) and (b).
d. Draw the extensive form for this game.

8.2

The mixed-strategy Nash equilibrium in the Battle of the Sexes in Figure 8.3 may depend on the numerical values for the payoffs. To generalize this solution, assume that the payoff matrix for the game is given by

                            Player 2 (Husband)
                          Ballet        Boxing
Player 1     Ballet       K, 1          0, 0
(Wife)       Boxing       0, 0          1, K

where K ≥ 1. Show how the mixed-strategy Nash equilibrium depends on the value of K.

8.3

The game of Chicken is played by two macho teens who speed toward each other on a single-lane road. The first to veer off is branded the chicken, whereas the one who does not veer gains peer-group esteem. Of course, if neither veers, both die in the resulting crash. Payoffs to the Chicken game are provided in the following table.

                                Teen 2
                         Veer          Does not veer
Teen 1  Veer             2, 2          1, 3
        Does not veer    3, 1          0, 0

a. Draw the extensive form.
b. Find the pure-strategy Nash equilibrium or equilibria.
c. Compute the mixed-strategy Nash equilibrium. As part of your answer, draw the best-response function diagram for the mixed strategies.
d. Suppose the game is played sequentially, with teen 1 moving first and committing to this action by throwing away the steering wheel. What are teen 2's contingent strategies? Write down the normal and extensive forms for the sequential version of the game.
e. Using the normal form for the sequential version of the game, solve for the Nash equilibria.
f. Identify the proper subgames in the extensive form for the sequential version of the game. Use backward induction to solve for the subgame-perfect equilibrium. Explain why the other Nash equilibria of the sequential game are "unreasonable."
8.4

Two neighboring homeowners, i = 1, 2, simultaneously choose how many hours li to spend maintaining a beautiful lawn. The average benefit per hour is

10 − li + lj/2,

and the (opportunity) cost per hour for each is 4. Homeowner i's average benefit is increasing in the hours neighbor j spends on his own lawn because the appearance of one's property depends in part on the beauty of the surrounding neighborhood.

a. Compute the Nash equilibrium.
b. Graph the best-response functions and indicate the Nash equilibrium on the graph.
c. On the graph, show how the equilibrium would change if the intercept of one of the neighbors' average benefit functions fell from 10 to some smaller number.

8.5

The Academy Award–winning movie A Beautiful Mind, about the life of John Nash, dramatizes Nash's scholarly contribution in a single scene: His equilibrium concept dawns on him while in a bar bantering with his fellow male graduate students. They notice several women, one blond and the rest brunette, and agree that the blond is more desirable than the brunettes. The Nash character views the situation as a game among the male graduate students, along the following lines. Suppose there are n males who simultaneously approach either the blond or one of the brunettes. If male i alone approaches the blond, then he is successful in getting a date with her and earns payoff a. If one or more other males approach the blond along with i, the competition causes them all to lose her, and i (as well as the others who approached her) earns a payoff of zero. On the other hand, male i earns a payoff of b > 0 from approaching a brunette because there are more brunettes than males; therefore, i is certain to get a date with a brunette. The desirability of the blond implies a > b.

a. Argue that this game does not have a symmetric pure-strategy Nash equilibrium.
b. Solve for the symmetric mixed-strategy equilibrium. That is, letting p be the probability that a male approaches the blond, find p*.
c. Show that the more males there are, the less likely it is in the equilibrium from part (b) that the blond is approached by at least one of them. Note: This paradoxical
result was noted by S. Anderson and M. Engers in "Participation Games: Market Entry, Coordination, and the Beautiful Blond," Journal of Economic Behavior & Organization 63 (2007): 120–37.

8.6

The following game is a version of the Prisoners' Dilemma, but the payoffs are slightly different than in Figure 8.1.

                           Suspect 2
                      Fink         Silent
Suspect 1  Fink       0, 0         3, −1
           Silent    −1, 3         1, 1

a. Verify that the Nash equilibrium is the usual one for the Prisoners' Dilemma and that both players have dominant strategies.
b. Suppose the stage game is repeated infinitely many times. Compute the discount factor required for the suspects to be able to cooperate on Silent each period. Outline the trigger strategies you are considering for them.

8.7

Return to the game with two neighbors in Problem 8.4. Continue to suppose that player i's average benefit per hour of work on landscaping is

10 − li + lj/2.

Continue to suppose that player 2's opportunity cost of an hour of landscaping work is 4. Suppose that player 1's opportunity cost is either 3 or 5 with equal probability and that this cost is player 1's private information.

a. Solve for the Bayesian–Nash equilibrium.
b. Indicate the Bayesian–Nash equilibrium on a best-response function diagram.
c. Which type of player 1 would like to send a truthful signal to player 2 if it could? Which type would like to hide his or her private information?

8.8

In Blind Texan Poker, player 2 draws a card from a standard deck and places it against her forehead without looking at it but so player 1 can see it. Player 1 moves first, deciding whether to stay or fold. If player 1 folds, he must pay player 2 $50. If player 1 stays, the action goes to player 2. Player 2 can fold or call. If player 2 folds, she must pay player 1 $50. If player 2 calls, the card is examined. If it is a low card (2–8), player 2 pays player 1 $100. If it is a high card (9, 10, jack, queen, king, or ace), player 1 pays player 2 $100.

a. Draw the extensive form for the game.
b.