The topic of today’s talk is an idea so important in game theory, and with so many applications in different fields including computer science, that it earned its discoverer, together with Reinhard Selten and John Harsanyi, the 1994 Nobel Memorial Prize in Economic Sciences.

To introduce this idea, together with other basic game-theoretic notions, we resort to some examples. Here goes the first one:

Alice and Bob are planning an evening at the cinema. Alice would like to watch the romantic movie, while Bob would like to watch the action movie. Neither of them much likes the other’s favored movie; however, should they split up, the sadness of being alone would be so great that neither of them would enjoy his or her movie!

This is the kind of situation modeled by a *game in normal form*, where we have:

- A set of *players* $\{1, \ldots, n\}$.
- A set $S_i$ of *strategies* for each player $i$.
- A collection of *utility functions* $u_1, \ldots, u_n$, which associate to each *strategic profile* $s = (s_1, \ldots, s_n) \in S_1 \times \cdots \times S_n$ a real number, such that $u_i(s)$ is the *utility* player $i$ gets from the strategic profile $s$.

In the case of Alice and Bob, this may be summarized with a table such as the following:

|          | Romantic | Action |
|----------|----------|--------|
| Romantic | 2, 1     | 0, 0   |
| Action   | 0, 0     | 1, 2   |

Such tables represent games in normal form between two players, where the *rows* of the table are labeled with the strategies available to the *first* player, and the *columns* of the table are labeled with the strategies available to the *second* player: the entries of the table indicate the values of the utility functions when the first player plays the corresponding row and the second player plays the corresponding column (in each cell, the first number is the row player’s utility and the second the column player’s). When we want to emphasize the role of player $i$ in contrast to the others, we write $s$ as $(s_i, s_{-i})$, and talk about the strategy $s_i$ of player $i$ *given* the strategic profile $s_{-i}$ of the other players.

Suppose that Alice is the first player, and Bob is the second player: then the table tells us that, if they both choose the romantic movie, Alice will enjoy it a lot (utility value 2) and Bob not very much (utility value 1). However, if Bob *defects* from this strategic profile and goes to watch the action movie, he will ultimately not enjoy it (utility value 0), because he will be sad about not being together with Alice, which was the entire point of organizing the evening at the movies!
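For the computationally minded, the data of such a game can be written down directly. Here is a minimal sketch in Python, with illustrative payoff values (2 for one’s favorite movie watched together, 1 for the other’s, 0 when the couple splits); the names `utilities` and `utility` are ours, not standard:

```python
# A two-player game in normal form: utilities[(row, col)] = (u_alice, u_bob).
# Payoff values are illustrative; only their ordering matters for the analysis.
utilities = {
    ("Romantic", "Romantic"): (2, 1),
    ("Romantic", "Action"):   (0, 0),
    ("Action",   "Romantic"): (0, 0),
    ("Action",   "Action"):   (1, 2),
}

def utility(player, profile):
    """Utility of player 0 (Alice) or 1 (Bob) for a strategic profile."""
    return utilities[profile][player]

# If both pick the romantic movie, Alice enjoys it a lot, Bob less so:
print(utility(0, ("Romantic", "Romantic")))  # 2
print(utility(1, ("Romantic", "Romantic")))  # 1
# But if Bob defects to the action movie alone, he gains nothing:
print(utility(1, ("Romantic", "Action")))    # 0
```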

Let us consider another game (a rather serious one indeed) where the players are a lion and a gazelle. The lion wants to catch the gazelle; the gazelle wants to avoid being caught by the lion. To do this, they may choose between being on the move, or staying more or less in the same place. It turns out, from observation in the field, that the table for the lion-and-gazelle situation is similar to the one below:

|      | Move | Stay |
|------|------|------|
| Move | 2, 3 | 4, 1 |
| Stay | 1, 0 | 3, 2 |

We observe that, for the lion, the most profitable strategy is to move. Indeed, if the gazelle moves, then the utility for the lion is 2 if he moves, which is more than the 1 he gets if he stays; on the other hand, if the gazelle stays, then the utility for the lion is 4 if he moves, which is more than the 3 he gets if he stays. A strategy such as this, which always gives the *best* possible result *independently* of the other players’ strategies, is called a *dominant strategy*. Such strategies are quite rare: neither Alice nor Bob in the previous game had a dominant strategy, nor does the gazelle here, since each of them can maximize his or her own utility only by choosing the *same* strategy as the other player.
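The dominance check just described is mechanical enough to automate. A small Python sketch, again with illustrative payoff values chosen to match the ordering in the text:

```python
# Lion-gazelle payoffs (illustrative values; rows = lion, columns = gazelle):
# utilities[(lion_strategy, gazelle_strategy)] = (u_lion, u_gazelle)
utilities = {
    ("Move", "Move"): (2, 3),
    ("Move", "Stay"): (4, 1),
    ("Stay", "Move"): (1, 0),
    ("Stay", "Stay"): (3, 2),
}
strategies = ["Move", "Stay"]

def dominant_strategy(player):
    """Return a strategy that is strictly best against every opposing
    strategy, or None if the player has no dominant strategy."""
    for s in strategies:
        if all(
            utilities[(s, t) if player == 0 else (t, s)][player]
            > utilities[(r, t) if player == 0 else (t, r)][player]
            for t in strategies
            for r in strategies
            if r != s
        ):
            return s
    return None

print(dominant_strategy(0))  # Move  (the lion always prefers to move)
print(dominant_strategy(1))  # None  (the gazelle wants to mirror the lion)
```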

So, what if we relax the requirement, and just demand that every player chooses the most favorable strategy, *given* the strategies of the other players? This is the basic intuition behind the concept of Nash equilibrium, formalized and studied by John Nash in his 1950 doctoral thesis.

**Definition 1.** A *Nash equilibrium* for a game in normal form is a strategic profile $s^* = (s^*_1, \ldots, s^*_n)$ such that, for every player $i$ and every strategy $s_i$ feasible for player $i$, it is the case that $u_i(s_i, s^*_{-i}) \leq u_i(s^*_i, s^*_{-i})$.

The situation where both the lion and the gazelle are on the move is a Nash equilibrium, and it is the only Nash equilibrium of the corresponding game. (Indeed, a strictly dominant strategy, such as the lion’s, is played in every Nash equilibrium.) The situation where both Alice and Bob go to watch the romantic movie is a Nash equilibrium: and so is the one where they both go to watch the action movie.
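Pure strategy Nash equilibria of a finite game can be found by brute force: try every strategic profile and test whether some player gains by deviating alone. A Python sketch, using the same illustrative payoffs as in the tables above:

```python
from itertools import product

def pure_nash_equilibria(strategies, utilities):
    """All strategic profiles where no player gains by deviating alone."""
    equilibria = []
    for profile in product(*strategies):
        stable = True
        for i, s_i in enumerate(profile):
            for alt in strategies[i]:
                deviation = list(profile)
                deviation[i] = alt
                if utilities[tuple(deviation)][i] > utilities[profile][i]:
                    stable = False
        if stable:
            equilibria.append(profile)
    return equilibria

# Lion and gazelle (illustrative payoffs): a unique equilibrium, both moving.
lion_gazelle = {
    ("Move", "Move"): (2, 3), ("Move", "Stay"): (4, 1),
    ("Stay", "Move"): (1, 0), ("Stay", "Stay"): (3, 2),
}
print(pure_nash_equilibria([["Move", "Stay"]] * 2, lion_gazelle))
# [('Move', 'Move')]

# Alice and Bob: two equilibria, one per movie.
cinema = {
    ("Romantic", "Romantic"): (2, 1), ("Romantic", "Action"): (0, 0),
    ("Action", "Romantic"): (0, 0), ("Action", "Action"): (1, 2),
}
print(pure_nash_equilibria([["Romantic", "Action"]] * 2, cinema))
# [('Romantic', 'Romantic'), ('Action', 'Action')]
```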

So, does every game have a Nash equilibrium?

Actually, no.

Indeed, suppose that the predator and the prey, instead of being large mammals such as the lion and the gazelle, are small insects such as a dragonfly and a mosquito. It then turns out, after careful observation, that the table for the predator-prey game gets more similar to the following:

|      | Move | Stay |
|------|------|------|
| Move | 1, 0 | 0, 1 |
| Stay | 0, 1 | 1, 0 |

In this situation, the dragonfly maximizes its utility if it does the *same* as the mosquito. In turn, however, the mosquito maximizes its own utility if it does the *opposite* of what the dragonfly does! In such a situation there can be no Nash equilibrium as defined above.
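We can confirm mechanically that no strategic profile survives the deviation test. A quick Python check, with illustrative matching-type payoffs (the dragonfly scores when the two insects do the same thing, the mosquito when they do opposite things):

```python
from itertools import product

# Dragonfly-mosquito payoffs (illustrative values; rows = dragonfly).
utilities = {
    ("Move", "Move"): (1, 0), ("Move", "Stay"): (0, 1),
    ("Stay", "Move"): (0, 1), ("Stay", "Stay"): (1, 0),
}
strategies = ["Move", "Stay"]

# A profile is an equilibrium iff each player's utility is at least what
# any unilateral deviation would give.
equilibria = [
    p for p in product(strategies, strategies)
    if all(
        utilities[p][i]
        >= utilities[(alt, p[1]) if i == 0 else (p[0], alt)][i]
        for i in (0, 1) for alt in strategies
    )
]
print(equilibria)  # []  -- no pure strategy Nash equilibrium
```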

Where determinism fails, however, randomization may help.

**Definition 2.** A *mixed strategy* for player $i$ in a game in normal form is a probability distribution $\sigma_i$ on the set $S_i$ of strategies for player $i$. A *mixed strategy profile* is a collection $\sigma = (\sigma_1, \ldots, \sigma_n)$ of mixed strategies, one for each player.

For example, the dragonfly might decide to move with probability $p$, and stay still with probability $1 - p$; similarly, the mosquito might decide to move with probability $q$, and stay still with probability $1 - q$.

With mixed strategies, the important value for player $i$ to take into account is the *expected utility given the mixed strategy profile* $\sigma$:

$$U_i(\sigma) = \sum_{s \in S_1 \times \cdots \times S_n} \left( \prod_{j=1}^{n} \sigma_j(s_j) \right) u_i(s),$$

which we may write $U_i(\sigma_i, \sigma_{-i})$ when we want to emphasize the role of player $i$.
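The expected utility is thus a weighted average of the pure utilities, the weight of each profile being the product of the probabilities the players assign to their components of it. A Python sketch, with the illustrative insect payoffs:

```python
from itertools import product

def expected_utility(player, mixed_profile, strategies, utilities):
    """Expected utility of `player` when each player independently draws a
    pure strategy from his own probability distribution."""
    total = 0.0
    for profile in product(*strategies):
        prob = 1.0
        for j, s_j in enumerate(profile):
            prob *= mixed_profile[j][s_j]  # product of the players' probabilities
        total += prob * utilities[profile][player]
    return total

# Dragonfly-mosquito (illustrative matching payoffs):
utilities = {
    ("Move", "Move"): (1, 0), ("Move", "Stay"): (0, 1),
    ("Stay", "Move"): (0, 1), ("Stay", "Stay"): (1, 0),
}
half = {"Move": 0.5, "Stay": 0.5}
print(expected_utility(0, [half, half], [["Move", "Stay"]] * 2, utilities))
# 0.5
```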

Now, suppose that the dragonfly sets its own parameter $p$ so that the *mosquito’s* expected utility does not change whether the mosquito decides to move or to stay: then the mosquito has no incentive to deviate from its own mixed strategy. With the payoffs in the table above (the dragonfly gains when the two insects do the same thing, the mosquito gains when they do opposite things), this corresponds to

$$p \cdot 0 + (1 - p) \cdot 1 = p \cdot 1 + (1 - p) \cdot 0,$$

which has solution $p = 1/2$. In turn, if the mosquito sets its own parameter $q$ so that the dragonfly’s expected utility does not change whether the dragonfly decides to move or to stay, then

$$q \cdot 1 + (1 - q) \cdot 0 = q \cdot 0 + (1 - q) \cdot 1,$$

which has solution $q = 1/2$. The situation where the dragonfly moves with probability $1/2$ and the mosquito moves with probability $1/2$ is a situation in which neither of the two insects has any advantage in changing its own behavior, given the choice of the other.
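For a 2×2 game, the indifference computation amounts to solving one linear equation. A hedged Python sketch (the payoff values are illustrative, and the function name is ours):

```python
def indifference_mix(opponent_payoffs):
    """Probability with which a player should play his first strategy (0)
    so that the *opponent* is indifferent between the opponent's options.

    opponent_payoffs[(s, t)] = opponent's payoff when this player plays s
    and the opponent plays t, with s, t in {0, 1} (0 = Move, 1 = Stay).
    """
    b = opponent_payoffs
    # Solve p*b[0,0] + (1-p)*b[1,0] == p*b[0,1] + (1-p)*b[1,1] for p.
    denom = b[0, 0] - b[1, 0] - b[0, 1] + b[1, 1]
    return (b[1, 1] - b[1, 0]) / denom

# Illustrative payoffs: the mosquito scores 1 when the insects differ,
# the dragonfly scores 1 when they match (keyed by (own action, opponent action)).
mosquito = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
dragonfly = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

p = indifference_mix(mosquito)   # dragonfly's probability of moving
q = indifference_mix(dragonfly)  # mosquito's probability of moving
print(p, q)  # 0.5 0.5
```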

**Definition 3.** A *mixed strategy Nash equilibrium* for a game in normal form is a mixed strategy profile $\sigma^* = (\sigma^*_1, \ldots, \sigma^*_n)$ such that, for every player $i$ and every mixed strategy $\sigma_i$ feasible for player $i$, it is the case that $U_i(\sigma_i, \sigma^*_{-i}) \leq U_i(\sigma^*_i, \sigma^*_{-i})$.

And here comes Nash’s great result:

**Nash’s theorem.** Every game in normal form that allows at most finitely many *pure* strategic profiles admits at least one Nash equilibrium, *possibly in mixed strategies*.

It is actually sufficient to prove Nash’s theorem (as he did in his doctoral thesis) in the case where there are only finitely many players, each of whom has only finitely many pure strategies: this limitation is only apparent, because the condition that there are finitely many pure strategy profiles means that all players have finitely many pure strategies, and at most finitely many of them have more than one.

The idea of the proof, which we might go through in a future Theory Lunch talk, goes as follows:

- Identify the space of mixed strategic profiles with a compact and convex set $K \subseteq \mathbb{R}^d$ for a suitable $d$.
- For every $n \geq 1$ define a continuous transformation $T_n : K \to K$.
- By the Brouwer fixed-point theorem, for every $n$ there exists a mixed strategic profile $\sigma^{(n)}$ such that $T_n(\sigma^{(n)}) = \sigma^{(n)}$.
- As $K$ is compact, the sequence $(\sigma^{(n)})_{n \geq 1}$ has a limit point $\sigma$.
- By supposing that $\sigma$ is not a mixed strategy Nash equilibrium, we reach a contradiction.
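Though the full proof is beyond the scope of this talk, one half of the fixed-point characterization is easy to check numerically: at an equilibrium no pure strategy yields a positive gain, so the “improvement” map from Nash’s 1951 paper leaves the profile unchanged. A Python sketch for a 2×2 game, with the illustrative insect payoffs:

```python
def nash_map(p, q, a, b):
    """One application of Nash's improvement map to a 2x2 game.

    p, q: probabilities that player 0 / player 1 play strategy 0 ("Move").
    a[(s, t)], b[(s, t)]: payoffs of players 0 and 1 at pure profile (s, t).
    """
    def exp_u(payoffs, prob_p, prob_q):
        # Expected utility under the product distribution (prob_p, prob_q).
        return sum(payoffs[(s, t)] * (prob_p if s == 0 else 1 - prob_p)
                   * (prob_q if t == 0 else 1 - prob_q)
                   for s in (0, 1) for t in (0, 1))

    # Positive part of the gain from switching to each pure strategy.
    g0 = [max(0.0, exp_u(a, 1.0, q) - exp_u(a, p, q)),
          max(0.0, exp_u(a, 0.0, q) - exp_u(a, p, q))]
    g1 = [max(0.0, exp_u(b, p, 1.0) - exp_u(b, p, q)),
          max(0.0, exp_u(b, p, 0.0) - exp_u(b, p, q))]
    new_p = (p + g0[0]) / (1 + sum(g0))
    new_q = (q + g1[0]) / (1 + sum(g1))
    return new_p, new_q

# Illustrative dragonfly-mosquito payoffs (matching game):
a = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}  # dragonfly
b = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # mosquito
print(nash_map(0.5, 0.5, a, b))  # (0.5, 0.5): the equilibrium is a fixed point
```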

We remark that Nash equilibria are not optimal solutions: they are, at most, *lesser evils for everyone given the circumstances*. To better explain this, we illustrate a classic problem in decision theory, called the *prisoner’s dilemma*. The police have arrested two people who are suspects in a bank robbery: however, the only evidence concerns carrying firearms without a license, a minor crime leading to a sentence of one year, compared to the ten years for bank robbery. So, while interrogating each suspect separately, they propose a bargain: if the person testifies against the other for bank robbery, the police will drop the charge of carrying a firearm without a license. The table for the prisoner’s dilemma (with utilities measured as minus the years served) thus has the following form:

|       | Quiet   | Speak    |
|-------|---------|----------|
| Quiet | -1, -1  | -11, 0   |
| Speak | 0, -11  | -10, -10 |

Then the situation where both suspects testify against each other is the only pure strategy Nash equilibrium: however, it is very far from being optimal, as both would serve ten years instead of the single year each would get by keeping quiet…
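The brute-force deviation test from before confirms this. A Python sketch, with utilities written as minus the years served (the specific numbers follow the story: one year for the firearm charge, ten for the robbery, charges dropped for a testifier):

```python
from itertools import product

# Prisoner's dilemma payoffs (illustrative; utilities = minus years served).
utilities = {
    ("Quiet", "Quiet"): (-1, -1),
    ("Quiet", "Speak"): (-11, 0),
    ("Speak", "Quiet"): (0, -11),
    ("Speak", "Speak"): (-10, -10),
}
strategies = ["Quiet", "Speak"]

# Keep the profiles where no suspect gains by unilaterally deviating.
equilibria = [
    prof for prof in product(strategies, strategies)
    if all(
        utilities[prof][i]
        >= utilities[(alt, prof[1]) if i == 0 else (prof[0], alt)][i]
        for i in (0, 1) for alt in strategies
    )
]
print(equilibria)  # [('Speak', 'Speak')] -- yet (Quiet, Quiet) is better for both
```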
