0:00

Okay folks, so we're into the last part of the course

now and we'll be talking about games on networks.

And, in particular, we're still interested in understanding networks and behavior and,

now, trying to bring strategic interaction into play,

where people's decisions depend on what other people are doing.

So the idea is that, essentially,

there are decisions to be made, and it's not just a simple diffusion or contagion process.

It's not updating beliefs.

It's that people care about what other individuals are doing.

So there's complementarities.

I want to only buy a certain program if other people are using that same program.

So the way in which I write articles depends on what my co-authors are doing,

or I want to learn a certain language only

if other people are also speaking that language.

So there's going to be inter-dependencies between what individuals do.

And there could also be situations where I can free ride.

So if somebody else buys a new book,

I can borrow it from them and maybe then I don't buy it myself.

So which people I know have actually bought the book -

maybe that affects whether I buy the book,

either positively or negatively.

So there's strategic inter-dependencies.

And you know, the idea of games -

when people think of games,

we're not talking about Monopoly or chess, checkers, etc.

We're thinking about a situation where there's interactions.

And what a given individual is going to do depends on what other individuals are doing,

so there is some game aspect to it in that sense.

But we're using game theory as a tool to try and

understand exactly how behavior relates to network structure.

Okay.

So what we're going to do is work with some basic definitions.

And I won't presume that you're so familiar with game theory beforehand.

And we'll work through the basic definitions,

which will be pretty self-contained in terms of the network setting,

then work through some examples and then afterwards we'll begin to

do a more formal analysis and more extensive analysis of how these things work.

Okay.

So the idea here is there's going to be different individuals,

they're on a network,

they're each making decisions and you care about the actions of your neighbors.

So the early literature on this came out of the computer science literature.

And what it was really interested in was how complex the computation of

equilibrium is in these settings in worst-case games;

so how hard it would be for a computer to actually

find an equilibrium of one of these games

in a case where nature was making it as hard as possible for you to find an equilibrium.

And what we're going to focus in on is sort of a second branch of this literature,

which instead of being interested in the worst case computational issues,

is instead interested in applying games on networks to actually

understand how networks influence human behavior.

And the one thing that's sort of nice is a lot of

the interactions that we tend to have between individuals will have more structure.

And so the games will be nice ones;

they won't be the worst-case games

that are computationally complex.

They're going to be ones where we can actually

say something meaningful about the structure.

So we're going to start with this as a canonical special case.

So it's a very simple version of the game but

one that's going to be fairly widely applicable.

So we're looking at a situation

where a person i is going to take an action. Let's call that x_i.

And we'll start with the case where it's just a binary action,

it's either zero or one.

So I either buy this book,

or I don't buy the book;

or I invest in the new technology,

I don't; I learn a language,

I don't learn a language;

I end up going to a movie, I don't go to a movie.

And the payoff is going to depend on how many neighbors choose each action.

So how many people choose action zero,

how many neighbors choose action one and how many neighbors I have.

So my payoff is going to depend on those things. Okay.

So we've got each person choosing an action in zero,

one and we're going to consider a situation where your payoffs depend on your action.

So person i's payoff depends on their action.

It's also going to depend on the number of

neighbors of i that choose one,

so how many of my neighbors chose one.

And it will depend on my degree,

how many neighbors I have.

So if I have a hundred neighbors,

it might be different than if I have

three neighbors and two of them are choosing action one.

Two out of three is different than two out of a hundred,

so I might care differently depending on how many neighbors I have. Okay?

So what are the main simplifying assumptions in this setting?

The main simplifying assumptions are that we've got just the zero,

one actions, so we either take an action or we don't.

I only care about the number of friends taking the action,

not the identities of them.

So it's not that I have best friends and less-close friends;

I treat friends equally in terms of who's taking the action.

And it also just depends on my degree,

so on how many friends I have.

I don't have a different preference than somebody else.

So we can enrich these models later to allow for people to

have different preferences and weight things differently.

But for now let's think of a world where everybody treats

their friends equally, and it only matters how many friends they have,

not who their friends are.

Okay. So let's look at an example of a simple game of complements.

I'm willing to choose this new technology if and only if at least t neighbors do.

So suppose, you know, I'm learning to play bridge, a card game.

I have to have at least three friends who play bridge

before I'm going to learn to play bridge, right?

So my payoff to playing action zero,

if I don't learn it, is just zero.

And one example of this would be that I get a payoff from playing action

one which looks like minus the threshold plus the number of friends who play it.

So if this threshold was three,

then I get minus three plus how many of my friends play it.

So, for instance, if at least three of my friends play it,

then I'm going to get a payoff of zero,

if four of my friends play it, I get a payoff of one,

if five of my friends play it,

I get a payoff of two and so forth.

So, taking the threshold to be two now, this would be a very simple example,

where I'm going to be willing to choose action one

if and only if at least two of my neighbors do.

And you could write down all kinds of different payoff matrices.

This is just one example.
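To make that payoff concrete, here's a minimal sketch in Python (the function names are my own, not from the lecture): action zero pays zero, and action one pays the number of adopting neighbors minus the threshold.

```python
# Threshold game of complements: action 0 pays 0; action 1 pays m - t,
# where m is the number of neighbors choosing 1 and t is the threshold.
def payoff(action, m, t):
    return 0 if action == 0 else m - t

# Best response: willing to choose 1 if and only if at least t neighbors do.
def best_response(m, t):
    return 1 if m >= t else 0

# With threshold t = 3: three adopting friends give payoff 0,
# four give payoff 1, five give payoff 2, matching the example above.
print(payoff(1, 3, 3), payoff(1, 4, 3), payoff(1, 5, 3))
```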

And so let's look at a network now.

And we've got a situation where we've got a bunch of different people.

And a person is willing to take action one if and only if at least

two neighbors do, okay?

So this is a game where once at least two of

my friends have bought this new technology, I'm willing to do it.

Otherwise I don't. Okay.

So what do we know first of all? Well, if we look at this network,

all these blue people,

they're going to take action zero because they only have one friend.

Actually, sorry, this person has two friends,

so they shouldn't be marked as a zero.

So these three individuals only have one friend.

So they're definitely going to have to take action zero.

There's no way they're going to have at least two neighbors do it.

But what we can do is we can ask what about this player, right?

Well their action is going to depend on what their other friends do, okay?

And one possibility is that we set, for instance,

these three individuals all to playing action one.

Right. So if these two individuals are doing it,

then this person is willing to;

they're all willing to, because now they each have at least two friends doing it.

So one possibility would be to stay where we were before,

where nobody takes the action because nobody else

does, and so the technology never gets off the ground.

So if it's a technology that needs people to

want to communicate with other people,

and to have other people adopt before they do,

there's a possibility of never getting it seeded;

it never gets off the ground.

Another possibility is, yes,

these three people all adopt it because they each have two friends who do it.

And so that's also an equilibrium, okay?

Now if these are the only people adopting,

then nobody else actually wants to do it because all the other individuals still have,

at most, one friend who did it,

so nobody else is above their threshold.

And indeed it's still an equilibrium

for these three people to do it and nobody else to do it, right?

So nobody else wants to take the action

because none of the other people have two neighbors who do. Okay?
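The two equilibria just described can be checked mechanically. The exact network from the slides isn't reproduced here, so this sketch uses a stand-in graph with the same features: a triangle of three mutually linked players, plus three players with only one friend each.

```python
# Stand-in graph (not the one from the slides): nodes 0, 1, 2 form a
# triangle; nodes 3, 4, 5 each have exactly one friend, so they can
# never have two adopting neighbors.
graph = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5], 3: [0], 4: [1], 5: [2]}
t = 2  # willing to adopt iff at least two neighbors adopt

def is_equilibrium(actions, graph, t):
    """Every player's action must do at least as well as the alternative."""
    for i, nbrs in graph.items():
        m = sum(actions[j] for j in nbrs)
        pay = {0: 0, 1: m - t}  # payoff of each action, given neighbors' play
        if pay[actions[i]] < pay[1 - actions[i]]:
            return False
    return True

# Both "nobody adopts" and "only the triangle adopts" are equilibria.
nobody = {i: 0 for i in graph}
triangle = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}
print(is_equilibrium(nobody, graph, t), is_equilibrium(triangle, graph, t))
```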

9:24

So that's one type of game.

Let's take a look at a game which is going to have sort of an opposite feature to this.

So this was one which had a feature that if more of my friends take the action,

then I'm more likely to want to take the action.

So compatible technologies will have that kind of feature.

But now let's think of the example where if somebody else,

one of my friends buys the book,

I don't buy the book because now I can borrow it from them, okay?

So I'm willing to buy the book if and only if none of my neighbors do.

So, for instance, if I don't buy the book, what's my payoff?

My payoff is one.

If one of my neighbors buys the book,

so the number of neighbors who bought the book is positive,

I can borrow it from them

and I get a payoff of one.

If none of my neighbors bought the book,

I can't borrow it, and I get a payoff of zero,

since I didn't buy it.

Now, instead, I could buy it myself.

And if I end up buying the book myself,

then what do I end up with?

I end up with a payoff of one minus c,

where c is the cost of the book, right?

So I'm in a situation where,

well, in terms of my payoffs here,

my optimal payoff would be I'd love to have one of my friends buy it,

me not buy it and borrow it from them.

That would give me the payoff of one;

that's my best possible payoff.

My worst payoff is nobody buys it and I don't buy it.

So if none of my friends buy it,

then I would actually be willing to buy it,

as long as c is less than one.

And the situation that wouldn't be an equilibrium is going to

be one where none of my friends buy it and I don't buy it.

So if they don't buy it I buy it.

But I won't buy it if one of my friends does. Okay?

So the example we've been looking at is

what's known as a best-shot public goods game.

What matters to any individual is sort of

the max of the actions taken in their neighborhood,

and that's why it's called the best-shot public goods game.

So an agent is willing to take action one if and only if no neighbors do.

So here would be an equilibrium of that game.

This person takes action one,

none of the neighbors do.

This person takes action one because no neighbors do and so forth, right?

That's an equilibrium of this game, okay?

That's a different game, and it's going to

have differently shaped equilibria from what we had before.

Here now we have these people taking action one.

There are multiple equilibria of this game.

There can be different combinations of actions that are equilibria,

and we'll take a look at that in more detail.
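As a rough sketch of the best-shot game (using my own function names and a made-up four-node line network, not the graph from the slides): buying costs c, with 0 &lt; c &lt; 1, and a player gets a benefit of one whenever they or some neighbor owns the book. In this game a profile is an equilibrium exactly when the set of buyers forms a maximal independent set of the network.

```python
# Best-shot public goods game: buying the book costs c (0 < c < 1);
# a player gets benefit 1 if they or any neighbor owns it.
def best_shot_payoff(i, actions, graph, c):
    covered = actions[i] == 1 or any(actions[j] == 1 for j in graph[i])
    return (1 if covered else 0) - c * actions[i]

def is_best_shot_equilibrium(actions, graph, c):
    """No player can gain by unilaterally switching their own action."""
    for i in graph:
        dev = dict(actions)
        dev[i] = 1 - actions[i]
        if best_shot_payoff(i, actions, graph, c) < best_shot_payoff(i, dev, graph, c):
            return False
    return True

# Four players in a line: 0 - 1 - 2 - 3.  Buyers {1, 3} form a maximal
# independent set, so that profile is an equilibrium; "nobody buys" is not,
# since any player would rather pay c than go without the book.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_best_shot_equilibrium({0: 0, 1: 1, 2: 0, 3: 1}, line, 0.5),
      is_best_shot_equilibrium({i: 0 for i in line}, line, 0.5))
```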