0:00

Okay, hi folks. So, we're still talking about learning, and we're going to talk more about the DeGroot model now and try to understand some of its convergence properties. One thing just to start with: let me reiterate a little bit on the structure of the model here. We have b_i(t) equal to the sum over j of T_ij times b_j(t-1), where T_ij is how much weight person i puts on person j's belief from the previous period.

So it's a very simple model. The nice thing about this is that if we want to think about the belief structure, we can stack the beliefs b_1(t), b_2(t), and so forth down to b_n(t) into a vector. That vector is just equal to the matrix T times the vector of beliefs (b_1(t-1), ..., b_n(t-1)). Right? So, when we're thinking about what this looks like, we can represent b at time t, as a vector, as just equal to T times b at time t minus 1.

Okay. So the fact that the updating takes this very simple matrix form means that it's very easy to work with in terms of the mathematics behind it. In particular, it means we can represent the belief at any time t: b(t) is just going to be equal to T raised to the tth power, times b(0). Right? Because b(1) is equal to T times b(0), and b(2) is T times b(1), which is T squared times b(0), and so on. So, we're just raising T to the tth power.

And the reason this model is going to be so nice to work with is that we know a lot about matrices raised to powers, in particular matrices whose rows are non-negative and sum to one. Those are nicely behaved matrices, and a lot of theory has been developed around them in different areas of mathematics, particularly in the study of Markov chains, which have exactly these kinds of properties. So the beliefs take a very simple form,

and what we can do now is begin to ask questions: when is there convergence, when is there consensus, and who has influence? When there is convergence and consensus is just going to depend on properties of this matrix. Who has influence will depend on what this matrix ends up looking like when we raise it to higher and higher powers: what does it end up looking like, and who ends up with a lot of weight in that system? Okay? So, understanding this model is really going to depend on these kinds of properties. Okay, so let's talk a little bit about

convergence. Let's start with a very simple example: person 3 here just listens to person 2, 2 just listens to 1, and 1 puts equal weight on 2 and 3. So this is a world where nobody pays attention to themselves; they're all listening to somebody else, and who they're listening to depends on who they are.

Â 3:51

This is a world that will converge. Let's start with, say, a belief of 1 for person 1 and 0 for persons 2 and 3. So that's the initial belief. What's going to happen? Well, person 1 starts with a belief of 1, and person 2 is going to adopt that belief, right, since they're putting weight 1 on person 1. So after one period: person 2 gets a 1. Person 1 is averaging two zeros, so they get a zero, and person 3 is just paying attention to person 2, so they also get a zero. So the belief after one period is (0, 1, 0). You can keep iterating this, and so forth, and it will eventually converge to two-fifths, two-fifths, two-fifths. So if you look at what this process converges to, it's going to be two-fifths for each agent, and it converges fairly nicely.
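As a quick check, here's a sketch in Python of this three-person example, iterating the updating rule from the initial beliefs (1, 0, 0):

```python
def degroot_step(T, b):
    # One round of updating: each i averages others' beliefs with weights T[i]
    n = len(b)
    return [sum(T[i][j] * b[j] for j in range(n)) for i in range(n)]

# Person 1 puts weight 1/2 on each of 2 and 3; 2 listens only to 1; 3 only to 2
T = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

b = [1.0, 0.0, 0.0]  # b(0): person 1 believes 1, the others 0
for _ in range(100):
    b = degroot_step(T, b)
print([round(x, 6) for x in b])  # all three beliefs approach 2/5 = 0.4
```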

Okay. Let's look at another example. All I'm going to do now is make one switch. Before, 3 was listening to 2, and we're going to move that weight and have 3 listen to 1 instead. Okay? So this entry was a 1 before, and we're going to move the weight: instead of 3 listening to 2, 3 is now going to listen to 1. Okay? And let's look at what happens in this particular setting, starting with the same beliefs we had before. So all we did was change 3 from listening to 2 to listening to 1. And what happens now?

Well, we start with these beliefs, and after one period, 1 is now going to believe 0. But both 2 and 3 are listening to 1, so they're both going to switch to 1. Okay? So now we've got a situation where both 2 and 3 believe 1, and 1 believes 0. And 1 is listening to 2 and 3, and 2 and 3 are listening to 1, so they're going to switch again, right?

Â 5:56

And then they're going to switch again, and switch again, and so forth. So what's happening here is that they just keep exchanging beliefs. An even easier example of something like this would be if person 1 listened only to person 2 and person 2 listened only to person 1: they would just keep swapping beliefs back and forth. And that's effectively what's going on here.
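Here's a sketch of the switched example in Python, showing the beliefs blinking rather than settling down:

```python
def degroot_step(T, b):
    n = len(b)
    return [sum(T[i][j] * b[j] for j in range(n)) for i in range(n)]

# Same network as before, except person 3 now listens to 1 instead of 2
T = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]

b = [1.0, 0.0, 0.0]
for t in range(1, 5):
    b = degroot_step(T, b)
    print(t, b)
# beliefs blink forever: (0,1,1), (1,0,0), (0,1,1), (1,0,0), ...
```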

So, we get blinking, we get no convergence in this setting, okay? The beliefs just keep changing over time; they never converge. And obviously this is fairly naive, in the sense that the agents don't realize they keep exchanging beliefs back and forth, and they keep doing that infinitely often. But this is a system that's not convergent. Okay. Now, what has to be true for this to happen? It's actually based on the cycles in this graph, and here all the cycles are of length two. We've set the beliefs up in such a way that they keep blinking back and forth, and so it's possible for the beliefs to keep switching back and forth along these cycles. So there's a cyclicity in the graph which it's possible to reproduce in the beliefs. Okay?

Â Okay? Now, it could be that, you know, for

Â other beliefs you would, might have converged.

Â If we started everybody with a belief of 1, they would've converged to a belief of

Â 1. So it depends on which things which we

Â choose but whenever there are even cycles, all the cycles are even in, in a

Â given graph, then we could find situations where we're going to get

Â non-convergence. So more generally let's actually look at

the conditions which are going to define convergence. So we start with some particular network of weights T that people put on different individuals. We'll say that the society converges, that T is convergent, if the limit of T raised to the t, times b, exists for all initial belief vectors b. So no matter what initial beliefs we start with, the overall beliefs would converge. Okay?

So, that'll be a definition. And in terms of characterizing this, we'll say that T is aperiodic if the greatest common divisor of the lengths of all its simple cycles is 1, okay? So, for instance, in the two examples we looked at before: here we can find a cycle of length three, and we have a cycle of length two. The greatest common divisor of three and two: they don't divide into each other, and the only thing that divides both of them is one. The greatest common divisor is one, so this thing is aperiodic. Right. So this one is aperiodic. What do the cycles look like over here? Well, there's a cycle of length two. Here's another cycle of length two.

Â 8:56

Here's a cycle of length four. All the cycles are even: the cycle lengths are 2, 4, 6, 8, etc. The greatest common divisor is 2. So this one is periodic; it's not aperiodic.
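Once you've listed the simple-cycle lengths, the aperiodicity check is just a gcd computation. A sketch in Python:

```python
from math import gcd
from functools import reduce

def is_aperiodic(cycle_lengths):
    # T is aperiodic iff the gcd of its simple-cycle lengths is 1
    return reduce(gcd, cycle_lengths) == 1

print(is_aperiodic([3, 2]))  # first example: lengths 3 and 2 -> True
print(is_aperiodic([2, 4]))  # second example: all even lengths -> False
```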

Okay? So when we're looking at the structure of these matrices, what turns out to be true is that this aperiodicity is what determines whether or not you get convergence of this process. In particular, there's a theorem, which you can get out of a variety of sources, but which basically can be derived from work in Markov chain theory and, more generally, in linear algebra.

Suppose that we have T strongly connected. Let me say a little about what strongly connected means. Strongly connected says that between every pair of individuals there exists a directed path: there exists a directed path from i to j for all i, j. Okay. So basically, we're not in a situation where some people could never end up getting information from other individuals; everybody can eventually hear from everybody else. Okay?
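Checking strong connectivity is a simple reachability computation on the directed graph in which i points to j whenever T_ij > 0. A sketch:

```python
def strongly_connected(T):
    # T is strongly connected iff every node can reach every other node
    n = len(T)

    def reachable_from(s):
        seen, stack = {s}, [s]
        while stack:
            i = stack.pop()
            for j in range(n):
                if T[i][j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    return all(len(reachable_from(i)) == n for i in range(n))

# The three-person example from the lecture: everyone eventually hears from everyone
T = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
print(strongly_connected(T))  # True
```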

If you don't have strong connectedness, the characterization here gets a little more complicated. If you're interested in that, I have a paper with Ben Golub in 2010 which gives the complete characterization of convergence for non-strongly connected networks. We'll just deal with the strongly connected ones, and that carries most of the intuition. Okay.

So what does the theorem say? The theorem says that you get convergence if and only if you have aperiodicity; aperiodicity is necessary and sufficient for convergence. It comes in separate parts: the first part is that aperiodicity gives you convergence. The second part tells us that when things are convergent, the limit of this matrix takes a special form: if we look at what T raised to the tth power looks like in the limit, every row converges to the same vector, with entries s1 through sn. So, in the limit, everybody ends up indirectly putting the same weights on other people's initial beliefs, where s is the unique left-hand-side eigenvector of T that has eigenvalue 1.

Okay, so that's a mouthful, but it says that this is in fact the same kind of eigenvector we were talking about when we discussed eigenvector centrality. It's an eigenvector with eigenvalue 1, and in this case it's going to be unique if we have strong connectedness and aperiodicity. The system is going to be convergent, with nicely defined weights s1 through sn, and in fact they will all be positive. So this is a very powerful set of

results here. It tells us that aperiodicity gives us convergence; moreover, convergence occurs to a very specific limit, and that limit is given by the left-hand-side unit eigenvector of the matrix. And that's where the two-fifths, two-fifths, two-fifths came from, and the other numbers that we found in those examples before.
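For the first example, we can see both claims numerically: raising T to a high power gives a matrix whose rows are all (approximately) the same vector s, and that s satisfies sT = s. A sketch:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

P = T
for _ in range(99):  # P = T^100
    P = mat_mul(P, T)

print([round(x, 6) for x in P[0]])  # every row approaches s = (2/5, 2/5, 1/5)

s = P[0]
sT = [sum(s[i] * T[i][j] for i in range(3)) for j in range(3)]
print([round(x, 6) for x in sT])    # sT is (approximately) s again: sT = s
```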

Okay. So, what we're going to look at next is how you prove this. It involves a bit of linear algebra and some theorems from that area, so you can skip it if you like; if you want to see the details, I'll explain how you can derive that kind of theorem. And then afterwards we'll begin to put this into perspective. So what we've got is convergence from aperiodicity, plus we know what the limit looks like, so we'll then talk in a little more detail about what this s vector actually means and where it comes from, look at some examples, and try to understand how we might use this model a bit.
