So how do we quantify the importance of nodes?

One obvious choice is to use degree. For example, for a node with two edges coming in and three edges going out, we say the total degree is five.

The in-degree is two, the out-degree three. Dunbar's number is used by sociologists as an estimate of the largest number of friends an average person can keep in a social network, usually quoted at around 150.
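As a minimal sketch of these degree computations, here is the in-, out-, and total degree of a node in NumPy; the four-node directed graph below is made up for illustration, chosen so that node 0 matches the two-in, three-out example:

```python
import numpy as np

# Hypothetical directed graph on 4 nodes, encoded as an adjacency
# matrix: A[i, j] = 1 means there is an edge from node i to node j.
A = np.array([
    [0, 1, 1, 1],   # node 0 points to nodes 1, 2, 3
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

out_degree = A.sum(axis=1)      # edges going out of each node (row sums)
in_degree = A.sum(axis=0)       # edges coming into each node (column sums)
total_degree = in_degree + out_degree

# node 0: in-degree 2, out-degree 3, total degree 5
print(in_degree[0], out_degree[0], total_degree[0])
```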

But we also know that degree can be a misleading number for quantifying node importance. For example, we may want to say that a more important node pointing to you makes you more important.

And that is exactly what we did in lecture three with Google's ranking of webpages.

There we said: construct the matrix G, which is the connectivity matrix H plus rank-one matrices, and view that matrix as a model of a random walk on the graph. Then look at the dominant eigenvector: the eigenvector associated with the largest eigenvalue, in this case 1, of this matrix G. We then rank web pages based on the entries of this vector. That is what we did with PageRank back then.

In general, this is the idea of eigenvector centrality. Centrality here is just a fancier name for the importance of nodes. The eigenvector idea says: take the dominant eigenvector, not just the dominant eigenvalue, of some meaningful matrix, such as the adjacency matrix A that we just defined. The eigenvector says a lot more than the eigenvalue alone. So why would you want to do that?
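A minimal sketch of eigenvector centrality via power iteration, in the same spirit as the PageRank computation; the small undirected graph below is a made-up example, not from the lecture:

```python
import numpy as np

# Hypothetical undirected graph as a symmetric adjacency matrix.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

# Power iteration: repeatedly multiplying by A and renormalizing
# converges to the dominant eigenvector for a connected graph.
x = np.ones(A.shape[0])
for _ in range(100):
    x = A @ x
    x = x / np.linalg.norm(x)

# Entries of the dominant eigenvector are the centrality scores:
# a larger entry means a more important (more central) node.
ranking = np.argsort(-x)
print(ranking)   # node 0, with the most neighbors, ranks first here
```

Note that node 0 scores highest not only because it has the largest degree, but because the eigenvector recursively weights neighbors by their own scores.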

One possible motivation comes from the following linear dynamical system interpretation. As in the homework problem, we can often model many useful quantities in the dynamics of functionality on a graph by looking at a vector x, with one entry per node, evolving over discrete time indexed by t: multiply an initialization vector x(0) on the left by the adjacency matrix A applied t times, giving x(t) = A^t x(0). Now how do we understand this expression?

We know that any given vector can be decomposed, that is, represented as a weighted sum of the eigenvectors of this matrix A. Call those eigenvectors v_i and the corresponding eigenvalues, which are scalars, lambda_i. So we can represent any initialization vector as a linear combination, with weights c_i, of these eigenvectors: x(0) = sum_i c_i v_i. Now substitute this expression back into the evolution, and we see the following expression.

A, multiplied on the left t times, acts on the summation of the c_i v_i vectors: x(t) = A^t sum_i c_i v_i. Again, the c_i are the weights providing the linear combination of the v_i that gives this vector. Because matrix multiplication is a linear operation, we can move it inside the summation: x(t) = sum_i c_i A^t v_i.

Now when A multiplies an eigenvector of this matrix on the left, you just get lambda_i v_i, and here that has been done t times. Therefore each term is just c_i times lambda_i raised to the power of t, times the vector v_i. In other words, we have expressed the time-t vector x as a linear combination of the eigenvectors of this matrix A: x(t) = sum_i c_i lambda_i^t v_i, where the weights are c_i times lambda_i to the power of t. So as time goes on.
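The derivation above can be checked numerically. A minimal sketch, using a made-up symmetric adjacency matrix (symmetric so that eigh yields an orthonormal eigenbasis), comparing direct evolution A^t x(0) against the eigenvector expansion sum_i c_i lambda_i^t v_i:

```python
import numpy as np

# Hypothetical symmetric adjacency matrix: a star graph with
# center node 0 and two leaf nodes.
A = np.array([
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
], dtype=float)

x0 = np.array([1.0, 2.0, 3.0])   # arbitrary initialization vector
t = 5

# Direct evolution: x(t) = A^t x(0)
x_t = np.linalg.matrix_power(A, t) @ x0

# Eigendecomposition route: columns of V are the eigenvectors v_i,
# lam holds the eigenvalues lambda_i.
lam, V = np.linalg.eigh(A)
c = V.T @ x0                     # weights c_i, since x(0) = sum_i c_i v_i
x_t_eig = V @ (c * lam**t)       # sum_i c_i lambda_i^t v_i

print(np.allclose(x_t, x_t_eig))   # True: both routes agree
```

As t grows, the term with the largest |lambda_i| dominates this sum, which is why the state vector aligns with the dominant eigenvector.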