Hi. In this next lecture on networks, what we want to do is talk about the logic through which networks form. By that I mean we want to think of these nodes as being agents, in a way: these nodes make decisions about which other nodes to connect to, and as they make those decisions, that's going to result in a structure to the network. So we're going to see how that process works, and how we go from the micro-level actions of the nodes, which are making these connections, to macro-level properties of the network. The network structure is going to be something that emerges through these micro-level interactions. We're going to focus on three very simple ways in which networks might form. The first one is random connections, where each node randomly decides which other nodes to connect to. The second one is a small-worlds model, and that's going to work as follows. Each person has some friends that belong to a clique, people who are nearby, and then some friends that they connect to randomly. So we're going to start out by having people connected just to the people near them, and then assume that they rewire, in a way, and randomly connect to some people who are further away in social space, because this is what a lot of social networks look like. And the last thing we're going to do is look at something called a preferential attachment network. This has been used to describe the internet and the World Wide Web. The idea here is the following: you're more likely to connect to nodes that are more connected. That's certainly true on the World Wide Web: we're more likely to link to Google, or to major universities, because they're more connected, than we are to link to less connected things. And we'll see how that micro-level rule results in global properties of the network, structure in terms of measures that might not have been anticipated. Okay, so let's get started.

Random networks: here's what you do. You assume there are N nodes, and p is the probability that a node connects to another node. So you draw a bunch of nodes, like this, and then with some probability you draw connections between the nodes randomly, and you get some sort of random graph. Now, this has been studied; these are sometimes called Erdős-Rényi networks. They've been studied in great detail, and here's a really interesting result: it turns out you get a tipping point. For large N, meaning a large number of nodes, the network almost always becomes connected if p, the probability of being connected, is bigger than 1/(N-1). So here's our graph: on the vertical axis is the probability of being connected, which is one of the measures we care about, and here on the horizontal axis is where 1/(N-1) falls. What you get is a graph that looks like this: you get a tip. If p is less than 1/(N-1), for large N, odds are you're not connected; and once p gets bigger than 1/(N-1), odds are you are fully connected. So that's cool: a tipping point within networks.
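To see that tipping point in action, here's a small Python sketch (my own illustration, not part of the lecture; the 50-node size and the two values of p tried are arbitrary choices). It generates Erdős-Rényi graphs edge by edge and estimates how often they come out fully connected, once for a p well below 1/(N-1) and once for a p well above it:

```python
import random

def random_graph(n, p, rng):
    """Erdős-Rényi G(n, p): each possible edge appears independently with prob p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def is_connected(adj):
    """Breadth-first search from node 0; connected iff we reach every node."""
    seen = {0}
    frontier = [0]
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return len(seen) == len(adj)

def frac_connected(n, p, trials=200, seed=0):
    """Estimate the probability that G(n, p) is fully connected."""
    rng = random.Random(seed)
    hits = sum(is_connected(random_graph(n, p, rng)) for _ in range(trials))
    return hits / trials

n = 50
print(frac_connected(n, 0.2 / (n - 1)))  # p well below 1/(N-1)
print(frac_connected(n, 12 / (n - 1)))   # p well above 1/(N-1)
```

With the sparse p the estimate comes out near zero, and with the dense p it comes out near one: that's the tip in the graph described above.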

Now, these random graphs: you can prove a lot of nice results about them, but the thing is, that's not what real social networks look like. Remember that middle-school graph from James Moody: that doesn't look random. Here's the adult network graph from the Framingham study; that doesn't look random either. And here's an email network; that doesn't look random either. So what we'd like is a network model that looks more like what real networks look like, and this leads to the small-world graph idea, which is due to Duncan Watts, who's a network theorist, and Steve Strogatz. Here's the idea behind the small-world network. You've got a percentage of friends who are local, or near you, sort of clique friends, and then you've got other friends who are more random. Your local or clique friends would be the people you work with, the people you live near, that sort of thing. And your random friends are the people you went to summer camp with, or your old college roommate, things like that. So if you look at real-world social networks, they've got these cliques of people that interact, and then each person has a random friend someplace else, and those random friends also belong to cliques a little bit. So let's look at that network and see how it

works. We're going to open a NetLogo model here. We can set this thing up, and we're just going to put people in a circle. The thing to notice is that in the circle each person is connected to the people on either side, and there are 40 nodes. And what I can do is start rewiring people, one at a time.
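Here's a rough Python version of what that NetLogo model is doing (a sketch under my own assumptions, not the model's actual code; the 40 nodes and two neighbors per side mirror the demo, while the 20% rewiring probability is just a choice). It builds the ring, rewires some edges at random, and computes the two statistics the demo plots:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbors on each side of a ring."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rewire(adj, beta, rng):
    """Watts-Strogatz style: move one end of each edge to a random node with prob beta."""
    n = len(adj)
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    for i, j in edges:
        if rng.random() < beta:
            choices = [v for v in range(n) if v != i and v not in adj[i]]
            if choices:
                new_j = rng.choice(choices)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(new_j); adj[new_j].add(i)
    return adj

def clustering_coefficient(adj):
    """Average over nodes of the fraction of a node's neighbor pairs that are linked."""
    total = 0.0
    for i in adj:
        nbrs = list(adj[i])
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(1)
g = ring_lattice(40, 2)            # 40 nodes, two neighbors per side, like the demo
print(clustering_coefficient(g))   # 0.5 on the pure ring
print(average_path_length(g))      # ~5.38 on the pure ring
rewire(g, 0.2, rng)                # rewire roughly 20% of the edges
print(clustering_coefficient(g), average_path_length(g))
```

On the pure ring the clustering coefficient is exactly 0.5 and the average path length is about 5.38, matching the starting values in the demo below; after rewiring, both fall.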

So they get more and more randomly rewired nodes, and then what I can do is ask what's happening to the graph. Notice that on the graph you see the clustering coefficient, and over here you see average path length. On this graph up in the upper left, the green line is the clustering coefficient, and the red line shows average path length. So as I rewire, what you can see is that average path length is falling, but the clustering coefficient is also falling. Let's do this one more time. I'm going to sit here and rewire. Notice the clustering coefficient starts out at one half and the average path length is 5.4. Now as I start rewiring, the clustering coefficient falls, and that's because before, everybody was connected just to their neighbors around them, so there was lots of clustering. And the average path length is falling fast; it's now under 2.8, 2.7, and so on. So again, every time we run this, we see a decrease in the clustering coefficient and a decrease in the average path length

as we construct this small-world network. So what we get is: as people have more random friends, we get less clustering, and we also get a shorter average distance between people. So that's the small-world model. We'll come back to it a little bit in the next lecture, because the small-world network actually tells us a lot about social network systems, and it can help us explain the six-degrees-of-separation property that I talked about in the first lecture. Now, let's go to our third way that nodes can connect to one another, and this is called preferential attachment. In preferential attachment, nodes arrive on the scene, and then they get to decide: who do I connect to? So think of yourself moving into a new city, or think of yourself creating a webpage. Your webpage is your node, and you've got to decide: what other webpages do I connect to? We're going to assume that the probability of connecting to an existing node is proportional to the number of connections it already has. So, here's an example.
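That rule fits in a few lines of Python (again a sketch, not the lecture's model; the 500-node size and the one-edge-per-newcomer rule are my simplifying assumptions). The trick is to keep a list holding one entry per edge endpoint, so that drawing uniformly from the list picks a node with probability proportional to its degree:

```python
import random
from collections import Counter

def preferential_attachment(n, rng):
    """Grow a network one node at a time; each newcomer attaches a single edge
    to an existing node chosen with probability proportional to its degree."""
    # Start from a single edge between nodes 0 and 1.
    endpoints = [0, 1]                   # each node appears once per edge it touches
    for new_node in range(2, n):
        target = rng.choice(endpoints)   # degree-proportional choice
        endpoints.extend([new_node, target])
    return Counter(endpoints)            # maps node -> degree

rng = random.Random(0)
degrees = preferential_attachment(500, rng)
counts = Counter(degrees.values())       # maps degree -> number of nodes with it
print(counts[1], max(degrees.values()))
```

Counting the resulting degrees shows the skew the demo displays: most nodes end up with degree one, while a handful become heavily connected hubs.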

Here's a network, and some new node comes in, and it's trying to figure out the likelihood with which it should connect to each of these nodes. Well, this node only has one connection, one edge; this one has four. So what's going to happen is, when the new node comes in, it's going to be four times as likely to connect here as over there. So the probability of connecting to a node is proportional to the number of edges it already has. Let's look at a NetLogo model of preferential attachment. I'm going to set this up, and I'm going to let it go, adding one node at a time for a second. What you're seeing here on these different graphs: the top graph is showing the degree distribution. So over here are the nodes that have one edge connected to them, and over here are the more connected nodes; one of them has a degree of six. And the plot below is going to show that degree distribution on a log-log plot. So let's let this thing run. This is always sort of a fun video, because you can see these nodes being added and drawing new connections. Let me stop it for a second, because I want you to notice two things.

First, there are some nodes that are way more connected than others, and that's captured in this degree distribution: there are a lot of nodes that have only one edge, and then there's a handful of nodes, including one that has 23. Now, I can resize the nodes by their number of connections, and that gives you another way to see that this degree distribution is always skewed: there are some nodes with very low degree and some with a lot. If I let this run even further, what you see is that the degree distribution stays skewed; some nodes have an awful lot of connections and others have very, very few. So this preferential attachment rule results in this very skewed degree distribution, which is interesting. Now, we saw that happen once, but just to show that it happens every time, let's run it again with the resize-nodes button on. Again, you see the exact same sort of degree distribution showing up, where some nodes are highly connected and others are less connected, but we're getting a very different network. So one of the interesting things is that the distribution's properties stay the same even though the exact network you get is going to be a little bit different depending on how the process played out. Well, guess what? That means it's path dependent. So the exact network

you get is going to be path dependent. But the equilibrium distribution of degrees is not going to be path dependent: you're almost always going to get the same degree distribution, while the particular network you get depends a lot on the particular history. So we've got path-dependent outcomes, but we don't have a path-dependent equilibrium in terms of the degree distribution. And we know that just from this process; in fact, it took physicists and network theorists to work out that you're always going to get this degree distribution. Here's the way to think of it. What you'll get is a long tail in the degree distribution. What that means is, there's a whole bunch of nodes that have a degree of one, and then there's a handful of nodes that have a large degree. So this is the probability that a node has degree one, and here's the probability of a large degree. And this is what that preferential attachment rule gives you: this sort of long tail. This is something that's called a

long-tailed distribution, and that distribution is always going to occur when you have a preferential attachment rule. Now, which nodes end up with large degree and which ones don't ends up being somewhat path dependent, but the distribution itself is not: you get these long tails. And what that means is there's a big skew. Look at the World Wide Web: some sites have lots of links, and other sites have very, very few links. Alright. So that's some sense of the logic of networks. If you've got random connections, then you get this interesting tipping-point phenomenon. If you have a small-world network, then as people have more random friends, you get a lot less clustering, but you get shorter social distance between people. And finally, from the preferential attachment rule, what we end up getting is this long-tailed degree distribution, where most nodes have very, very low degree but a handful of nodes have a really, really high degree. Alright? Thanks.