0:17

So what kind of data structures are we going to need, first of all?

Our goal is to find the shortest path from s to every other vertex.

So the first observation is that there's going to be

a shortest-paths tree solution.

Well, if no two paths have the same length,

then certainly there's going to be such a tree solution.

And there's a number of ways to convince yourself that there's going to be a tree.

If you've got two paths to the same vertex, you can delete the last edge on

one of them, and keep going until all that's left is a tree, for example.

So what we want to do is compute a tree.

Now, we've done that in several algorithms before and the reasonable way to

represent the shortest-paths tree is to use two vertex-indexed arrays.

1:11

The first one, distTo[v], gives the length of the shortest path from s to that vertex.

So in this case, we have the shortest-paths tree and

we'll keep the length of the shortest path from the source 0 to each vertex.

So, 0-2, the length of that shortest path is 0.26.

So 0-2-7 is 0.60 and like that,

because you go from 0 to 2, 0.26.

And from 2 to 7, 0.34, you get 0.60 and so forth.

And then the other thing, as we've done before, is to use a parent-link

representation, where edgeTo[v] is going to be the last edge that takes us to v.

And by following back through that array,

we can get the path as we've done several times before.

If we want the path from 0 to 6, we go to edgeTo[6], and

it says, well, the last edge we take is 3 to 6.

Then we go to 3 and see that the way we got to 3 is from 7.

We go to 7 and see that the way we got to 7 is from 2.

We go to 2 and see that the way we got to 2 is from 0.

And if we put all those edges on a stack, then we can return them as

an Iterable to the client, and that gives the edges on the shortest path.

So that's the data structures that we're going to use for shortest paths.
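As a concrete sketch of those data structures, here's what the two vertex-indexed arrays might hold for the fragment just described (the course's code is in Java; this is a Python sketch, and the 8-vertex graph and the (from, to, weight) edge tuples are assumptions for illustration):

```python
# Shortest-paths tree from source s = 0, kept in two vertex-indexed arrays:
#   distTo[v] = length of the shortest known path from s to v
#   edgeTo[v] = last edge (u, v, weight) on that path, None for the source
INF = float('inf')

V = 8
distTo = [INF] * V
edgeTo = [None] * V

distTo[0] = 0.0                               # the source is at distance 0
distTo[2] = 0.26; edgeTo[2] = (0, 2, 0.26)    # edge 0->2 has weight 0.26
distTo[7] = 0.60; edgeTo[7] = (2, 7, 0.34)    # 0->2->7: 0.26 + 0.34 = 0.60
```

Every vertex other than the source that has a finite distance also has a parent edge, which is what makes the structure a tree.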

[COUGH] This is actually the code that does what I just said.

For the client query distTo(),

it just uses v as

the index into the distTo array and returns the distance.

And if the client asks for the path, then we make a stack.

2:54

And then we use the variable e, which is a directed edge and we start at edgeTo[v].

And as long as it's not null, we push it onto the path.

And then go to the edgeTo entry for the vertex that we came from.

And that gives the vertex that we came from to get to that vertex, and we keep

going until we run out of edges, which happens at the source, and

then we return the path.

And that's the Iterable that gives the client the path.

3:27

So that's the implementation of the two query methods.
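The two query methods just described might look like this in Python (a sketch of the idea; the lecture's Java version uses DirectedEdge objects and a Stack, while here edges are plain (from, to, weight) tuples, which is an assumption):

```python
def dist_to(distTo, v):
    # client query: length of the shortest path from the source to v
    return distTo[v]

def path_to(edgeTo, v):
    # walk back through edgeTo, pushing each edge onto a stack,
    # until we run out of edges -- which happens at the source
    path = []
    e = edgeTo[v]
    while e is not None:
        path.append(e)
        e = edgeTo[e[0]]      # continue from the vertex this edge came from
    path.reverse()            # stack order: edges from the source outward
    return path               # the edges on the shortest path, in order

# made-up fragment: 0->2 (0.26), then 2->7 (0.34)
edgeTo = [None, None, (0, 2, 0.26), None, None, None, None, (2, 7, 0.34)]
distTo = [0.0, None, 0.26, None, None, None, None, 0.60]
```

Calling path_to(edgeTo, 7) on this fragment walks back 7, 2, 0 and returns the two edges in source-to-destination order.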

So now, what we're going to want to talk about for

the rest of the lecture is the algorithms that build these data structures.

Now all of the algorithms that we look at are based on a concept

called relaxation, edge relaxation.

So now, recall our data structures, all right?

So we're going to talk about relaxing an edge from v to w.

And we have an example here from v to w.

4:01

And at the point that we're going to relax this edge,

we'll [COUGH] have our data structures, edgeTo and distTo, in some intermediate state.

We haven't seen all edges,

we haven't seen all paths in the intermediate part of some algorithm.

But we'll try to make sure that distTo[v], for every vertex,

is the length of the shortest known path to that vertex.

And that's going to be the same for w.

So the edges in edgeTo are the ones through which we know paths from

s to each vertex.

So distTo[v] and distTo[w] will contain the lengths of the shortest known paths.

And also edgeTo[w] is the last edge in the shortest known path from s to w.

4:54

And same way with edgeTo[v] of course.

Now, so to relax along the edge from v to w, essentially that means,

let's update the data structures to take that edge into account.

And what happens in this case is that the edge gives a better way to get to w.

So that's what relaxing is.

That edge gives us a new shortest path.

So we want to include it in the data structures.

So since it has a shorter path, we have to update both distTo[w] and edgeTo[w].

That is we have a new way to get to w, because we have to throw away that old

edge that came to w and we have a new shorter distance.

Instead of the 7.2 that we had the old way, we have 4.4 the new way.

So edge relaxation is a very natural operation.

When we consider a new edge, does it give a new shortest path to that vertex or not?

If it doesn't, we ignore it.

If it does, we update the data structures to include that edge and

forget about the old edge that took us to that vertex.

That's edge relaxation.

And this is the easy implementation of edge relaxation in code.

So to relax an edge, we pull out its from() and to() vertices as v and

w, according to our standard idiom.

And then we just see if the distance to w that was the shortest

path before is bigger than the distance to v plus

the weight of the edge that would take us from v to w.

[COUGH] If it's bigger, that means we found a new path.

If it's less than or equal, we ignore it.

And if we found a new path, we have to update the distance to w to be the new

distance, distTo[v] plus the weight of the edge v to w.

And then we have to update edgeTo[w] and throw away the old version.

And say that our new edge from v to w is the best path to w as far as we know.

So that's easy code for edge relaxation.
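That relaxation code might be sketched like this in Python (edges as (v, w, weight) tuples, and the particular vertex numbers and weights below are assumptions, not the lecture's actual example):

```python
def relax(e, distTo, edgeTo):
    v, w, weight = e                    # the edge's from and to vertices, and its weight
    if distTo[w] > distTo[v] + weight:
        # found a shorter way to get to w: update both arrays,
        # throwing away the old edge that came to w
        distTo[w] = distTo[v] + weight
        edgeTo[w] = e
    # otherwise the edge is ignored -- it gives no shorter path

# mirroring the slide's numbers: old distTo[w] = 7.2, new path of length 4.4
distTo = [0.0, 3.1, 7.2]                # v = 1, w = 2 (made-up vertex numbers)
edgeTo = [None, None, None]
relax((1, 2, 1.3), distTo, edgeTo)      # 3.1 + 1.3 = 4.4, beats 7.2
```

After the call, both distTo[2] and edgeTo[2] reflect the new, shorter path; a subsequent relax() of a longer edge leaves them unchanged.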

7:00

Now, [COUGH] we're going to use edge relaxation

in a really fundamental way to compute shortest paths.

But there's one other important idea,

which is called optimality conditions and

this is a way to know that we have shortest paths.

We have computed all the shortest paths.

The shortest path optimality conditions are embodied in this proposition.

So we have an edge weighted digraph and we have the distTo array.

Let's just talk about the distances and the paths go with the distances.

But the key point is that the distTo array represents shortest path

distances from the given source s if and only if these two conditions hold.

So the first condition is that, for every vertex v,

distTo[v] is the length of some path from s to that vertex.

And our algorithms will always ensure that.

And then the second thing is for every edge vw,

we have this condition that the distTo[w] that

we [COUGH] have stored is less than or

equal to distTo[v] + the weight of the edge from v to w.

That's the shortest path optimality conditions.

So sometimes they'll be equal.

For example, if vw is the last edge on the shortest path.

And sometimes distTo[v] plus the weight will be strictly greater, but it'll never be smaller.

That is, there will never be a way to get to w that's shorter than the one we've found.

That's the shortest path optimality condition.
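The two conditions translate directly into a checker, sketched here in Python (the edge representation and the tiny three-vertex graph are assumptions; the sketch checks condition 2 directly and takes condition 1 for granted, since the algorithms always maintain it):

```python
def satisfies_optimality(edges, distTo, s):
    # Condition 1 (each distTo[v] is the length of some path from s)
    # is assumed to be maintained by the algorithm; we only check
    # that the source is at distance 0 ...
    if distTo[s] != 0.0:
        return False
    # ... and condition 2: for every edge v->w,
    # distTo[w] <= distTo[v] + weight, i.e. no edge can still be relaxed.
    for v, w, weight in edges:
        if distTo[w] > distTo[v] + weight + 1e-12:   # tolerance for floats
            return False
    return True

# a made-up three-vertex example
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
good = [0.0, 1.0, 3.0]   # true shortest-path distances from source 0
bad  = [0.0, 1.0, 5.0]   # misses the shorter path 0->1->2
```

The `bad` array fails because the edge 1→2 can still be relaxed: 5.0 > 1.0 + 2.0.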

And again, just a quick proof.

Although, the best way to understand proofs is to read them slowly

not listen to them spoken quickly but I'll quickly outline them.

9:10

So we want to prove that if this is true, then we have shortest path.

So to do that, we assume the contrary.

Suppose that the distance to w is bigger than the distance to v + e.weight for

some edge.

Then taking the shortest path to v and following that edge gives a path to w

that's shorter than distTo[w], and

that is a contradiction to the idea that we have shortest paths.

So there can't be any such edge where that condition holds.

So that's necessary.

And then sufficient.

Suppose that we have the shortest path from s to w.

9:58

Then [COUGH] we're assuming these conditions all hold.

Then for every edge on the path, this has to all hold.

So starting at the end, the last edge goes from v(k-1) to v(k).

So the distance to v(k) is less than or

equal to the distance to v(k-1) + the weight of the kth edge.

10:29

And so just continuing down that way.

Then, continuing all the way down to the source and the first edge:

the distance to the source + the weight of the first edge is greater than or equal to

the distance to the first vertex after the source, and all those conditions hold.

And then what we can do is just add up all those weights and simplify it.

And then that shows that the distance to w is

less than or equal to the length of the shortest path.

And so it's gotta be the weight of the shortest path,

because it's the weight of some path and it's got that weight.

It's gotta be the weight of the shortest path.

So if those conditions hold, we have shortest paths.
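Written out for a shortest path s = v₀ → v₁ → ⋯ → vₖ = w, the conditions telescope:

```latex
\begin{aligned}
\texttt{distTo}[v_k] &\le \texttt{distTo}[v_{k-1}] + \mathrm{weight}(v_{k-1} \to v_k)\\
\texttt{distTo}[v_{k-1}] &\le \texttt{distTo}[v_{k-2}] + \mathrm{weight}(v_{k-2} \to v_{k-1})\\
&\;\;\vdots\\
\texttt{distTo}[v_1] &\le \texttt{distTo}[v_0] + \mathrm{weight}(v_0 \to v_1)
\end{aligned}
```

Adding these inequalities, the intermediate distTo terms cancel, and with distTo[v₀] = distTo[s] = 0 we get distTo[w] ≤ Σ weights, the length of the shortest path; since distTo[w] is also the length of some path, the two must be equal.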

So here's the point of this proof.

It's a slightly complicated proof, but it's not too bad,

and it's quite natural: all we have to know,

11:29

to check that we have computed shortest paths, is that these optimality conditions hold.

To prove that an algorithm computes shortest paths,

we just have to prove that it ends up with the optimality conditions in force.

And that's what we'll be doing with the optimality conditions.

Really, it's just saying there's no edge there that we missed.

[COUGH] Okay, so with that idea, in fact,

there is a very easy to state generic algorithm that is

going to compute shortest paths and it's very simple.

We start with the distance to the source being 0 and

the distance to every other vertex infinity.

And all we do is repeat until the optimality conditions are satisfied,

relax any edge.

Let's go ahead and relax edges until the optimality conditions are satisfied.

So that's a very general algorithm.

We don't say how to decide which edge to relax or

how to know the optimality conditions are satisfied.

But still, it's quite an amazingly simple generic algorithm.
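A sketch of that generic algorithm in Python, where "relax any edge" is implemented as repeated sweeps over all edges (one valid choice; this particular choice is essentially the Bellman-Ford idea mentioned at the end of the lecture, and the edge-tuple representation is an assumption):

```python
INF = float('inf')

def generic_shortest_paths(V, edges, s):
    # initialize: distTo[s] = 0, every other vertex at infinity
    distTo = [INF] * V
    edgeTo = [None] * V
    distTo[s] = 0.0
    # repeat until the optimality conditions are satisfied: relax any edge
    changed = True
    while changed:
        changed = False
        for e in edges:
            v, w, weight = e
            if distTo[w] > distTo[v] + weight:   # this edge can be relaxed
                distTo[w] = distTo[v] + weight
                edgeTo[w] = e
                changed = True
    # when no edge can be relaxed, the optimality conditions hold
    return distTo, edgeTo
```

With nonnegative weights this terminates, because every successful relaxation strictly decreases some distTo entry and only finitely many path lengths exist, exactly the argument made next.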

12:44

So how can we show that it computes the SPT?

Well, it's pretty simple.

Throughout the algorithm,

we're making sure that because the way that we do relaxed edges,

the distance to v is the length of a simple path from s to v.

And the edgeTo[v] is the last edge on that path.

That's what relaxation does for any vertex that we touch.

13:19

And we've assumed that there's a way to get to every vertex, and

there's only a finite number of simple paths.

So each distTo value can decrease, at most, a finite number of times.

So the algorithm is going to terminate, it's as simple as that.

13:39

Again, this is a little bit of a mind-blowing concept.

But we'll leave that for more careful study and just for

now realize that all we have to do is relax along edges.

And what we're going to do now is look at different ways of

figuring out how to choose which edge to relax.

The first algorithm that we'll look at is a classic algorithm known as Dijkstra's

algorithm, one of the most famous of all algorithms.

14:16

Then we'll look at an algorithm that

works even with negative weights as long as there aren't any directed cycles.

And then we'll look at an even older algorithm than Dijkstra,

the Bellman-Ford algorithm, that can solve the shortest path problem

in graphs with negative weights as long as there's no negative cycles.