0:00

So, here in Lecture 3.2, we're going to continue working on Computational Boolean

Algebra. And we're going to continue our

development of the Binary Decision Diagram, or BDD.

In the last lecture, we showed that a decision diagram is a way of representing

something like a truth table, but in that raw form it is just way too big.

And one of the first things that we did was to decide that we should order the

variables in an attempt to make the diagrams all the same for a given piece

of Boolean logic. We're going to continue constraining and

constricting the decision diagrams. We're going to introduce another big idea

in this lecture which is that, we're going to reduce the diagrams.

We're going to remove certain forms of redundancy from the diagrams.

The payoff is a really wonderful property, which is that if you give me a

Boolean equation, and you constrain the order in which the

variables appear in the BDD, I always get the same diagram.

I always get the same graph. It turns out that a data

structure made canonical in this way becomes an amazingly powerful tool for actually

manipulating Boolean objects and answering really interesting questions.

So let's see how that works. We are back again, looking at binary

decision diagrams, and by way of review, here were our first two big ideas.

The first idea was: hey, let's use a decision diagram.

And the second big idea was: hey, let's impose a global variable ordering, so that

every path from the root to a leaf visits the variables in the same order. And

what we got was the fact that the BDDs we create are still

not canonical. We can have two different BDDs

representing the same function. So I need another big idea.

And this is the next really important idea, the idea of reduction.

This is the idea that, you know, there are redundancies in the graph, so we can remove

unnecessary nodes and edges. There are things that are not needed: we

can get rid of some nodes, we can get rid of some edges, and we can,

if you will, tighten up the graph. Make it smaller.

So, at the end of the previous lecture, we took the x2 node and its children out, and

we replaced them. That was an example of this.

Why are we doing this? Well, we're doing this for two

extremely useful reasons. The first is graph size.

I would like the resulting data structure to be as small as possible.

Well, that makes sense, right? Why use more computer memory if I don't

need it? But the second is that I really want a

canonical form: for the same Boolean function, if you give me the same variable

order, I want there to be exactly

one graph that represents this function. And that's going to turn out to be a

tremendously powerful idea, if we can make that happen.

So, we're going to use some reduction rules.

And a reduction rule just identifies things in a decision diagram that are

inefficient, that can be replaced with fewer nodes or fewer edges.

So the first reduction rule is really simple.

And it says, we should just keep one copy of each constant leaf.

And so that's the rule that says, merge equivalent leaf nodes.

And all that really says is, I really only need one 0 node at the bottom of this

binary decision diagram. And I only need one 1 node at the bottom

of this decision diagram. And I can really get rid of all these

other separate copies. And I can just take all of those edges in

the very large initial diagram. And wherever a variable

node has an edge into a constant node, I can redirect that edge into a single copy of

the 0 and a single copy of the 1. And if we were to take all of those edges

that come out of the four x3 nodes at the bottom of the unreduced version of the

binary decision diagram, and simply redirect them into a single 0

node and a single 1 node, that's what we get.

And so, oh look, that's a lot better. But like I said, if you have a function of

100 variables, you had 2 to the 100 of these things down at the bottom.

Now, you've got two. That's good.
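To make Rule 1 concrete, here is a minimal Python sketch. The Leaf and Node classes and the merge_leaves helper are my own toy representation for illustration, not the course's code: merging the leaves just means redirecting every leaf edge to one shared 0 object and one shared 1 object.

```python
# Toy BDD nodes for illustration only; the names and layout here are
# my own assumptions, not the lecture's notation.
class Leaf:
    def __init__(self, value):
        self.value = value          # the constant 0 or 1

class Node:
    def __init__(self, var, lo, hi):
        self.var = var              # variable label, e.g. 'x3'
        self.lo = lo                # child followed when var = 0
        self.hi = hi                # child followed when var = 1

# Rule 1: keep exactly one copy of each constant leaf.
ZERO, ONE = Leaf(0), Leaf(1)

def merge_leaves(node):
    """Redirect every edge that ends in a private leaf copy so it
    ends in the single shared ZERO or ONE instead."""
    if isinstance(node, Leaf):
        return ZERO if node.value == 0 else ONE
    node.lo = merge_leaves(node.lo)
    node.hi = merge_leaves(node.hi)
    return node
```

After this pass, a diagram that started with an exponential pile of separate leaf copies has exactly two leaves, just as in the lecture's example.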

But, you know, we can still do a whole lot better, because of the

stuff up at the top, as I'm circling here with this triangle of

the top part of the BDD. Well, you know, there's still one x1 node,

two x2 nodes, and four x3 nodes in that triangle.

I haven't really made things better there, but at least for the constant nodes,

this is just less silly. This is less inefficient.

This makes much more sense. But let's look at a more interesting

reduction rule. Something that really, in a more

powerful way, reduces the insides of these decision diagrams.

So here's the second reduction rule, and this has a more interesting name.

It says, merge isomorphic nodes. Okay, what are isomorphic nodes?

Isomorphic nodes are two nodes that have two important characteristics.

They have the same variable, and they have identical children.

Now, the same variable, that's a kind of an easy one.

You're never going to merge an x and a y, or an x and a q.

Okay, you can merge an x and an x. But identical children means that they are

exactly the same physical children. They are not just children with the same

label. And so, we have an example up here of

isomorphic nodes. The x nodes are isomorphic.

Alright. Now, the x nodes have the same variable,

okay. They are both x nodes.

And the x nodes also have identical children.

5:53

And by that I mean their low pointers, which point to their 0 children, both

go to a y node, and their high pointers, which are followed when the value is set to 1,

both go to the z node. And they don't go to any old random y

node or z node; they both go to the same y node and the same z node.

The exact same nodes. And so what happens up here is that I

don't need both of them. I can get rid of one of them.

And so I can simply replace them with one, any one, it doesn't matter.

And I can keep the children. Right?

As I'm just drawing here, right. So there's an x node.

Its low child goes to y, and its high child goes to z, and then y and z are

unaffected, but there are two pointers that, in this case, go into the x

nodes. One of them goes into the left x node, and

one of them goes into the right x node. And what happens now is that you simply

redirect them both to go into the single x node.

So whatever was happening above the x node in this BDD, right?

Whatever traversed some edge with a variable decision to get to x,

those pointers now just point to the single copy of x.

And there's a little gray box over at the side here.

That just gives the recipe. You remove the redundant node, which in

this case is the extra x node. You redirect all the edges that went into

the redundant node into the one copy that you kept.

So in this case, the edges into the right x node are now going into the left x node as

well. That is an example of merging isomorphic

nodes to reduce the complexity of the BDD. So let us apply Rule 2 to our example.

And if we stare at it for a little while, the big thing that we have to look for is,

what are some nodes that have the same variable label

and identical children? So, you know, there's just no other node

than x1, so it can't be x1. And if we look at the x2's, they don't

have identical children. But if we stare at the x3's for a little

while, what we will discover is that those folks,

the rightmost x3's, are all isomorphic.

They all have 0 as the low child, and 1 as the high child.

And so I only need one of them. I don't need all of them.

So I can redraw this BDD. It's got an x1,

okay, at the top. And then, at the next rank, it's

got two x2 nodes, right? And the low child goes to the left x2

node. And the high child goes to the right x2

node. And then on the right side, there is a

single x3 node, right? And the high child of x2 and the

low child of x2 both go to that single x3 node.

I'm going to put little stars by them because those are new edges.

On the left-hand side, the x2 node's low child also goes to the x3 node.

And down at the bottom, I have a single 0 constant and a single 1 constant.

The x3 child on the left, both of its children low and high, go to the 0 node.

The x3 node on the right, the low child goes to the 0 node, but the high child

goes to the 1 node. And the x2 node on the left, right, its

high child goes to x3. I'm going to put a little star by that,

all right? So, those are all of the things that I

redid, right, as a consequence of killing two of the

three x3 nodes and replacing them with a single x3 node.

And one of the things you can immediately see is there's less to this graph.

There's just less of it, it's a lot less complicated, that's a good thing.
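Here is Rule 2 in the same spirit, as a hedged Python sketch (my own toy code, not the course's): walk the diagram bottom-up and keep a table keyed by the variable plus the identity of the two children, so two nodes merge exactly when they test the same variable and point at the identical child objects.

```python
# Toy BDD nodes again; illustration only, my own notation.
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, var, lo, hi):
        self.var, self.lo, self.hi = var, lo, hi

def merge_isomorphic(node, table=None):
    """Rule 2: merge nodes with the same variable AND the identical
    child objects (not merely children with the same label)."""
    if table is None:
        table = {}
    if isinstance(node, Leaf):
        return node
    # Reduce the children first, so identity comparisons below see
    # the already-merged copies.
    node.lo = merge_isomorphic(node.lo, table)
    node.hi = merge_isomorphic(node.hi, table)
    key = (node.var, id(node.lo), id(node.hi))
    # Keep one copy per key; every later isomorphic node redirects to it.
    return table.setdefault(key, node)
```

Note the use of `id(...)` in the key: that is what enforces "exactly the same physical children" rather than "children with the same label."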

You know, simpler's gotta be better. But wait, there's more: there's another

rule that we can use to eliminate redundant nodes.

And this is the rule called eliminate redundant tests, and this is another one of

the ones that feels kind of simple, when you look at it.

A test in this case means a variable node, right?

So this is: if x equals 1, what do I do? If x equals 0, what do I do?

Redundant tests are the circumstance where a variable has both of its children going

to the same node. And that's just dumb, right?

In the diagram above, I don't need to know what x is.

If x is 0, I go ask the y node what to do. If x is 1, I go ask the y node what to do.

That's not necessary. Right?

And so I can kill the x node. I can keep the y node.

Right? And again, when I look at, in this

case, the edges that were going into the x node, however many of them there are,

I simply redirect them to go into the y node.

Instead, I bypass the x node. The x node goes away.

I do not care what value it takes. It does not affect the output of my

function and so I get rid of it. And so, again, there is a little gray box

at the upper right that gives you a little formula. It says: remove the redundant node;

redirect all the edges that went into the redundant node, which was x in this case, into the

child, y in this case, of the removed node.

So that's the recipe, and it simplifies things.
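Rule 3 is even shorter to sketch (again my own toy Python, not the course's code): after reducing the children, any node whose two edges reach the identical child is bypassed by returning that child, which is exactly the remove-and-redirect recipe in the gray box.

```python
# Same toy representation as before; illustration only.
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, var, lo, hi):
        self.var, self.lo, self.hi = var, lo, hi

def eliminate_redundant_tests(node):
    """Rule 3: a variable node whose low and high edges reach the
    identical child decides nothing, so remove it and hand every
    incoming edge directly to that child."""
    if isinstance(node, Leaf):
        return node
    node.lo = eliminate_redundant_tests(node.lo)
    node.hi = eliminate_redundant_tests(node.hi)
    if node.lo is node.hi:
        return node.lo        # bypass: callers now point at the child
    return node
```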

So, if we go back to our evolving BDD and we apply Rule 3, what I have drawn on

the left is the same example that I did by hand a couple of slides ago.

x1 is the top-rank node; its low and high children are the two x2 nodes, respectively.

Right? The next rank are two x3's, right?

The x2 on the left has a low child that goes to x3.

The high child is the right x3. The x2 on the right, both of the children,

go to the right x3. The x3 on the left, both of its children go to 0.

And the x3 on the right, low goes to 0, high goes to 1.

We can immediately look at this and we can see that the x3 on the left is redundant

because both of its children go to the 0 node.

And the x2 on the right is redundant because both if its children go to the x3

node. And so, I can kill both of them.

I can just kill them both and I can redirect the edges.

I'm going to put little stars by them. I can redirect the edges that go into

those nodes that I kill. I can simply send them to their children.

Right? And so in the right-hand diagram, I'm

going to put a little star by each of these edges that were redirected.

And we have a vastly simpler BDD. There's an x1 node at the top, and there's

a single x2 node at the second rank. X1's low child is this x2 node.

There's a single x3 node at the third rank.

X1's high child is that x3 node. X2's high child is that x3 node.

And then there are constants at the bottom.

X2's low child is the 0. X3's low child is the 0.

X3's high child is the 1. Wow, this is way less BDD than I started

with. This is way less graph than I started

with. This is way fewer nodes and way fewer

edges. Now, you might ask:

how does one apply these rules? And for now, I'm just going to say:

iteratively. When you can't find another graph match,

the graph is reduced. Is this how programs really do it?

No. Absolutely not.

Not only no, no with two exclamation points.

Look, if I give you a function with a hundred variables, the way I just showed

you, you have to build the decision diagram with 2 to the 100, you know, nodes

at the bottom. And then you have to apply this reduction

stuff, that's crazy. There's no way you could do that.

There's actually a much smarter set of ways, and we'll do a very high level

overview of what those ways are. But for now, for some homework

problems, and for sort of getting intellectually on top of this stuff:

if I give you a function with three or four variables and I ask you to do the BDD,

it's perfectly okay for you to do this by hand.

You'll get the right answer and you'll get some insight into how this stuff works.
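For a taste of the smarter way, here is a hedged sketch of the standard unique-table idea (my own minimal code, not any particular package's API): real BDD programs create every node through a single constructor that applies Rules 2 and 3 up front, so the diagram is born reduced and the exponential tree never exists.

```python
# Hypothetical unique-table sketch. Nodes are plain tuples
# (var, lo, hi), and the integers 0 and 1 are the two shared leaves.
unique_table = {}

def make_node(var, lo, hi):
    """Create (or reuse) a node with the reduction rules built in."""
    if lo is hi:
        return lo                        # Rule 3: never build a redundant test
    key = (var, id(lo), id(hi))          # same variable, identical children
    return unique_table.setdefault(key, (var, lo, hi))  # Rule 2: one copy
```

Because every node goes through make_node, no isomorphic pair and no redundant test ever exists in the graph, and the 2-to-the-100 blowup from the naive construction never happens.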

Now, the big thing going on here is that we started with a decision diagram and it

was really big. And we ordered the variables and we said,

let's see if that makes the graph that we get canonical,

whether we always get the same graph. And the answer was no, because there could be

redundancies. Even if you always visit all the variables

in the same order from the root to the leaf, as you do decisions, you can still

get a different graph. So then we reduce the diagram by all of

these merging operations. And we got a new thing.

We got a thing called a Reduced Ordered BDD.

And that gives us a really, really cool result.

And the really cool result is that a Reduced Ordered BDD is a canonical form.

Right? So I'm just going to say: a canonical data structure

for any Boolean function. I'm even going to end it with an

exclamation point. Our Reduced Ordered BDD is a canonical form!

The same function always generates exactly the same graph, exactly the same nodes,

with exactly the same edges, with exactly the same children,

if you give me the same variable ordering. Two Boolean functions are identical if and

only if their Reduced Ordered BDD graphs are isomorphic, which is a fancy way of

saying they are the same. Now, you might think, wow, how hard is

that to check? And the answer is going to be, amazingly

trivial. It's a wonderful result.

And this is a really beautiful property to have.
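To close the loop, here is a small end-to-end Python sketch (my own toy builder with assumed helper names, not the course's software): it builds the Reduced Ordered BDD of two syntactically different formulas for the same function, under the same variable order, by Shannon expansion through a unique table. Canonical form means both builds return the very same object, so the equivalence check really is amazingly trivial: one pointer comparison.

```python
# Toy ROBDD builder; nodes are tuples (var, lo, hi), leaves are the
# shared Python ints 0 and 1. Illustration only.
unique = {}

def make(var, lo, hi):
    """Unique-table constructor: Rules 2 and 3 applied at creation time."""
    if lo is hi:
        return lo
    key = (var, id(lo), id(hi))
    return unique.setdefault(key, (var, lo, hi))

def build(f, order, env=None, i=0):
    """Shannon-expand the Boolean function f over the variables in
    `order`; f maps an assignment dict to True/False."""
    env = env or {}
    if i == len(order):
        return 1 if f(env) else 0
    var = order[i]
    lo = build(f, order, {**env, var: 0}, i + 1)
    hi = build(f, order, {**env, var: 1}, i + 1)
    return make(var, lo, hi)

# Two different-looking formulas for the same function: a AND b.
f1 = build(lambda e: e['a'] and e['b'], ['a', 'b'])
f2 = build(lambda e: not ((not e['a']) or (not e['b'])), ['a', 'b'])
assert f1 is f2   # identical graphs: the trivial equivalence check
```

Same function, same variable order, same graph; that is the beautiful property the lecture ends on.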