0:00

So in lecture 2.5, we're going to continue our exploration of computational Boolean

algebra. Only now, we're going to take it in a

slightly different direction. We've been working very extensively with

the cofactors. And seen how powerful and important and

useful those things are. But what we really haven't given you yet

is a concrete data structure: a way of actually representing the Boolean

object in a way that we can manipulate. And so, we're going to do an interesting

application which is called tautology. And tautology is a particular question you

can ask of a Boolean equation and that question is, is it one all the time.

And, it turns out that it brings together all of the interesting things we need to

see how a computational view of Boolean algebra — a data-structures view and an operators view — comes together. And it also shows a lot of the skeleton, the style, in which a lot of computational Boolean algebra operators

are actually done, and we're going to use these ideas extensively later in the

course when I show you some other data structures and other important operators.

So let's go talk about recursive tautology.

So what exactly is tautology?

It's actually an interesting problem. I'm going to give you a representation,

some sort of a data structure that can represent a Boolean function.

And you're going to build an algorithm that operates on this data structure.

And you're going to tell me, yes, no answer to this question.

Is this function f that you've given to me and represented in this data structure

equal to 1 for every input? And I know what you're thinking, I know

exactly what you're thinking. You're thinking, Rob, how hard can that

be? Just check whether the function is one everyplace? And the answer is: very hard. What if I gave you a function that has 50

variables? What if that function has 117,000 product

terms? And it just happens to be that all of those product terms together cover every single one of the squares in a Karnaugh map. That's not easy to do.

We're going to need some kind of cleverness to be able to calculate tautology, if you give me a Boolean expression.

This turns out to be a great example, because it illustrates all the stuff that

we need to know to be able to solve problems like this in an efficient way.

So, we're going to use a simple, early representation scheme for functions. We're going to represent the function as a set of OR'd product terms, and so the idea here is that the OR'd product terms are really the things that I'm going to specify. Now, this is really nothing more than a

sum of products, but done as a very particular kind of a data structure, a

practical computationally viable kind of a data structure.

And as a simple visual for this, I'm going to describe these things

using, a three-dimensional Boolean cube. It's helpful when you're looking at any

literature about this stuff, you'll see these things all over the place.

You can think of this as really just a way to graph, a small Boolean equation.

So this is really a set of axes, right, and so there is the a axis, and there is

the b axis, and there's the c axis. Right?

And that's the value with a as a 0, and that's the value with a as a 1.

And, this up here, says that this corner, this vertex has the value abc equals 111.

And clearly, you can only have values of the graph at the vertices, right? Because it's discrete — it doesn't exist in between — and we basically put a solid circle on the vertices where the function makes a 1.

Just a simple Boolean cube. And when we talk about product terms just

like in a Karnaugh map, we're going to call them cubes.

And they're really just groups of 2 to the k adjacent corners that we can circle

where the function is 1 everywhere. So suppose that the function I'd like to represent is a-bar plus bc plus ab, right?

Then I could circle, there's a bar and I could circle bc, and I could also circle,

that would be ab, and that would be my representation.

And one of the things that I will note is that this is not necessarily minimum.

And indeed, if you look at this, there are obviously better ways to cover this — actually some bigger, better cubes to cover this with. So, no guarantees that this is the minimum or most efficient representation, just that it is correct.

So when we say cube that's what we mean by product term.

And the way we're going to represent product terms is in this historical

representation called positional cube notation.

So, it's a little vector with a slot for every variable and there's two bits per

slot. And there's just some simple rules.

If the variable appears in true form like x, you put a 0 1 in this slot.

If it's in complement form, like x bar, you put a 1 0 in the slot.

If it does not appear at all you put 1, 1 in the slot.

And you just don't use 0, 0, it turns out you don't need it.
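As a sketch of how these slot rules might look in code — the two-bit encoding follows the rules above, but the `cube` helper and its apostrophe syntax are my own illustration, not something from the lecture:

```python
# Positional cube notation over variables (a, b, c): one two-bit slot
# per variable.
#   variable in true form (x)        -> 0b01
#   variable in complement form (x') -> 0b10
#   variable absent (don't care)     -> 0b11
# The 0b00 pattern is never stored; it only shows up during computation.
TRUE, COMP, DONTCARE = 0b01, 0b10, 0b11

def cube(spec):
    """Build a PCN cube from a string like "a'" or "bc".
    A trailing apostrophe complements the preceding variable."""
    slots = {v: DONTCARE for v in "abc"}
    i = 0
    while i < len(spec):
        v = spec[i]
        if i + 1 < len(spec) and spec[i + 1] == "'":
            slots[v] = COMP
            i += 2
        else:
            slots[v] = TRUE
            i += 1
    return tuple(slots[v] for v in "abc")

print(cube("a'"))   # a-bar -> [10][11][11]
print(cube("bc"))   # bc    -> [11][01][01]
```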

And, the basic way to think of this is just to give you a little example, right?

So here is a bar, a bar would be represented as this little vector with a 1

0 in the a slot, a 1, 1 in the b slot, and a 1, 1 in the c slot, right?

That's how you would represent that. And if we did, let's say, this one — this is bc, alright? So bc would appear as 1, 1, because there is no a there; and 0, 1 because there's a b; and 0, 1 because there's a c. So that's positional cube notation. So why the two-bit slots?

Well, it turns out that if you use this representation, you simply AND two cubes together slot by slot — you just AND the bits together — and it does the right thing. And, interestingly, if you AND two cubes together and the result should actually be zero — the cube just goes away — what happens is you actually get one of these 0, 0 values in a slot, and that's a signal to you that says, oops, this cube doesn't really exist. It's like you took a cube with an x in it and you ANDed it with a cube with an x bar in it. You'll get a 0, 0 in a slot, and that's the signal. It says, hey, kill this cube; I don't need this cube anymore. So you'll never see a 0, 0 sitting around in a static cube; you'll only see it if you're doing computations. So, it's a very nice, very simple, very useful kind of formulation.
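A minimal sketch of that slot-by-slot AND, assuming the two-bit encoding described above; the `and_cubes` name and the use of `None` for a killed cube are illustrative choices, not the lecture's code:

```python
def and_cubes(c1, c2):
    """AND two positional-cube-notation cubes slot by slot.
    Returns None if any slot comes out 0b00 -- the signal that the
    product contains both x and x-bar and therefore vanishes."""
    result = tuple(s1 & s2 for s1, s2 in zip(c1, c2))
    return None if 0 in result else result

a    = (0b01, 0b11, 0b11)   # the cube a
abar = (0b10, 0b11, 0b11)   # the cube a-bar
bc   = (0b11, 0b01, 0b01)   # the cube bc

print(and_cubes(a, bc))     # (1, 1, 1): a AND bc = abc
print(and_cubes(a, abar))   # None: 0b00 in the a slot, kill this cube
```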

So we will represent a function as a cover of cubes — just like, you know, a set of things you can circle around the ones in a Karnaugh map — and this is really just a list, right? And the list is how we're going to do the OR of the product terms, right? And the positional cubes are how we're actually going to do each of the products. So if we have this function again — and I'm sorry, that should be an a-bar — then what actually happens is that here is the a-bar, and this is that cube, alright?

And here is the bc and it's that cube bc. And here is the ab, it's that cube.

Each positional cube notation vector represents one product term.

You put a list of them together; it doesn't matter what the order of the list is. By the way, you've got to have each of them in the list somewhere — it just doesn't matter in what order.
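As an illustrative sketch, the cover for a-bar + bc + ab might simply be a Python list of these cubes; the `contains` and `evaluate` helpers here are my own names for checking the representation, not part of the lecture:

```python
# A cover is just a list of PCN cubes; the list is the OR of the products.
# f = a' + bc + ab over variables (a, b, c):
F = [
    (0b10, 0b11, 0b11),   # a'
    (0b11, 0b01, 0b01),   # bc
    (0b01, 0b01, 0b11),   # ab
]

def contains(cube, point):
    """Does this cube cover the input point (a tuple of 0/1 values)?
    A 1 input needs the 'true' bit set in the slot, a 0 the 'complement' bit."""
    return all(slot & (0b01 if bit else 0b10) for slot, bit in zip(cube, point))

def evaluate(cover, point):
    """f(point) is 1 if any cube in the cover contains the point."""
    return int(any(contains(c, point) for c in cover))

print(evaluate(F, (0, 0, 0)))   # 1: the vertex abc=000 is covered by a'
print(evaluate(F, (1, 0, 0)))   # 0: no cube covers abc=100
```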

This is our representation for something that's basically a sum of products kind of

a form. So, next question: how do we approach tautology as a computation? Well, it turns out that cofactors are basically what save us here. Alright.

So, there's a wonderful result, and the result is as follows: a function f is a tautology if and only if both of its positive and negative cofactors are also tautologies. And this makes sense.

Suppose it's the case that the function is one everywhere.

Well then, surely, it must be the case that the function when you take x as a zero, or the function when you take x as a one, must also be one.

Right? So both of the cofactors are also one, so

it works that way. And what if both of the cofactors are one — right, both of the cofactors? Well, we know by the Shannon expansion theorem that we can always represent the function in that nice little decomposition, f = x·f_x + x'·f_x'. But we know that this cofactor is one and that cofactor is one, so what do you get? You just get x plus x-bar, and that's equal to one. So it works the other way too: it works this way, and it works that way. And it's a great result, because it gives us a recursive decomposition strategy. What happens if the function I'm looking

at is too complicated to just tell if it's a tautology?

Break it up into two pieces and answer the question on the two smaller cofactors.

And you know that the original function is a tautology if and only if both of the cofactors are tautologies — so recursively compute that.

So, you can take a hard problem and make it into two simpler problems with a

strategy for gluing the answer back together if you know the answers for both

of them. That's just basically what this picture

says. If you cannot tell immediately with some

simple mechanics that the function is a one.

Pick a variable, split and go ask the question on the positive cofactor and then

go ask the question on the negative cofactor.

And you combine them with simple, you know kind of one bit Boolean function are both

of them one, yes or no. Yes, then yes I'm a tautology.

That is an amazing thing. It gives us a path, it gives us a strategy, for doing something very complicated. So the things we're going to look at next

are, as an algorithm, what are all the steps in the process to actually make this

a real computation.
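As a preview, the strategy above can be sketched in Python. This is only an illustration under simplifying assumptions — it splits on variables in a fixed left-to-right order and uses only the most obvious base cases, whereas the actual algorithm's selection rules and shortcuts are the subject of what follows:

```python
DONTCARE = 0b11

def cofactor(cover, var, value):
    """Positive (value=1) or negative (value=0) cofactor of a cube cover.
    Cubes that disagree with the assignment drop out; in the survivors,
    the chosen variable's slot becomes don't-care."""
    mask = 0b01 if value else 0b10
    result = []
    for cube in cover:
        if cube[var] & mask:          # cube survives this assignment
            c = list(cube)
            c[var] = DONTCARE
            result.append(tuple(c))
    return result

def tautology(cover, var=0):
    # Easy cases first: an all-don't-care cube covers everything;
    # an empty cover (or one with no variables left to split) does not.
    if any(all(s == DONTCARE for s in cube) for cube in cover):
        return True
    if not cover or var == len(cover[0]):
        return False
    # Otherwise split: f is a tautology iff BOTH cofactors are.
    return (tautology(cofactor(cover, var, 1), var + 1) and
            tautology(cofactor(cover, var, 0), var + 1))

# f = a' + bc + ab is not a tautology (f is 0 at abc = 100) ...
F = [(0b10, 0b11, 0b11), (0b11, 0b01, 0b01), (0b01, 0b01, 0b11)]
# ... but g = a + a' is.
G = [(0b01, 0b11), (0b10, 0b11)]
print(tautology(F))   # False
print(tautology(G))   # True
```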