Okay? So, essentially, x is different from y

when? When, you know, x is smaller than y, or,

you know, x is greater than y, okay? So this, and these are two linear

constraints. Okay, so you can express that as saying,

x is smaller or equal to y minus 1, or, x is greater or equal to y plus 1.

You know, both of these disjuncts, okay, they are linear constraints.

Of course we have a disjunction here and that's annoying right?

Because the MIP model doesn't support disjunctions, right?

So we have two linear constraints that are in a disjunction.

Can we actually, you know, use these two linear constraints but remove this

disjunction? Okay?

So I'm going to show you the transformation which is called the Big M

transformation. Which is used all over the place in MIP

models in general when you have things like disjunctions.

Okay? And so the key idea is to do two things.

You're going to introduce a zero one variable, which is called b, and we are

going to introduce a very large number. Okay?

A huge number. You know, take, you know, 40 billion,

okay? And so now we can realize, essentially,

the disjunction as two linear inequalities, okay?

And the first one is going to be what? It's going to be x, okay?

It's going to be smaller than or equal to y minus 1 plus b times this Big M, this big number.

Okay, b is a 0/1 variable. It takes the value 0 or 1.

And the second constraint is that x is going to be greater than or equal to y plus 1,

minus, 1 minus b, you know, times the big M, okay. So essentially what I'm showing you here

is a technique to transform a disjunction of two linear constraints into a

conjunction of two linear constraints. And the trick is to introduce one more

binary variable. And then use this big M value, okay?
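The encoding just described can be sketched in a few lines of Python; the helper name `big_m_satisfied` and the concrete value of M are illustrative choices of mine, not part of the lecture's model.

```python
# Sketch of the Big-M encoding of x != y as a conjunction of two
# linear constraints. Helper name and M value are illustrative.

M = 40_000_000_000  # the "huge number" from the lecture

def big_m_satisfied(x, y, b):
    """Check both linear constraints for a given 0/1 value of b:
       x <= y - 1 + b * M        (the x < y disjunct, binding when b = 0)
       x >= y + 1 - (1 - b) * M  (the x > y disjunct, binding when b = 1)
    """
    return x <= y - 1 + b * M and x >= y + 1 - (1 - b) * M

# x != y holds exactly when some b in {0, 1} satisfies both constraints:
assert any(big_m_satisfied(3, 7, b) for b in (0, 1))      # 3 < 7
assert any(big_m_satisfied(7, 3, b) for b in (0, 1))      # 7 > 3
assert not any(big_m_satisfied(5, 5, b) for b in (0, 1))  # 5 == 5 fails
```

Note how each value of b makes one constraint binding and drowns the other in the big M term, which is exactly the intuition developed next.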

Now I'm going to give you the intuition of what's happening here, right?

So, essentially, when you look at these two constraints, you know, depending on

whether b is equal to 0 or to 1,

one of them is going to implement one of the disjuncts, okay?

So if, if you look, if you look at the case where b is equal to 1, what's

going to happen if b is equal to 1. Look at that constraint here.

Okay? You know, b times M is going to be a huge

number. Okay?

So that basically means that the constraint here is that x is going to be

smaller than or equal to something related to y that we don't really care about.

Because we have B times this big number. So, the first constraint is going to be

always satisfied, okay. So, the really important thing is the

second constraint, and now we have that b is equal to 1, right?

So, 1 minus b is 0, and, 1 minus b, times M is also 0.

So we have the constraint that x has to be greater or equal to y plus one.

Okay. So, in a sense, when b is equal to one

the right disjunct has to be true.

We want to be in the case where x is greater than y.

Okay, and of course in that case it's different from y, okay?

And then the case where b is equal to 0 is just the opposite, right?

So when b is equal to 0, 1 minus b is 1, and we multiply it by this big number.

That's a big number, okay. We are subtracting that big, big number

from some value that y has, okay? So, that makes that, that right hand side

tiny, tiny, tiny. And of course x is going to be greater

than that always, okay? So what we are enforcing in that

particular case is the first constraint. B is equal to zero, so we have basically

x is smaller or equal to y minus one, which basically means x is smaller than

y, okay? So in a sense this Big-M transformation

using this value, this Boolean value b and Big-M is basically going to make sure

that we can transform this disjunction into a conjunction.

And essentially, x different from y can be represented as:

x is smaller than y, or x is greater than y.

Okay? So now you have to think about what the

linear relaxation is going to do. And I told you, this linear relaxation is

a beast. It's evil.

Okay? It's always going to do

what you don't want it to do.

Okay? So if you have the linear relaxation,

you have to put yourself in the mind of this linear relaxation.

This nasty beast, right? So if you want to make the

linear relaxation as low as possible, okay, while it's satisfying as many

constraints as possible, what are you going to do?

You want to satisfy this constraint and satisfy that constraint, while putting

as few restrictions on x and y as possible. So that you can have, you know, the best

relaxation, so that you can keep these guys, you know, free.

Okay, you don't really use the constraints on these guys.

What you are going to do is, you are going to choose b equal to 0.5, okay?

So if you choose b is equal to 0.5, okay? Half of a big number is still a big

number. Half of that big number is a big number,

so essentially, these two constraints are always going to be satisfied, right?

Always! And so what happens is that it's like you

remove these constraints from the, you know, the linear relaxation.

So your linear relaxation is going to assume that it can minimize, you know,

more and more and more and more, okay? So you will have a big gap between the

MIP model and the linear relaxation. Okay?
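This failure mode is easy to check numerically; a tiny sketch, where the variable names and values are mine, not the lecture's:

```python
# Sketch: with the fractional value b = 0.5 that the LP relaxation is
# allowed to pick, both Big-M constraints become vacuous, so even
# x == y slips through. (Names and values are illustrative.)

M = 40_000_000_000
b = 0.5  # legal in the relaxation, illegal in the integer model

x = y = 5  # equal values, which x != y should forbid
ok_first  = x <= y - 1 + b * M        # y - 1 + 0.5*M is astronomically large
ok_second = x >= y + 1 - (1 - b) * M  # y + 1 - 0.5*M is astronomically negative
assert ok_first and ok_second         # both "constraints" hold: they bind nothing
```

Half of a big number is still a big number, so the relaxation pays nothing for setting b = 0.5 and effectively deletes the disequality.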

So in a sense this relaxation here, you know, is going to be really clever and try

to use the values that minimize the value of the linear program.

Okay? And therefore, you know, in this

particular case it's basically going to ignore these two constraints okay, which

is the worst that can happen, right? So we have this beautiful constraint, and

we want to actually get a very strong relaxation.

And actually the relaxation is going to go down, down, down, down.

Okay? So, so when you look at this particular,

you know, model with the big M, this is, this is, the model that, that, that we

have just described, so we minimize the objective function.

Okay. That's the number of colors.

We make sure that the objective is greater than all the colors that we are

using. Okay.

So, this is the color of every variable. Okay.

So we make sure that the objective function is at least as large as the

largest color we assign. Okay.

And then we re-express all of the constraints saying that, you know, two

adjacent countries have to have different values, by using

this big m transformation that I have shown you, okay?

For every two countries that are adjacent, okay?

And so this is essentially a model that we can use, using this Big-M

transformation, okay? Now, I alluded to in the previous slide,

okay? That this relaxation is probably going to

be poor. So what does it do when we try to solve it?

Okay, this is explaining what I just told you, right?

So the objective is greater than the largest color, and, you know, no two

adjacent countries receive the same color, okay?

So this is the value of the linear relaxation, okay, at the root node, okay?

And so it's a very, very powerful relaxation, right?

So you see that the objective is 0, okay? And you know, you see that, you know, the

value of b is 0.25 on many of the constraints.

Okay? So what this is basically telling you is

that we need at least one color. Yeah, great.

Okay? So that's a very powerful relaxation.

Okay. I'm trying to color this map and what

this linear relaxation is telling you yes, yes, yes, you need at least one

color. Thank you.

Thank you. Okay.

So what basically this tells you is that this linear relaxation is terrible,

right? This big M transformation is in part

responsible for that. So, you can use this transformation.

But that doesn't mean that you're going to get a good model okay.

So the MIP, when you try to solve it, is going to use 65 nodes, okay,

and find the optimal solution after five nodes, but that really depends on how you branch, right.

So can we find another model, okay? So we had this model, but the

linear relaxation is so bad that it tells you, oh, you need at least one color.

Okay. Can we find another way of modeling this as a MIP which is actually better?

Okay. And so you have to remember what I told

you last time. Okay.

So MIP models like 0/1 variables. They are in love with 0/1 variables.

Okay. So can we find a model with 0/1

variables? So what we're going to do is take all the

variables in this model. All the colors.

Okay? And we're going to binarize that, okay?

What does that mean? That means that instead of assigning a

color to a country, we're going to use a set of binary variables.

Which are basically going to decide whether that color is assigned to that

country, and we're going to do that for all the colors.

Okay, is this country red? Is this country blue?

Is this country green? Is this country white?

Okay, so we basically, in this particular case, we know that it's a map,

so we need at most four colors. So we have four binary variables; the

first one is going to say, oh!

Is, you know, x equal to 0? Is x equal to 1? Is x equal to 2? Is x

equal to 3? Okay, and so basically, when

I use the variable b-x-equals-i, that will mean that, you know, b-x-equals-i

is 1 whenever, you know, x is equal to i. Okay, so basically what I have is that

the value of these variables is basically expressing, oh!

What is the value that I'm assigning to that variable okay?

And so now essentially if you use these binary variables.

So there is one more thing that you need to do.

You need to make sure that the particular variable is actually being assigned a

particular value. So you have to basically say that the sum

of these binary variables is equal to 1. If you have that, what are you saying? You

are saying, okay, you are making four decisions.

Is x equal to 0 or not? Is, you know, x equal to 1 or not?

Is x equal to 2 or not? Is x equal to 3 or not?

And you also want to say that you have to make exactly one of these decisions.

Exactly one of these variables has to be true, which basically means I have to

assign a color or value to that particular variable.

So, summarizing here: you take an integer variable, which has to take

an integer value in, you know, the range of that variable.

And what you do is you replace that integer variable by plenty of binary

variables, looking at every one of these

values. Okay.

And having a decision variable which tells you, ooh, is that variable taking

that value. Okay?

That's what we are doing here. Okay.
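A minimal sketch of this binarization in Python; the helper name `binarize` is made up for illustration, not from the lecture:

```python
# Sketch: replace an integer variable x in {0,1,2,3} by four 0/1
# variables b[v], with b[v] = 1 exactly when x takes value v.
# The helper name is hypothetical.

def binarize(x, num_values=4):
    return [1 if x == v else 0 for v in range(num_values)]

b = binarize(2)
assert b == [0, 0, 1, 0]
assert sum(b) == 1                           # exactly one decision is made
assert sum(v * b[v] for v in range(4)) == 2  # the original value is recoverable
```

The `sum(b) == 1` check is exactly the "make exactly one decision" constraint described above.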

So in the coloring example it's, you know, is this country colored red? Is this

country colored blue? And so on, and so forth.

Okay. And now when you have a constraint like X

is different from y, you know, one country has to be colored, you know,

differently from the other. The country x has to be colored

differently from country y. Okay?

So, we can replace that by a set of beautiful linear inequalities.

Okay? So, what are we saying here?

Ooh, I don't want x to be 0 and y to be 0 at the same time.

Okay? That's a very simple constraint here.

Okay? So instead of this Big-M notation, what

you're saying is, ooh! I don't want x to be 0 and y to be 0 at

the same time, okay, so the sum has to be smaller or equal to 1.

I don't want this sum to be 2.

And then you do the same thing for value

1, okay, I don't want x and y to be 1 at the same time, otherwise they are equal

and I want them to be different. Okay, same thing for 2, same thing for 3.
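These per-value inequalities are a one-liner to state and check; a small sketch with made-up helper names:

```python
# Sketch: with binarized colors, x != y becomes one tiny linear
# inequality per value v:  bx[v] + by[v] <= 1.
# The helper name is illustrative, not from the lecture.

def colored_differently(bx, by):
    return all(bx[v] + by[v] <= 1 for v in range(len(bx)))

red  = [1, 0, 0, 0]
blue = [0, 1, 0, 0]
assert colored_differently(red, blue)      # different colors: every sum <= 1
assert not colored_differently(red, red)   # same color: bx[0] + by[0] == 2
```

Compare this with the Big-M version: no huge constant, just one small inequality per color value.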

Okay, so now, essentially, as soon as I have the 0/1 variable it's really easy to

express the constraints, and that's always the case in these MIP models.

As soon as you have 0/1 variables, all the things are becoming much simpler, okay.

And then I get this beautiful model now. So it's long so I have to disappear from

the screen right. But I'm still here, I'm still here.

Okay, so what this is doing is basically looking at the objective function first.

Okay so this is the tricky part okay. So you have to make sure now.

That the objective is greater than the largest color.

But we don't assign color to the, to the countries.

So the only decision variables that we're going to have are, for

every country and every color, is that country colored with that color?

Okay? And that's a 0, 1 variable.

So, to make sure, to find out the minimum

value for the objective, we are going to look at all these decision variables,

right? But we are also going to multiply them by the

value that actually corresponds to that particular, you know,

binary variable. So, for instance, if

this is 3. We are basically saying the objective

function has to be greater than 3 times whether, you know, that particular

country c is assigned, you know, value 3, okay?

And we do that for every country and every color.

Okay? So we are basically making

sure that the objective function is greater than the value that we assigned.

But we have to use this expression because we don't have the actual value of

the variables. We just have, you know, this kind of

binary decision yes or no, if that particular country is assigned to that

particular value. Okay?
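The objective constraints just described can be sketched as follows; the dictionary of 0/1 values below is a made-up feasible assignment for two hypothetical countries, not the data from the slides:

```python
# Sketch: for every country c and color v, the model requires
# obj >= v * b[c][v].  On a feasible 0/1 assignment, the tightest of
# these bounds is the largest color actually used. (Toy data below.)

b = {"c1": [0, 0, 0, 1],   # country c1 takes color 3
     "c2": [0, 1, 0, 0]}   # country c2 takes color 1

lower_bound = max(v * b[c][v] for c in b for v in range(4))
assert lower_bound == 3  # the objective must cover the largest color used
```

Because each b[c][v] is 0 or 1, the product v * b[c][v] is either 0 or the color value itself, which is why this expression stands in for the missing integer color variable.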

Then we have these other constraints that we have seen previously, that basically

say that every country is to be colored with exactly one color.

Okay. Which basically means that essentially

exactly one of these binary variables for a particular country is going to be 1,

okay? And then we have all these linear

inequalities, which are very simple, okay?

So we take, you know, every two countries that are adjacent, okay, and

we take all the possible values. And then we say that, you know,

the sum of the two binary variables corresponding to

assigning that color to these two countries has to be smaller than or equal to 1.

Okay? They cannot be colored with the same value,

okay? And that's the MIP model as well, okay?

Very different from the other one. We have very different decision

variables, very different constraints. All of them are different actually.

Okay. So how does this program behave?

Okay. Beautiful.

This is what I told you before. Okay.

I let you see it because we did the slides.

So you have to appreciate it. Okay.

So so this is the linear relaxation. Okay.

So the linear relaxation is going to tell you 0.27,

okay, and seven, seven, seven, seven forever.

Right, so which basically means that this is really great, right?

So now, this linear relaxation instead of telling you that you need one color, is

going to tell you you need at least two colors, okay?

Wow, okay? And then you see the

values of the various decision variables

inside, inside the linear relaxation. Okay, so for one of the particular

countries, these are the various values. Of course they're fractional, right? So

we haven't assigned anything yet. This is what the linear relaxation is going to always

do: try to find values that minimize the objective. Okay?

Okay? And then you get these fractional values

all over the place, okay? So, in a sense, this

model, right, is much better than the previous one.

Okay? And we have now these values and we could

branch and we're going to branch on these values and so on.

Okay, and we are going to use in this particular case, only 12 nodes, and 20,

22 for finding the optimal solution and 22 nodes for, you know, proving

optimality. Okay?

So, better relaxation in this particular case, and we have a

smaller search tree to explore. So, what is interesting here obviously is

the linear relaxation is improved, okay? which is what we wanted.

So, the 0/1 model is much better than the big M model, and, typically, when you can

avoid this big M it's a good thing to do. Okay?

So, now, can I actually do better than this?

Okay, so, so, look at this thing, okay? So, look at these two things that, that

we have discussed before. So, essentially what we are saying in

these particular constraints is that the objective has to be greater than v times

the color variable, you know, b-c-v, for a particular country, okay?

So which basically means that if, you know, if the color of country c

is v, then the objective has to be greater than v, because this is a 0/1 variable.

Okay, so now, what the second constraint is telling you is that

every country is assigned to a single value.

Right? So, so can we actually do better than

this? Okay, so we know that for instance, okay,

in a feasible solution, of all these colors, okay,

Exactly one will be a 1 and all the other ones are going to be 0.

For a particular country, you look at all the colors, all these binary variables

there are going to be 0 except one. Okay.

So can we restate the particular constraints here to make it stronger.

Okay, for the linear relaxation. Right?

It doesn't matter for optimality, but for the linear relaxation, can it be better?

Okay? And so we can take these two constraints

and try to combine them to actually get a stronger constraint for, you know, bounding

the objective value. And so what we can use is we can take all

the colors here. Right?

All the colors. Okay?

And multiply them by the value v; that's essentially the right-hand side of

the constraints that we had before. But now we sum them up.

Okay? Why can we do it?

Because, as I told you, only one of them is going to be one.

And therefore, these are going to be exactly [INAUDIBLE], you know,

when you, when you have a feasible solution.

And therefore, you know, eh, they will really correspond to these

constraints. So essentially, instead of doing this

aggregation in the [INAUDIBLE] location, we are trying to aggregate it.

Yes but of course we are using this constraint to aggregate here.

OK? And so now we have the constraints and we

believe that this objective of course is going to be moving up.

Right? Okay?
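The effect of this aggregation on a fractional LP point can be sketched numerically; the fractional values below are a toy example of mine, not the solver output from the slides:

```python
# Sketch: per-value bounds obj >= v * b[v] versus the aggregated bound
# obj >= sum(v * b[v]) on a fractional point. (Toy values, illustrative.)

b = [0.25, 0.25, 0.25, 0.25]  # the relaxation spreads one color evenly

per_value_bound  = max(v * bv for v, bv in enumerate(b))  # best single bound
aggregated_bound = sum(v * bv for v, bv in enumerate(b))  # summed-up bound

assert per_value_bound == 0.75
assert aggregated_bound == 1.5
assert aggregated_bound > per_value_bound  # strictly stronger on this point
```

On an integer feasible point the two formulations give the same bound, since only one b[v] is 1; the difference only shows up on fractional points, which is exactly where the relaxation lives.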

Now, the hope here is what? So, when we look at the relaxation

here, okay so when I show you the relaxation of this thing okay.

So what you see here is that, ooh! You know, we had the objective greater or

equal to 0.5 because that was the largest color, okay?

But if we sum all these guys for a particular color, right?

So they sum, they sum to one obviously. So we could say oh!

But then, you know, maybe, you know, if I, if I multiply this one by 1, this one by

2, and this one by 3, okay? I get a value which is greater than 0.5,

and that's why I want to aggregate this. Okay, so when I put these new constraints

here okay. So I believe that my linear relaxation is

going to be better, okay. But, you know, this linear relaxation, it's

very unpredictable; like I said, you never know what it's going to do, okay.

And so look at what it does, okay. So this is when we add this constraint instead of

the other one. Note the linear relaxation is 0.5.

Yes, it's better, okay not much better but better, okay?

But look it completely changed the value of the assignment, right?

So note the first two values are at 0.5, right?

And it removed, the linear relaxation, not me, right? So the linear relaxation has

removed the value for these two other, for these two other colors, okay?

So this is interesting, right? So instead of a fractional value for

these other ones, now the linear relaxation found that, you know, to get the

minimum value, I have to assign these guys to 0 and these guys to 0.5, every one of

them. Why?

Because these guys are multiplied by two and by three so you want to keep them as

small as possible. Okay?

That's what the linear relaxation does to you.

He's always trying to find a way to get down down down.

Okay, we are minimizing right? So essentially what I've shown you here

is that in this particular case we can improve the relaxation, but it also

changes the values of the solution as well. Okay?

Now we need at least two colors as well. What is interesting here is that when you

try, so this is, when you compare these two, you know,

these are the two solutions that I showed

you.

you get. Okay?

So, you have nine nodes. Okay?

So you get to the optimal solution fine, faster with that, with the first

relaxation that I showed you for the binarized version of the, of the model.

But we need 41 nodes, so we need more nodes here, okay?

Though once again we have a stronger relaxation doesn't necessarily mean that

your search is going to be faster because you can branch differently.

We have a stronger relaxation in this particular case, but it was solved, you

know, using more nodes actually. Okay, so, what I've shown you today, you

know, to summarize, is that when you look at the model, okay?

The different model, what you have to do is to try to find a model which has a

good relaxation. I've shown you different techniques,

okay, on how to actually represent constraint that are not you know directly

linear okay. And so you can binarize the variables you

can use the Big-M notation and I've shown you the influence of these things.

I've also shown you that how you express the linear constraints has an impacts on

the linearization and therefore on the efficiency of your algorithm okay.

So next we're going to look at better ways of actually strengthening these

relaxation. Okay?

And it's going to be fascinating. Okay.

Thank you.