If you're taking a local search approach to a constraint satisfaction problem,

something like 2 SAT or 3 SAT, a common approach is to define 2 variable

assignments to be neighbors if and only if they differ in the value of a single

variable. So, for the Boolean case, where variables

are either true or false, you're going to define 2 assignments to be neighbors if

you can get from one to the other by flipping the value of a single variable.
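
For the Boolean case, this neighbor relation is easy to state in code. Here is a minimal sketch (the helper name and tuple representation are my assumptions, not from the lecture):

```python
def are_neighbors(a, b):
    # Two Boolean assignments (tuples of bools) are neighbors iff they
    # have the same length and differ in exactly one variable's value.
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
```

For example, flipping the middle variable of `(True, False, True)` yields a neighbor, while flipping two variables does not.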

For the travelling salesman problem, there are a lot of sensible ways to define

neighborhoods. One simple and popular approach is to

define 2 travelling salesman tours to be neighbors

if and only if they differ in the minimum possible number of edges.

If you think about it for a second, you'll realize that the minimum possible

number of edges in which 2 TSP tours can differ is 2.

So, for example, in pink I have shown you 2 neighboring

TSP tours. The first tour has edges from u to v and from w to x, but by contrast the

second tour has the crisscrossing edges from u to x and from v to w.

All of the other edges in the 2 tours are the same.

Once you've identified the space of candidate solutions to your computational

problem, and once you've settled on a neighborhood for every candidate

solution, you're in a position to run a local search.

Local search just iteratively improves the current solution, always moving to a

neighboring solution, until you can't improve things any further.

I'm going to specify here a highly underdetermined version of local search to

highlight the number of design decisions that you've got to make if you really

want to apply local search to a problem in your own projects.

We'll talk about the possible instantiations of the different design

decisions in the next couple of slides. So, for starters, you have to initialize

the search with one of the candidate solutions, some little x.

How do you pick it? Well, there's a number of approaches, we'll discuss that

in more detail in a sec. Now, just like in our maximum cut local

search algorithm, we take the current solution, whatever it is, x.

We look for a neighboring solution y which is superior.

If we find such a y, then we move to that superior solution.

If there is no superior y, we're locally optimal,

and then we just return the current solution.
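
As a sketch, the generic procedure might look like this in Python (the function names and the convention of maximizing an objective are my assumptions; the lecture deliberately leaves these choices open):

```python
def local_search(initial, neighbors, objective):
    """Generic local search: repeatedly move to a strictly better
    neighbor; return when none exists (a local optimum).

    initial   -- some candidate solution (step 1)
    neighbors -- function mapping a solution to an iterable of neighbors
    objective -- function to maximize
    """
    current = initial
    while True:
        better = None
        for y in neighbors(current):
            if objective(y) > objective(current):
                better = y  # found a superior neighboring solution
                break
        if better is None:
            return current  # locally optimal: no superior neighbor
        current = better
```

For instance, maximizing `-(x - 3)**2` over the integers, with neighbors `x - 1` and `x + 1`, climbs to the peak at `x = 3` from any starting point.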

Our local search algorithm for the maximum cut problem, which we discussed in the

last video, is in fact an instantiation of this generic procedure, where the set

X of all solutions is just the cuts of the given graph,

and 2 cuts are defined to be neighbors if and only if you can get from one to the

other by moving a single vertex from one group to the other.

In that algorithm, just as here, we started from some arbitrary

solution, some arbitrary cut. We repeatedly searched for a superior

neighboring solution. That is we tried to see if there was a

way to move a vertex from one group to the other to get a better cut.

If we found such a superior neighboring solution, we iterated from that better

solution. Then, when we couldn't iterate any more, when there were no

better neighboring cuts, we just halted and returned the final result.
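
For concreteness, here is one possible sketch of that max-cut instantiation (the representation details, such as labeling the two groups 'A' and 'B', are my choices):

```python
def local_search_max_cut(vertices, edges, cut):
    """Max-cut local search: repeatedly move a single vertex to the
    other group if doing so increases the number of crossing edges.
    `cut` maps each vertex to group 'A' or 'B'; `edges` is a list of
    vertex pairs. Returns a locally optimal cut.
    """
    def crossing(c):
        # Number of edges whose endpoints lie in different groups.
        return sum(1 for u, v in edges if c[u] != c[v])

    improved = True
    while improved:
        improved = False
        for v in vertices:
            candidate = dict(cut)
            candidate[v] = 'B' if cut[v] == 'A' else 'A'
            if crossing(candidate) > crossing(cut):
                cut = candidate  # superior neighboring cut found
                improved = True
                break
    return cut
```

The loop terminates because each move strictly increases the number of crossing edges, which is bounded by the total number of edges.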

You can apply this generic local search procedure to other problems in exactly

the same way. As an example, think about the traveling

salesman problem. Let's say we define neighborhoods like we

did on the last slide, with 2 tours being neighbors if and only if they differ in

exactly 2 edges, with the other n - 2 edges in common.

What do you do? You start from some arbitrary traveling salesman tour, and

now you iterate. You keep looking for a neighboring

superior solution from the current tour.

You consider all tours you can reach by changing exactly two edges of the current

tour. You check if any of those n choose 2

solutions has strictly smaller cost. If any of them does, you iterate from that

new superior solution. When you can't iterate anymore, when all

of the neighboring solutions are at least as costly as the one you're working with,

that's a locally optimal tour and you return that as your final output.
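
This 2-change procedure can be sketched as follows. This is a minimal version assuming a symmetric distance matrix; reversing a contiguous segment of the tour is the standard way to swap exactly 2 edges (often called a "2-opt" move):

```python
def two_opt(tour, dist):
    """2-change local search for TSP. `tour` is a list of city indices;
    `dist` is a symmetric distance matrix. Repeatedly reverse a segment
    (which replaces exactly 2 edges) whenever that strictly shortens
    the tour; return when the tour is locally optimal.
    """
    def length(t):
        return sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                # Reversing tour[i+1 .. j] swaps edges (t[i], t[i+1]) and
                # (t[j], t[j+1]) for (t[i], t[j]) and (t[i+1], t[j+1]).
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if length(candidate) < length(tour):
                    tour = candidate
                    improved = True
                    break
            if improved:
                break
    return tour
```

For example, for 4 cities at the corners of a unit square, starting from a tour with crisscrossing diagonal edges, one 2-change uncrosses them and yields the optimal perimeter tour of length 4.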

In the rest of the video, I want to continue discussing local search at a

high level, talking about some of the design decisions that you've got to make

as well as some of the performance characteristics you can expect.

After we finish this high level discussion we'll move on to some videos

on another concrete example of local search.

Specifically, to 2 SAT, a canonical example of a constraint satisfaction problem.

Let's begin with 3 frequently asked questions about underdetermined

characteristics of the generic local search procedure I just showed you.

Question number 1 concerns step number 1 of the algorithm.

You need to somehow come up with this arbitrary starting point, x.

How do you do it? To answer this question, let me crudely classify the

situations in which you might be using local search, into 2 categories.

In the first category, you're really depending on local search to come up

with a good approximate solution to an optimization problem.

You really have no idea how to get close to an optimal solution, other than

through some local search approach. The second category of scenarios is where

you have some pretty good heuristics that seem to be delivering to you pretty good

solutions to your optimization problem and you just want to use a local search

as a post-processing step to do even better.

So let's start with the first category where all of your eggs are in one basket

and you need a local search to deliver to you a pretty good solution to an

optimization problem. So this is a tricky spot to find yourself

in. Local search is guaranteed to deliver to

you a locally optimal solution. But in many problems locally optimal

solutions can be much, much worse than what you're looking for, a globally

optimal solution. In the special case of the maximum cut

problem, we had a performance guarantee of 50%.

That already is only a so-so guarantee, but for most optimization problems even

that kind of performance guarantee is out of the question.

There are locally optimal solutions far worse than the globally optimal ones.

On the other hand, you know there are some good locally optimal solutions out

there. In particular the globally optimal

solution is also a locally optimal one. Now if you just run local search once,

it's not clear how to evaluate the quality of the solution that's returned

to you. You run the algorithm, it hands you a

solution, and it has cost 79,217. Is that good or bad? Who knows? An

obvious approach to doing better is to run the local search many times, say

thousands of times, even millions of times, and then amongst all of the local

optima that the different runs of your local search algorithm return, you pick

the best one, and you make that your final solution.
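
This random-restart pattern can be sketched as follows (the helper names and interface are my assumptions; the lecture doesn't prescribe one):

```python
def random_restarts(num_trials, random_solution, improve, objective):
    """Run local search num_trials times from independent random
    starting points and keep the best local optimum found.

    random_solution -- draws a fresh random candidate solution
    improve         -- runs local search to a local optimum
    objective       -- score to maximize
    """
    best = None
    for _ in range(num_trials):
        candidate = improve(random_solution())
        if best is None or objective(candidate) > objective(best):
            best = candidate
    return best
```

Even when a single run can get stuck at a poor local optimum, enough independent restarts make it very likely that some run lands in the basin of a good one.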

To encourage your local search algorithm to hand back to you different local

optima when you run it over and over again.

You want to be making some random decisions.

An extremely common point to make a random decision is in step 1 of local

search, in choosing your initial starting

point x. So, for example, in the maximum cut problem, you'd want to start with a

random cut, with each vertex equally likely to be in A or B.

With the traveling salesman problem, you might want to start from a random tour,

that is, a random permutation of the n vertices.

And if it's a constraint satisfaction problem, you could begin by giving each

variable, independently, a random value. If this seems kind of like throwing darts

at a dartboard, that's what it is. It turns out there's a lot of stubbornly

difficult problems out there for which the state of the art is not really

substantially better than running independent trials of local search with

different random initial positions and returning the best of the local optima

that you find. Now let's suppose you're in this second,

happier category of scenarios, where you've got a better handle on your

optimization problem. Maybe you've figured out how to use

greedy algorithms, or maybe mathematical programming.

Anyways, you have techniques that generate close to optimal solutions to

some optimization problem. But why not make these nearly optimal

solutions even better? How do you do that? Well, feed the output

of your current suite of heuristics into a local search post-processing step.

After all, local search only moves to better solutions. It can only make your

pretty good solution even better.