[MUSIC] Linear programming duality is useful in the design of approximation algorithms. How so? Let us review some of the LP-based approximation algorithms seen in the past, and contrast them with primal-dual approximation algorithms.

In the past we have looked at approximation algorithms by LP-rounding. How do such algorithms work? First, we find a linear programming relaxation of the problem at hand. Then we solve it and get an optimal solution x, usually fractional. In the third step, we round this solution x to get an integer solution x', which we output as the solution to our problem. The difficulty, once we have found a good relaxation, is to round x in a way that does not lose too much of its value.

How do we analyze such LP-rounding algorithms? Well, because the LP is a relaxation, we know that the value of x is greater than or equal to the unknown optimal integer value OPT, assuming we are talking about a maximization problem. The rounding must then be done carefully, so that the value of x' is not too much smaller than the value of x: it is at least the value of x divided by some constant c, for example c = 2. Together, this guarantees that the value of the output is at least OPT/c, so at least half of OPT when c = 2. That is the broad outline of the design and analysis of LP-rounding algorithms.

Now we move in a slightly different direction: primal-dual algorithms. What do we do instead? The first step is the same: we find a linear programming relaxation of the problem. However, the next step is not to solve it and find the optimal fractional solution. Instead, we write the dual linear program D, and we examine the primal and the dual together. In the third step, we construct a pair of solutions in a related manner: x for the primal, which must be integral, and y for the dual.
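To make the rounding step concrete, here is a minimal Python sketch. It uses vertex cover as a hypothetical illustration; note this is a minimization problem, so the inequalities from the maximization discussion above flip (the rounded cost is at most c times the fractional cost). The triangle graph, the fractional solution, and the name `round_half` are assumptions invented for this example, not part of the lecture.

```python
# Threshold rounding for the vertex cover LP (a minimization example):
# given a feasible fractional x with x_u + x_v >= 1 for every edge (u, v),
# setting x'_v = 1 whenever x_v >= 1/2 keeps every edge covered and at
# most doubles the cost, i.e. c = 2.

def round_half(x_frac):
    """Round a fractional LP solution to an integral one by thresholding."""
    return {v: 1 if xv >= 0.5 else 0 for v, xv in x_frac.items()}

# Optimal fractional solution for a triangle graph: x_v = 1/2 everywhere.
edges = [(0, 1), (1, 2), (0, 2)]
x_frac = {0: 0.5, 1: 0.5, 2: 0.5}
x_int = round_half(x_frac)

assert all(x_int[u] + x_int[v] >= 1 for (u, v) in edges)  # x' still feasible
assert sum(x_int.values()) <= 2 * sum(x_frac.values())    # within factor c = 2
```

The analysis mirrors the lecture: feasibility of x' follows because at least one endpoint of each edge has fractional value at least 1/2, and the cost bound follows because each rounded variable grows by at most a factor of 2.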
And since x is feasible for the primal, and since it is integral, we know that it is a solution to the problem, and we can output x. This construction is actually the hard part. Earlier, the hard part was finding the right rounding; now the hard part is finding the right pair of solutions x and y. We work with a pair of solutions, x for the primal and y for the dual, and x must be integral. For y, integrality is not always necessary.

How are we going to analyze such primal-dual algorithms? First, we try to construct our solutions so that their values are related: the value of x is at least the value of y divided by some constant c, for example, at least half the value of y. So we cannot just take any feasible integral solution x for the primal and any feasible solution y for the dual; their values must be related. Now, by weak duality, we know that the value of y is at least OPT. From this we deduce that the value of the output x is at least OPT/c, for example at least half of OPT. That is, broadly speaking, the analysis of a primal-dual approximation algorithm.

Where are the difficulties? The main difficulty is in the construction, a construction such that the values of x and y are related. The nice thing, compared to LP-rounding, is that we do not need to solve the linear program optimally. A feasible solution is enough, so we do not need to run the simplex algorithm or any other algorithm for solving LPs. We can have our own way to construct the solutions: we can use a combinatorial algorithm. Thanks to that, primal-dual algorithms tend to be faster in practice than LP-rounding algorithms.

So for which problems are we going to use this technique? That will become clearer when we use it in practice.
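As a sketch of how x and y can be grown together combinatorially, without any LP solver, consider the classic primal-dual 2-approximation for weighted vertex cover. This is again a hypothetical minimization example, so the bound reads cost(x) ≤ 2·value(y) ≤ 2·OPT rather than the maximization version above; the graph, weights, and function name below are assumptions for illustration only.

```python
# Primal-dual sketch for weighted vertex cover (minimization).
# Primal LP: minimize sum_v w_v x_v  subject to  x_u + x_v >= 1 per edge.
# Dual LP:   maximize sum_e y_e      subject to  sum_{e at v} y_e <= w_v.

def primal_dual_vertex_cover(edges, weights):
    """Build an integral primal x and a feasible dual y together."""
    y = {e: 0.0 for e in edges}        # dual variables, start at zero
    paid = {v: 0.0 for v in weights}   # sum of y_e over edges incident to v
    cover = set()                      # the integral primal solution x
    for (u, v) in edges:
        if u in cover or v in cover:   # edge already covered, skip it
            continue
        # Raise y_(u,v) until one endpoint's dual constraint becomes tight.
        slack = min(weights[u] - paid[u], weights[v] - paid[v])
        y[(u, v)] = slack
        paid[u] += slack
        paid[v] += slack
        # Every endpoint that became tight enters the cover.
        if paid[u] == weights[u]:
            cover.add(u)
        if paid[v] == weights[v]:
            cover.add(v)
    return cover, y

# Invented example: a triangle with unit weights.
edges = [(0, 1), (1, 2), (0, 2)]
weights = {0: 1.0, 1: 1.0, 2: 1.0}
cover, y = primal_dual_vertex_cover(edges, weights)

assert all(u in cover or v in cover for (u, v) in edges)        # x feasible
assert sum(weights[v] for v in cover) <= 2 * sum(y.values())    # related values
```

The values are related exactly as in the analysis above: every vertex in the cover is tight, so its weight equals the sum of its incident dual variables, and each y_e is counted at most twice. No simplex step is needed anywhere.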
And we will try it first on two problems: the Steiner forest problem, and the facility location problem. Those will be the subject of the next two chapters of this course.