In this video, we'll put the master method to use by instantiating it for six different examples. But first, let's recall what the master method says. The master method takes as input recurrences of a particular format, in particular recurrences that are parameterized by three different constants, a, b, and d. a refers to the number of recursive calls, or the number of subproblems that get solved. b is the factor by which the subproblem size is smaller than the original problem size. And d is the exponent in the running time of the work done outside of the recursive calls. So the recurrence has the form: T(n), the running time on an input of size n, is no more than a, the number of subproblems, times the time required to solve each subproblem, which is T(n/b), because the input size of a subproblem is n/b, plus O(n to the d), the work outside of the recursive calls. There's also a base case, which I haven't written down: once the problem size drops below a particular constant, there should be no more recursion, and you can just solve the problem immediately, that is, in constant time.

Now, given a recurrence in this permitted format, the running time is given by one of three formulas, depending on the relationship between a, the number of recursive calls, and b raised to the d power. Case 1 of the master method is when these two quantities are the same, a = b to the d. Then the running time is O(n to the d log n), no more than that. In Case 2, the number of recursive calls, a, is strictly smaller than b to the d. Then we get a better running time upper bound of O(n to the d). And when a is bigger than b to the d, we get the somewhat funky-looking running time of O(n raised to the log base b of a). We'll understand where that formula comes from a little later. So that's the master method. It's a little hard to interpret the first time you see it, so let's look at some concrete examples.

Let's begin with an algorithm whose running time we already know, namely merge sort. Again, what's so great about the master method is that all we have to do is identify the values of the three relevant parameters a, b, and d, and we're done; we just plug them in and we get the answer. So a, remember, is the number of recursive calls, and in merge sort we make two recursive calls, so a = 2. b is the factor by which the subproblem size is smaller than that of the original. Well, we recurse on half the array, so the subproblem size is half that of the original, so b = 2. And recall that outside of the recursive calls, all merge sort does is merge, and that's a linear-time subroutine. So the exponent d is 1, a reflection of the fact that the merge step runs in linear time. Remember, the key trigger that determines which of the three cases applies is the relationship between a and b to the d. Here a is clearly 2, and b to the d is 2 to the 1, which is also 2. So this puts us in Case 1, and remember, in Case 1 the running time is bounded above by O(n to the d log n). In our case d = 1, so this is just O(n log n), which, of course, we already knew. But at least this is a sanity check: the master method is reconfirming facts which we've already proven by direct means.
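To make the case analysis concrete, here is a minimal sketch of a helper that plugs a, b, and d into the three cases and reports the resulting bound. This is my own illustration, not something from the lecture slides; the function name and its string output are just for readability.

```python
import math

def master_method_bound(a, b, d):
    """Asymptotic bound for a recurrence of the form T(n) <= a * T(n / b) + O(n^d)."""
    if a == b ** d:
        return f"Case 1: O(n^{d} log n)"              # work is balanced across levels
    elif a < b ** d:
        return f"Case 2: O(n^{d})"                    # root of the recursion tree dominates
    else:
        return f"Case 3: O(n^{math.log(a, b):.2f})"   # leaves dominate: n^(log base b of a)

print(master_method_bound(a=2, b=2, d=1))  # merge sort: prints "Case 1: O(n^1 log n)"
```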
So let's look at a second example: the binary search algorithm in a sorted array. Now, we haven't talked explicitly about binary search and I'm not planning to, so if you don't know what binary search is, please read about it in a textbook or just look it up on the web; it'll be easy to find descriptions. But the upshot is, this is basically how you'd look up a phone number in a phone book. Now, I realize probably the youngest viewers of this video haven't actually had the experience of using a physical telephone book, but for the rest of you, as you know, you don't actually start with the As, then go to the Bs, and then to the Cs if you're looking for a given name. You more sensibly split the telephone book roughly in the middle, and depending on whether what you're looking for is earlier or later in the alphabet, you effectively recurse on the relevant half of the telephone book. Binary search is exactly the same algorithm when you're looking for a given element in a sorted array: you start in the middle of the array, and then you recurse on the left or the right half as appropriate, depending on whether the element you're looking for is smaller or bigger than the middle element. Now, the master method applies equally well to binary search, and it tells us what its running time is. In the next quiz you'll go through that exercise.

So the correct answer is the first one. To see why, let's recall what a, b, and d mean. a is the number of recursive calls. Now, in binary search you make only one recursive call; this is unlike merge sort. Remember, you just compare the element you're looking for to the middle element. If it's less than the middle element, you recurse on the left half; if it's bigger than the middle element, you recurse on the right half. So in any case there's only one recursive call, so a is merely 1 in binary search. Either way, you recurse on half the array, so just like in merge sort, the value of b is 2: you recurse on a problem of half the size. And outside of the recursive call, the only thing you do is one comparison: you just determine whether the element you're looking for is bigger or smaller than the middle element of the array you recursed on. So that's constant time outside the recursive call, giving us a value for d of 0. Just like merge sort, this is again Case 1 of the master method, because we have a = b to the d; both, in this case, are equal to 1. So the solution to our recurrence is O(n to the d log n), and since d = 0, this is simply O(log n). Again, many of you probably already know that the running time of binary search is O(log n), or you can figure that out easily. This is just using the master method as a sanity check, to reconfirm that it's giving us the answers that we expect.
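Since binary search is only described in words here, a minimal recursive sketch may help; this is my own code, not anything from the lecture or a particular textbook, and the function name and argument conventions are assumptions. The point is just to make the parameters visible: one recursive call (a = 1) on half of the array (b = 2), and constant work outside the call (d = 0).

```python
def binary_search(arr, target, lo=0, hi=None):
    """Return an index of target in the sorted list arr, or -1 if it is absent."""
    if hi is None:
        hi = len(arr)
    if lo >= hi:                      # base case: empty range, constant time
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:            # constant work outside the recursion: d = 0
        return mid
    elif arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)  # recurse on the right half
    else:
        return binary_search(arr, target, lo, mid)      # recurse on the left half

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```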
Let's now move on to some harder examples, beginning with the first recursive algorithm for integer multiplication. Remember, this is where we recurse on four different products of n/2-digit numbers and then recombine them in the obvious way, using padding by zeros and some linear-time additions. In this first integer multiplication algorithm, which does not make use of Gauss's trick and computes the four recursive calls naively, we have a, the number of recursive calls, equal to 4. Now, in each case, whenever we take a product of two smaller numbers, the numbers have n/2 digits, so that's half as many digits as we started with. So just like in the previous two examples, b = 2: the input size drops by a factor of 2 when we recurse. Now, how much work do we do outside of the recursive calls? Well, again, all the algorithm does outside the recursion is additions and padding by zeros, and that can be done in linear time. Linear time corresponds to a parameter value of d = 1.

So next we determine which case of the master method we're in. a = 4, and b to the d = 2, which in this case is less than a. So this corresponds to Case 3 of the master method. This is where we get the somewhat strange formula for the running time of the recurrence: T(n) = O(n to the log base b of a), which with our parameter values is n to the log base 2 of 4, also known as O(n squared). So let's compare this to the simple algorithm that we all learned back in grade school. Recall that the iterative algorithm for multiplying two integers also takes an n-squared number of operations. So this was a clever idea, to attack the problem recursively, but at least in the absence of Gauss's trick, where you just naively compute each of the four necessary products separately, you do not get any improvement over the iterative algorithm that you learned in grade school. Either way, it's an n-squared number of operations.

But what if we do make use of Gauss's trick, where we do only three recursive calls instead of four? Surely the running time won't be any worse than n squared, and hopefully it's going to be better. So I'll let you work out the details on this next quiz.

The correct answer to this quiz is the fourth option. It's not hard to see what the relevant values of a, b, and d are. Remember, the whole point of Gauss's trick is to reduce the number of recursive calls from four down to three, so the value of a is going to be 3. As usual, we're recursing on problems whose size is half that of the original, in this case n/2-digit numbers, so b remains 2. And just like in the more naive recursive algorithm, we only do linear work outside of the recursive calls; all that's needed is some additions and padding by zeros, so d is again 1. Those are the parameter values a, b, and d. Then we have to figure out which case of the master method we're in. We have a = 3 and b raised to the d = 2. So a has dropped by 1 relative to the more naive algorithm, but we're still in Case 3 of the master method: a is still bigger than b to the d. So the running time is governed by that rather exotic-looking formula, namely T(n) = O(n to the log base b of a), and the exponent is now log base 2 of 3 instead of log base 2 of 4. So the master method tells us the solution to this recurrence is O(n to the log base 2 of 3). And what is log base 2 of 3? Well, plug it into your computer or your calculator, and you'll find that it's roughly 1.59. So we've got a running time of roughly n to the 1.59, which is certainly better than n squared. It's not as fast as n log n, not as fast as the merge sort recurrence, which makes only two recursive calls, but it's quite a bit better than quadratic. So, summarizing, you did in fact learn a suboptimal algorithm for integer multiplication way back in grade school. You can beat the iterative algorithm using a combination of recursion plus Gauss's trick to save on the number of recursive calls.
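For concreteness, here is one way the three-recursive-call algorithm (Gauss's trick applied to integer multiplication, usually known as Karatsuba's algorithm) can be coded up. This is a minimal sketch of my own, assuming base-10 splitting and non-negative integers, not pseudocode taken from the lectures.

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y using three recursive calls."""
    if x < 10 or y < 10:                            # base case: a single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2       # split the digits roughly in half: b = 2
    x_hi, x_lo = divmod(x, 10 ** half)              # x = x_hi * 10^half + x_lo
    y_hi, y_lo = divmod(y, 10 ** half)              # y = y_hi * 10^half + y_lo
    hi_prod = karatsuba(x_hi, y_hi)                 # recursive call 1
    lo_prod = karatsuba(x_lo, y_lo)                 # recursive call 2
    # Gauss's trick: (x_hi + x_lo)(y_hi + y_lo) - hi_prod - lo_prod equals the cross
    # terms x_hi*y_lo + x_lo*y_hi, so one recursive call replaces two: a = 3, not 4.
    cross = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi_prod - lo_prod   # recursive call 3
    # Recombine with padding by zeros and additions: linear work outside the calls, d = 1.
    return hi_prod * 10 ** (2 * half) + cross * 10 ** half + lo_prod

print(karatsuba(1234, 5678))  # prints 7006652
```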
Let's quickly move on to our final two examples. Example number five is for those of you who watched the video on Strassen's matrix multiplication algorithm. Recall the salient properties of Strassen's algorithm. The key idea is similar to Gauss's trick for integer multiplication: you first set up the problem recursively and observe that the naive way to solve the problem recursively would lead to eight subproblems, but if you're clever about saving some computations, you can get it down to just seven recursive calls, seven subproblems. So a, in Strassen's algorithm, is equal to 7. As usual, each subproblem size is half that of the original, so b = 2. And the amount of work done outside of the recursive calls is linear in the matrix size, which is quadratic in n, the dimension, because a matrix has a quadratic number of entries in terms of its dimension. So n-squared work outside the recursive calls gives a value of d = 2. As far as which case of the master method we're in, well, it's the same as in the last couple of examples: a = 7, and b to the d = 4, which is less than a. So once again we're in Case 3, and the running time of Strassen's algorithm is T(n) = O(n to the log base 2 of 7), which is more or less n to the 2.81. And again this is a win, once we use the savings to get down to just seven recursive calls: this beats the naive iterative algorithm, which, recall, would require cubic time. So that's another win for clever divide and conquer, in matrix multiplication via Strassen's algorithm. And once again, the master method, just by plugging in parameters, tells us exactly what the right answer to this recurrence is.

For the final example, I feel a little guilty, because I've shown you five examples and none of them have triggered Case 2. We've had two in Case 1 of the master method and now three in Case 3. So this will be a fictitious recurrence, just to illustrate Case 2, but there are examples of recurrences that come up where Case 2 is the relevant one. So let's just look at the following recurrence. This recurrence is just like merge sort: we recurse twice, two recursive calls, each on half the problem size. The only difference is that in this recurrence we're working a little bit harder on the combine step. Instead of linear time outside of the recursive calls, we're doing a quadratic amount of work. Okay, so a = 2, b = 2, and d = 2. So b to the d = 4, strictly bigger than a, and that's exactly the trigger for Case 2. Now recall what the running time is in Case 2: it's simply O(n to the d), where d is the exponent in the combine step. In our case d is 2, so we get a running time of O(n squared). And you might find this a little counterintuitive: all we did relative to merge sort is change the combine step from linear to quadratic, and merge sort has a running time of n log n, so you might have expected the running time here to be n squared log n. But that would be an overestimate; the master method gives us a tighter upper bound and shows that it's only quadratic work. Put differently, the running time of the entire algorithm is governed by the work outside of the recursive calls in just the outermost call to the algorithm, just at the root of the recursion tree.
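If that tighter Case 2 bound seems surprising, one way to convince yourself is to unroll the recurrence numerically. The following throwaway check is my own, not part of the lecture; it assumes the base case T(1) = 1 and powers-of-two inputs.

```python
def T(n):
    """Unroll the Case 2 example T(n) = 2 * T(n / 2) + n^2, with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n ** 2

for k in (4, 8, 12, 16):
    n = 2 ** k
    print(f"n = 2^{k:2d}: T(n) / n^2 = {T(n) / n ** 2:.4f}")
# The ratio T(n) / n^2 settles near 2 rather than growing like log n, matching the
# Case 2 bound of O(n^2): the work at the root of the recursion tree dominates.
```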